The Wayback Machine - https://web.archive.org/web/20241110161319/https://dzone.com/users/298425/gonzalo123.html

Gonzalo Ayuso

CEO at gonzalo123 @gonzalo123

San Sebastián, ES

Joined Jun 2008

About

Gonzalo Ayuso is a Web Architect with more than 10 years of experience in web development, specialized in Open Source technologies. Experienced in delivering scalable, secure, and high-performing web solutions to large-scale enterprise clients. Blogs at gonzalo123.com.

Stats

Reputation: 851
Pageviews: 1.2M
Articles: 13
Comments: 6

Articles

Transforming TCP Sockets to HTTP With Go
Use Go to help your apps communicate!
May 2, 2021
· 11,350 Views · 2 Likes
Playing With TOTP (2FA) and Mobile Applications With Ionic
Create two-factor authentication with TOTP and Ionic.
August 7, 2019
· 4,938 Views · 1 Like
Playing With Grafana and Weather APIs
Want to learn how to use weather APIs with Grafana? Check out this tutorial on how to use the BeeWi temperature sensor and OpenWeatherMap API.
July 24, 2018
· 11,519 Views · 3 Likes
Control Humidity With a Raspberry Pi and IoT Devices
In this post we take a look at how you can control the humidity in a room using a Raspberry Pi, a switch, and a sensor with a dash of JavaScript and Python.
April 4, 2017
· 11,781 Views · 5 Likes
Notify Events From PostgreSQL to External Listeners
What happens when you need to call external programs from a PostgreSQL database? Read on for a solution.
July 6, 2016
· 29,575 Views · 4 Likes
Sharing Authentication Between Socket.io and a PHP Frontend (Using JSON Web Tokens)
Learn the steps and code to create an authentication link between Socket.io and a PHP frontend using JSON Web Tokens.
June 8, 2016
· 9,782 Views · 1 Like
Generating push notifications with Pushbullet and Silex
It's extremely easy to create a service provider and generate push notifications in your mobile app using Pushbullet. Here's how to do it with the Silex PHP framework.
August 25, 2015
· 4,358 Views · 1 Like
Microservice Container with Guzzle
These days I'm reading about microservices. The idea is great: instead of building a monolithic application with one language/framework, we create isolated services and build our application on top of those services (speaking HTTP between the services and the application). That means we'll have several microservices, we need to use them, and maybe sometimes swap one service for another. In this post I want to build a small container to handle those microservices, similar in spirit to a Dependency Injection Container. As we're going to speak HTTP, we need an HTTP client. We could build one with cURL, but in the PHP world we have Guzzle, a great HTTP client library. In fact Guzzle has something similar to the idea of this post (Guzzle services), but I want something simpler.

Imagine we have different services. One Silex service (PHP + Silex):

```php
use Silex\Application;

$app = new Application();

$app->get('/hello/{username}', function ($username) {
    return "Hello {$username} from silex service";
});

$app->run();
```

Another PHP service, this one using the Slim framework:

```php
use Slim\Slim;

$app = new Slim();

$app->get('/hello/:username', function ($username) {
    echo "Hello {$username} from slim service";
});

$app->run();
```

And finally one Python service using the Flask framework:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hello/<username>')
def show_user_profile(username):
    return "Hello %s from flask service" % username

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=5000)
```

Now, with our simple container, we can use one service or another:

```php
use Symfony\Component\Config\FileLocator;
use MSIC\Loader\YamlFileLoader;
use MSIC\Container;

$container = new Container();
$ymlLoader = new YamlFileLoader($container, new FileLocator(__DIR__));
$ymlLoader->load('container.yml');

echo $container->getService('flaskServer')->get('/hello/Gonzalo')->getBody() . "\n";
echo $container->getService('silexServer')->get('/hello/Gonzalo')->getBody() . "\n";
echo $container->getService('slimServer')->get('/hello/Gonzalo')->getBody() . "\n";
```

And that's all. You can see the project in my github account.
July 2, 2015
· 3,117 Views
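The container in the post above boils down to a registry mapping service names to base URLs. Here is a minimal sketch of that idea in plain JavaScript (the `ServiceContainer` class and its `url` helper are hypothetical stand-ins, not the post's MSIC library or Guzzle):

```javascript
// Minimal service-container sketch (hypothetical; mirrors the idea of the
// post's PHP container, without YAML loading or a real HTTP client).
class ServiceContainer {
  constructor() {
    this.services = new Map(); // service name -> base URL
  }
  register(name, baseUrl) {
    this.services.set(name, baseUrl);
  }
  getService(name) {
    const baseUrl = this.services.get(name);
    if (!baseUrl) throw new Error(`unknown service: ${name}`);
    // A real container would return an HTTP client bound to baseUrl;
    // here we just build the full request URL.
    return { url: (path) => baseUrl + path };
  }
}

const container = new ServiceContainer();
container.register('flaskServer', 'http://localhost:5000');
console.log(container.getService('flaskServer').url('/hello/Gonzalo'));
// http://localhost:5000/hello/Gonzalo
```

Swapping one service for another is then a one-line change in the registration, which is the point of the post.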
Building a Simple TCP Proxy Server with node.js
Today we're going to build a simple TCP proxy server. The scenario: we've got one host (the client) that establishes a TCP connection to another one (the remote).

client —> remote

We want to set up a proxy server in the middle, so the client will establish the connection with the proxy and the proxy will forward it to the remote, keeping in mind the remote's response as well. With node.js it's really simple to perform this kind of network operation.

client —> proxy —> remote

```javascript
var net = require('net');

var LOCAL_PORT = 6512;
var REMOTE_PORT = 6512;
var REMOTE_ADDR = "192.168.1.25";

var server = net.createServer(function (socket) {
    socket.on('data', function (msg) {
        console.log(' ** START **');
        console.log('<< From client to proxy ', msg.toString());
        var serviceSocket = new net.Socket();
        serviceSocket.connect(parseInt(REMOTE_PORT), REMOTE_ADDR, function () {
            console.log('>> From proxy to remote', msg.toString());
            serviceSocket.write(msg);
        });
        serviceSocket.on("data", function (data) {
            console.log('<< From remote to proxy', data.toString());
            socket.write(data);
            console.log('>> From proxy to client', data.toString());
        });
    });
});

server.listen(LOCAL_PORT);
console.log("TCP server accepting connections on port: " + LOCAL_PORT);
```

Simple, isn't it? Source code on github.
September 20, 2012
· 22,267 Views
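Stripped of the sockets, the proxy in the post above performs four hops per message. This synchronous sketch (the function and names are hypothetical, for illustration only) distills that data flow and the log lines the server prints:

```javascript
// Hypothetical distillation of the proxy's message flow: the same four
// log lines the node.js server prints, without any real sockets.
function proxyForward(msg, remote, log) {
  log('<< From client to proxy ' + msg);
  log('>> From proxy to remote ' + msg);
  const response = remote(msg);       // the remote answers
  log('<< From remote to proxy ' + response);
  log('>> From proxy to client ' + response);
  return response;                    // forwarded back to the client
}

const lines = [];
const reply = proxyForward('ping', (m) => m + '-pong', (l) => lines.push(l));
console.log(reply);        // ping-pong
console.log(lines.length); // 4
```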
Building A Simple API Proxy Server with PHP
These days I'm playing with Backbone and using a public API as a source. The web browser has one horrible feature: it doesn't allow you to fetch any external resource from a host other than ours, due to the cross-origin restriction. For example, if we have a server at localhost we cannot perform an AJAX request to a host different than localhost. Nowadays there is a header to allow it: Access-Control-Allow-Origin. The problem is that the remote server must set this header. For example, I was playing with GitHub's API, and GitHub doesn't send this header. If the server is my server it's pretty straightforward to add the header, but obviously I'm not the sysadmin of GitHub, so I cannot do it. What's the solution? One possible solution is, for example, to create a proxy server at localhost with PHP. With PHP we can use any remote API with cURL (I wrote about it here and here, for example). It's not difficult, but I asked myself: can we create a dummy proxy server with PHP that handles any request to localhost and redirects it to the real server, instead of creating one proxy for each request? Let's start. Probably there is an open source solution already (tell me if you know one), but I'm on holidays and I want to code a little bit (I know, it looks insane, but that's me). The idea is:

```php
...
$proxy->register('github', 'https://api.github.com');
...
```

so that when I type http://localhost/github/users/gonzalo123 it creates a proxy to https://api.github.com/users/gonzalo123. The request method is also important: if we send a POST request to localhost we want a POST request to GitHub too. This time we're not going to reinvent the wheel, so we will use Symfony components. We use Composer to start our project: we create a composer.json file with the dependencies:

```json
{
    "require": {
        "symfony/class-loader": "dev-master",
        "symfony/http-foundation": "dev-master"
    }
}
```

Now `php composer.phar install` and we can start coding. The front script will look like this:

```php
$proxy->register('github', 'https://api.github.com');
$proxy->run();

foreach ($proxy->getHeaders() as $header) {
    header($header);
}
echo $proxy->getContent();
```

As we can see, we can register as many servers as we want. In this example we only register GitHub. The application only has two classes: RestProxy, which extracts the information from the Request object and calls the real server through CurlWrapper:

```php
class RestProxy
{
    private $map = array();
    private $content;
    private $headers;

    public function __construct($request, $curl)
    {
        $this->request = $request;
        $this->curl = $curl;
    }

    public function register($name, $url)
    {
        $this->map[$name] = $url;
    }

    public function run()
    {
        foreach ($this->map as $name => $mapUrl) {
            return $this->dispatch($name, $mapUrl);
        }
    }

    private function dispatch($name, $mapUrl)
    {
        $url = $this->request->getPathInfo();
        if (strpos($url, $name) == 1) {
            $url = $mapUrl . str_replace("/{$name}", null, $url);
            $queryString = $this->request->getQueryString();
            switch ($this->request->getMethod()) {
                case 'GET':
                    $this->content = $this->curl->doGet($url, $queryString);
                    break;
                case 'POST':
                    $this->content = $this->curl->doPost($url, $queryString);
                    break;
                case 'DELETE':
                    $this->content = $this->curl->doDelete($url, $queryString);
                    break;
                case 'PUT':
                    $this->content = $this->curl->doPut($url, $queryString);
                    break;
            }
            $this->headers = $this->curl->getHeaders();
        }
    }

    public function getHeaders()
    {
        return $this->headers;
    }

    public function getContent()
    {
        return $this->content;
    }
}
```

RestProxy receives two instances in the constructor via dependency injection (CurlWrapper and Request). This architecture helps a lot with the tests, because we can mock both instances; very helpful when building RestProxy. RestProxy is registered on Packagist, so we can install it using Composer. First install Composer:

curl -s https://getcomposer.org/installer | php

and create a new project:

php composer.phar create-project gonzalo123/rest-proxy proxy

If we are using PHP 5.4 (if not, what are you waiting for?) we can run the built-in server:

cd proxy
php -S localhost:8888 -t www/

Now we only need to open a web browser and type http://localhost:8888/github/users/gonzalo123. The library is very minimal (it's enough for my experiment) and it doesn't allow authorization. Of course the full code is available on github.
September 2, 2012
· 18,790 Views
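The heart of the post above is a routing rule: strip the registered prefix from the local path and prepend the real server's base URL. A sketch of that rule in JavaScript (the `resolveProxyUrl` helper is hypothetical, not part of the rest-proxy library):

```javascript
// Hypothetical sketch of the proxy's routing rule: given a map of
// registered names to upstream base URLs, rewrite a local path.
function resolveProxyUrl(map, path) {
  for (const [name, base] of Object.entries(map)) {
    if (path.startsWith('/' + name + '/')) {
      // Drop the "/name" prefix and keep the rest of the path.
      return base + path.slice(name.length + 1);
    }
  }
  return null; // no registered server matches
}

const map = { github: 'https://api.github.com' };
console.log(resolveProxyUrl(map, '/github/users/gonzalo123'));
// https://api.github.com/users/gonzalo123
```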
5 Things You Should Check Now to Improve PHP Web Performance
We all know how financially important it is for your app’s server architecture to handle peaks of load. This article discusses 5 tips for improving PHP Web performance.
July 11, 2012
· 262,382 Views · 2 Likes
Working with Request objects in PHP
Normally when we work with web applications we need to handle request objects. Requests are the input of our applications. According to the golden rule of security (filter input, escape output) we cannot use the $_GET and $_POST superglobals. OK, we can use them, but we shouldn't. Normally web frameworks do this work for us, but not everything is a framework. Recently I worked on a small project without any framework, and I also needed to handle request objects, so I built this small library. Let me show it.

Basically the idea is the following: I want to filter my inputs, and I don't want to remember the whole name of every input variable. I want to define the request object once and use it everywhere. Imagine a small application with a simple input called param1. The URL will be test1.php?param1=11212 and we want to build this simple script:

```php
echo "param1: " . $_GET['param1'];
```

The problem with this script is that we aren't filtering the input. We also need to remember that the parameter name is param1; if we need to use it in another place we must remember its exact name. It may seem obvious, but it's easy to make mistakes. My proposal is the following: I create a simple PHP class called Request1 extending the RequestObject class.

Example 1: simple example

```php
class Request1 extends RequestObject
{
    public $param1;
}
```

Now if we create an instance of Request1, we can use the following code:

```php
$request = new Request1();
echo "param1: " . $request->param1;
```

I'm not going to explain the magic now, but with this simple script we will filter the input to the default type (string) and we will get the following outcomes:

test1.php?param1=11212
param1: 11212

test1.php?param1=hi
param1: hi

Maybe it's hard to explain with words, but examples show better what I want.

Example 2: data types and default values

```php
class Request2 extends RequestObject
{
    /**
     * @cast string
     */
    public $param1;
    /**
     * @cast string
     * @default default value
     */
    public $param2;
}

$request = new Request2();
echo "param1: ";
var_dump($request->param1);
echo "param2: ";
var_dump($request->param2);
```

Now we filter the param1 parameter to string and param2 to string too, but we assign a default value to the parameter if we don't have user input:

test2.php?param1=hi&param2=1
param1: string(2) "hi"
param2: string(1) "1"

test2.php?param1=1&param2=hi
param1: string(1) "1"
param2: string(2) "hi"

test2.php?param1=1
param1: string(1) "1"
param2: string(13) "default value"

Example 3: validators

```php
class Request3 extends RequestObject
{
    /** @cast string */
    public $param1;
    /** @cast integer */
    public $param2;

    protected function validate_param1(&$value)
    {
        $value = strrev($value);
    }

    protected function validate_param2($value)
    {
        if ($value == 1) {
            return false;
        }
    }
}

try {
    $request = new Request3();
    echo "param1: ";
    var_dump($request->param1);
    echo "param2: ";
    var_dump($request->param2);
} catch (RequestObjectException $e) {
    echo $e->getMessage();
    var_dump($e->getValidationErrors());
}
```

Now a more complex example. param1 is a string and param2 is an integer, but we also validate them: we alter the param1 value (a simple strrev) and we raise an exception if param2 is equal to 1.

test3.php?param2=2&param1=hi
param1: string(2) "ih"
param2: int(2)

test3.php?param1=hola&param2=1
validation error
array(1) { ["param2"]=> array(1) { ["value"]=> int(1) } }

Example 4: dynamic validations

```php
class Request4 extends RequestObject
{
    /** @cast string */
    public $param1;
    /** @cast integer */
    public $param2;
}

$request = new Request4(false); // disables validation in the constructor,
                                // so it will not raise any validation exception
$request->appendValidateTo('param2', function ($value) {
    if ($value == 1) {
        return false;
    }
});

try {
    $request->validateAll(); // now we perform the validation
    echo "param1: ";
    var_dump($request->param1);
    echo "param2: ";
    var_dump($request->param2);
} catch (RequestObjectException $e) {
    echo $e->getMessage();
    var_dump($e->getValidationErrors());
}
```

A more complex example: param1 is cast as string and param2 as integer again, with the same validation on param2 (an exception if the value equals 1), but now the validation rule isn't set in the class definition; we append it dynamically after instantiating the class.

test4.php?param1=hi&param2=2
param1: string(2) "hi"
param2: int(2)

test4.php?param1=hola&param2=1
validation error
array(1) { ["param2"]=> array(1) { ["value"]=> int(1) } }

Example 5: arrays and default params

```php
class Request5 extends RequestObject
{
    /** @cast arrayString */
    public $param1;
    /** @cast integer */
    public $param2;
    /**
     * @cast arrayString
     * @defaultArray "hello", "world"
     */
    public $param3;

    protected function validate_param2(&$value)
    {
        $value++;
    }
}

$request = new Request5();
echo "param1: ";
var_dump($request->param1);
echo "param2: ";
var_dump($request->param2);
echo "param3: ";
var_dump($request->param3);
```

A simple example again, but now the input parameters allow arrays and default values:

test5.php?param1[]=1&param1[]=2&param2[]=hi
param1: array(2) { [0]=> int(1) [1]=> int(2) }
param2: int(1)
param3: array(2) { [0]=> string(5) "hello" [1]=> string(5) "world" }

test5.php?param1[]=1&param1[]=2&param2=2
param1: array(2) { [0]=> string(1) "1" [1]=> string(1) "2" }
param2: int(3)
param3: array(2) { [0]=> string(5) "hello" [1]=> string(5) "world" }

RequestObject

The idea of the RequestObject class is very simple. When we create an instance of the class (in the constructor), we filter the input request (GET or POST depending on REQUEST_METHOD) with the filter_var_array and filter_var functions, according to the rules defined as annotations in the RequestObject class. Then we populate the member variables of the class with the filtered input. Now we can use the member variables, and auto-completion will work perfectly with our favourite IDE for the parameter names. OK, I know, I violate the encapsulation principle by allowing direct access to public member variables, but IMHO the final result is clearer than creating an accessor here. If it creeps someone out, we can discuss another solution. Full code here on github. What do you think?
October 18, 2011
· 8,766 Views
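The declare-once idea of the post above is language-agnostic. A minimal JavaScript sketch (the `filterRequest` helper and its schema format are hypothetical, standing in for the post's PHPDoc annotations): each parameter is declared once with a cast, an optional default, and an optional validator.

```javascript
// Hypothetical sketch of declarative request filtering: a schema declares
// each parameter once; any raw query object is filtered through it.
function filterRequest(schema, raw) {
  const out = {};
  for (const [name, rule] of Object.entries(schema)) {
    let value = raw[name] !== undefined ? raw[name] : rule.default;
    if (value !== undefined && rule.cast === 'integer') {
      value = parseInt(value, 10); // cast the raw string input
    }
    if (rule.validate && rule.validate(value) === false) {
      throw new Error(`validation error: ${name}`);
    }
    out[name] = value;
  }
  return out;
}

const schema = {
  param1: { cast: 'string', default: 'default value' },
  param2: { cast: 'integer', validate: (v) => v !== 1 }, // reject 1, as in example 3
};

console.log(filterRequest(schema, { param2: '2' }));
// { param1: 'default value', param2: 2 }
```

As in the PHP library, the rest of the code only ever touches the filtered object, so a misspelled parameter name fails loudly instead of silently reading unfiltered input.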
Real time monitoring PHP applications with websockets and node.js
The inspection of error logs is a common way to detect errors and bugs. We can also show errors on-screen on our development server, or even use great tools like FirePHP to show our PHP errors and warnings inside the Firebug console. That's cool, but we can only see our own session's errors/warnings. If we want to see someone else's errors we need to inspect the error log. tail -f is our friend, but we need to dig through the warnings of all sessions to find the ones we care about. Because of that I want to build a tool to monitor my PHP applications in real time. Let's start.

What's the idea? The idea is to catch all PHP errors and warnings at run time and send them to a node.js HTTP server. This server works similarly to a chat server, but our clients will only be able to read the server's logs. Basically the application has three parts: the node.js server, the web client (HTML5), and the server part (PHP). Let me explain each part a bit.

The node server

Basically it has two parts: an HTTP server to handle the PHP errors/warnings and a WebSocket server to manage the real-time communication with the browser. When I say I'm using WebSockets, that means the web client will only work in a browser with WebSocket support, like Chrome. Anyway, it's pretty straightforward to swap the WebSocket server for a socket.io server to support every browser, but WebSockets seem to be the future, so I will use them in this example.

The HTTP server:

```javascript
http.createServer(function (req, res) {
    var remoteAdrress = req.socket.remoteAddress;
    if (allowedIP.indexOf(remoteAdrress) >= 0) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Ok\n');
        try {
            var parsedUrl = url.parse(req.url, true);
            var type = parsedUrl.query.type;
            var logString = parsedUrl.query.logString;
            var ip = eval(parsedUrl.query.logString)[0];
            if (inspectingUrl == "" || inspectingUrl == ip) {
                clients.forEach(function (client) {
                    client.write(logString);
                });
            }
        } catch (err) {
            console.log("500 to " + remoteAdrress);
            res.writeHead(500, {'Content-Type': 'text/plain'});
            res.end('System Error\n');
        }
    } else {
        console.log("401 to " + remoteAdrress);
        res.writeHead(401, {'Content-Type': 'text/plain'});
        res.end('Not Authorized\n');
    }
}).listen(httpConf.port, httpConf.host);
```

And the WebSocket server:

```javascript
var inspectingUrl = undefined;
ws.createServer(function (websocket) {
    websocket.on('connect', function (resource) {
        var parsedUrl = url.parse(resource, true);
        inspectingUrl = parsedUrl.query.ip;
        clients.push(websocket);
    });
    websocket.on('close', function () {
        var pos = clients.indexOf(websocket);
        if (pos >= 0) {
            clients.splice(pos, 1);
        }
    });
}).listen(wsConf.port, wsConf.host);
```

If you want to know more about node.js and see more examples, have a look at the great site http://nodetuts.com/, where Pedro Teixeira shows examples and node.js tutorials. In fact my node.js HTTP + WebSocket server is a mix of two tutorials from that site.

The web client

The web client is a simple WebSockets application. We handle the WebSocket connection, reconnect if it dies, and a bit more. It's based on the node.js chat demo. The HTML page itself is minimal: a "Real time monitor" toolbar with the socket status, the inspected IP, and a message counter. And the JavaScript magic:

```javascript
var timeout = 5000;
var wsServer = '192.168.2.2:8880';
var unread = 0;
var focus = false;
var count = 0;

function updateCount() {
    count++;
    $("#count").text(count);
}

function cleanString(string) {
    return string.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function updateUptime() {
    var now = new Date();
    $("#uptime").text(now.toRelativeTime());
}

function updateTitle() {
    if (unread) {
        document.title = "(" + unread.toString() + ") Real time " + selectedIp + " monitor";
    } else {
        document.title = "Real time " + selectedIp + " monitor";
    }
}

function pad(n) {
    return ("0" + n).slice(-2);
}

function startWs(ip) {
    try {
        ws = new WebSocket("ws://" + wsServer + "?ip=" + ip);
        $('#toolbar').css('background', '#65A33F');
        $('#socketStatus').html('Connected to ' + wsServer);
        // listen for browser events so we know to update the document title
        $(window).bind("blur", function () {
            focus = false;
            updateTitle();
        });
        $(window).bind("focus", function () {
            focus = true;
            unread = 0;
            updateTitle();
        });
    } catch (err) {
        setTimeout(startWs, timeout);
    }

    ws.onmessage = function (event) {
        unread++;
        updateTitle();
        var now = new Date();
        var hh = pad(now.getHours());
        var mm = pad(now.getMinutes());
        var ss = pad(now.getSeconds());
        var timeMark = '[' + hh + ':' + mm + ':' + ss + '] ';
        logString = eval(event.data);
        var host = logString[0];
        var line = timeMark + host + " " + logString[1];
        if (logString[2]) {
            line += " " + logString[2];
        }
        $('#log').append(line);
        updateCount();
        window.scrollBy(0, 100000000000000000);
    };

    ws.onclose = function () {
        $('#toolbar').css('background', '#933');
        $('#socketStatus').html('Disconnected');
        setTimeout(function () { startWs(selectedIp) }, timeout);
    }
}

$(document).ready(function () {
    startWs(selectedIp);
});
```

The server part

The server part silently handles all PHP warnings and errors and sends them to the node server. The idea is to place a minimal line of PHP code at the beginning of the application that we want to monitor. Imagine the following piece of PHP code:

```php
$a = $var[1];
$a = 1/0;

class Dummy
{
    static function err()
    {
        throw new Exception("error");
    }
}

Dummy1::err();
```

It will throw:

A notice: Undefined variable: var
A warning: Division by zero
An uncaught exception 'Exception' with message 'error'

So we add our small library to catch those errors and send them to the node server:

```php
include('client/NodeLog.php');
NodeLog::init('192.168.2.2');

$a = $var[1];
$a = 1/0;

class Dummy
{
    static function err()
    {
        throw new Exception("error");
    }
}

Dummy1::err();
```

The script works the same way as the first version, but if we start our node.js server in a console:

$ node server.js
HTTP server started at 192.168.2.2::5672
Web Socket server started at 192.168.2.2::8880

we will see those errors/warnings in real time when we open our browser. The original post includes a small screencast of the working application.

This is the server-side library:

```php
class NodeLog
{
    const NODE_DEF_HOST = '127.0.0.1';
    const NODE_DEF_PORT = 5672;

    private $_host;
    private $_port;

    /**
     * @param String $host
     * @param Integer $port
     * @return NodeLog
     */
    static function connect($host = null, $port = null)
    {
        return new self(is_null($host) ? self::$_defHost : $host, is_null($port) ? self::$_defPort : $port);
    }

    function __construct($host, $port)
    {
        $this->_host = $host;
        $this->_port = $port;
    }

    /**
     * @param String $log
     * @return Array array($status, $response)
     */
    public function log($log)
    {
        list($status, $response) = $this->send(json_encode($log));
        return array($status, $response);
    }

    private function send($log)
    {
        $url = "http://{$this->_host}:{$this->_port}?logString=" . urlencode($log);
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        return array($status, $response);
    }

    static function getip()
    {
        $realip = '0.0.0.0';
        if ($_SERVER) {
            if (isset($_SERVER['HTTP_X_FORWARDED_FOR']) && $_SERVER['HTTP_X_FORWARDED_FOR']) {
                $realip = $_SERVER["HTTP_X_FORWARDED_FOR"];
            } elseif (isset($_SERVER['HTTP_CLIENT_IP']) && $_SERVER["HTTP_CLIENT_IP"]) {
                $realip = $_SERVER["HTTP_CLIENT_IP"];
            } else {
                $realip = $_SERVER["REMOTE_ADDR"];
            }
        } else {
            if (getenv('HTTP_X_FORWARDED_FOR')) {
                $realip = getenv('HTTP_X_FORWARDED_FOR');
            } elseif (getenv('HTTP_CLIENT_IP')) {
                $realip = getenv('HTTP_CLIENT_IP');
            } else {
                $realip = getenv('REMOTE_ADDR');
            }
        }
        return $realip;
    }

    public static function getErrorName($err)
    {
        $errors = array(
            E_ERROR             => 'ERROR',
            E_RECOVERABLE_ERROR => 'RECOVERABLE_ERROR',
            E_WARNING           => 'WARNING',
            E_PARSE             => 'PARSE',
            E_NOTICE            => 'NOTICE',
            E_STRICT            => 'STRICT',
            E_DEPRECATED        => 'DEPRECATED',
            E_CORE_ERROR        => 'CORE_ERROR',
            E_CORE_WARNING      => 'CORE_WARNING',
            E_COMPILE_ERROR     => 'COMPILE_ERROR',
            E_COMPILE_WARNING   => 'COMPILE_WARNING',
            E_USER_ERROR        => 'USER_ERROR',
            E_USER_WARNING      => 'USER_WARNING',
            E_USER_NOTICE       => 'USER_NOTICE',
            E_USER_DEPRECATED   => 'USER_DEPRECATED',
        );
        return $errors[$err];
    }

    private static function set_error_handler($nodeHost, $nodePort)
    {
        set_error_handler(function ($errno, $errstr, $errfile, $errline) use ($nodeHost, $nodePort) {
            $err = NodeLog::getErrorName($errno);
            /*
            if (!(error_reporting() & $errno)) {
                // This error code is not included in error_reporting
                return;
            }
            */
            $log = array(
                NodeLog::getip(),
                "{$err} {$errfile}:{$errline}",
                nl2br($errstr)
            );
            NodeLog::connect($nodeHost, $nodePort)->log($log);
            return false;
        });
    }

    private static function register_exceptionHandler($nodeHost, $nodePort)
    {
        set_exception_handler(function ($exception) use ($nodeHost, $nodePort) {
            $exceptionName = get_class($exception);
            $message = $exception->getMessage();
            $file = $exception->getFile();
            $line = $exception->getLine();
            $trace = $exception->getTraceAsString();
            $msg = count($trace) > 0 ? "Stack trace:\n{$trace}" : null;
            $log = array(
                NodeLog::getip(),
                nl2br("Uncaught exception '{$exceptionName}' with message '{$message}' in {$file}:{$line}"),
                nl2br($msg)
            );
            NodeLog::connect($nodeHost, $nodePort)->log($log);
            return false;
        });
    }

    private static function register_shutdown_function($nodeHost, $nodePort)
    {
        register_shutdown_function(function () use ($nodeHost, $nodePort) {
            $error = error_get_last();
            if ($error['type'] == E_ERROR) {
                $err = NodeLog::getErrorName($error['type']);
                $log = array(
                    NodeLog::getip(),
                    "{$err} {$error['file']}:{$error['line']}",
                    nl2br($error['message'])
                );
                NodeLog::connect($nodeHost, $nodePort)->log($log);
            }
            echo NodeLog::connect($nodeHost, $nodePort)->end();
        });
    }

    private static $_defHost = self::NODE_DEF_HOST;
    private static $_defPort = self::NODE_DEF_PORT;

    /**
     * @param String $host
     * @param Integer $port
     * @return NodeLog
     */
    public static function init($host = self::NODE_DEF_HOST, $port = self::NODE_DEF_PORT)
    {
        self::$_defHost = $host;
        self::$_defPort = $port;
        self::register_exceptionHandler($host, $port);
        self::set_error_handler($host, $port);
        self::register_shutdown_function($host, $port);
        $node = self::connect($host, $port);
        $node->start();
        return $node;
    }

    private static $time;
    private static $mem;

    public function start()
    {
        self::$time = microtime(TRUE);
        self::$mem = memory_get_usage();
        $log = array(NodeLog::getip(), "Start >>>> {$_SERVER['REQUEST_URI']}");
        $this->log($log);
    }

    public function end()
    {
        $mem = (memory_get_usage() - self::$mem) / (1024 * 1024);
        $time = microtime(TRUE) - self::$time;
        $log = array(NodeLog::getip(), "End <<<< mem: {$mem} time {$time}");
        $this->log($log);
    }
}
```

And of course the full code on GitHub: RealTimeMonitor.
May 15, 2011
· 28,705 Views
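The payload travelling from PHP to the node server in the post above is a URL-encoded JSON array of [ip, summary, detail]. A small sketch of that round trip in JavaScript (the helper names are hypothetical; the post itself builds the query string with json_encode/urlencode on the PHP side and reads it back with eval on the node side):

```javascript
// Hypothetical encode/decode pair for the monitor's log payload,
// assumed shape: a JSON array of [ip, summary, detail].
function encodeLog(ip, summary, detail) {
  return 'logString=' + encodeURIComponent(JSON.stringify([ip, summary, detail]));
}

function decodeLog(queryString) {
  // URLSearchParams handles the percent-decoding for us.
  const raw = new URLSearchParams(queryString).get('logString');
  return JSON.parse(raw);
}

const qs = encodeLog('192.168.2.2', 'WARNING index.php:12', 'Division by zero');
console.log(decodeLog(qs));
// [ '192.168.2.2', 'WARNING index.php:12', 'Division by zero' ]
```

Parsing with JSON.parse instead of eval, as here, avoids executing arbitrary code from the query string; the post's server trusts its allowedIP list instead.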

Comments

Sooo, what about Google Android and phoneME?

Nov 15, 2011 · Mr B Loid

Interesting. Now: Selenium2 or Zombie.js. I'm playing with Zombie.js and it's really good. I need to have a look at the new version of Selenium too.
Performance analysis of Stored Procedures with PDO and PHP

May 03, 2011 · Gonzalo Ayuso

I know it's easy to turn your app into a mess if you mix code in stored procedures and outside them. But in my opinion it is (or at least can be) a good practice (it's a debatable opinion). Of course we need balance. Maintainability problems can appear everywhere. Stored procedures can help us mess up our architecture, but we can handle that if we want. Triggers are evil, but stored procedures are not ;)
Ajax or DHTML & JavaScript hell

Apr 05, 2011 · Thomas Hansen

When I need to create my own custom annotations I like to use Addendum: a great library, easy to use and powerful. But in this case I only needed to parse the type in a "standard" PHPDoc string, not new custom annotations. Anyway, I will have a look at Doctrine2's internals. I also need to have a look at another recommendation: DocBlox.
SOA Lifecycle All-in-One Guide

Jan 20, 2011 · Michael Meehan

Regarding "Keeping objects in memory between requests": I wrote a post with a crazy experiment about something like that. I created database connection pooling with a Gearman worker. It works, but I don't think it's suitable for production, at least in the current version (not sure about the performance). The idea is: instead of using a PDO object, use something similar with the same interface but speaking to the Gearman worker (where the real PDO object resides). With this technique we can keep one object instance in memory between requests without serialization (PDO objects, for example, aren't serializable). What do you think about it?

