Wednesday, December 29, 2010

Skinput: turning your arm into a touchscreen



I received a copy of ACM XRDS Magazine at home, and wow, one of the articles in this issue is really impressive. It's titled "Interfaces on the Go".

As I said, I was amazed, because it's something I never expected: Microsoft and Carnegie Mellon's HCI Institute have been working hard on this research.
I see now that Microsoft is about more than just the Windows OSes and Office software.

The article introduces a new concept, micro-interactions: interactions that take very little time to initiate and complete (typically less than four seconds, measured against typical interactions with cellphones or similar devices), so that the user can quickly return to the task at hand.
There are a couple of techniques that are being developed:
  1. Muscle-computer interfaces.
  2. Bio-acoustic sensing.
The most impressive technique to me is bio-acoustic sensing, because it allows the skin (yes, the skin!) to be used as a finger input surface; that's why it's called Skinput. In any case, this is one of those things a picture explains better than any description.

Please forgive any misspellings; I'm trying to improve my English, so I decided to start posting in this language.

Original source: XRDS Magazine, summer 2010 issue 16.4

Monday, December 20, 2010

Packetfence: blocking unwanted traffic on the LAN

Now that I'm a "fan" of linux.com on Facebook, I saw this little tutorial about blocking traffic. I decided to post it because I'm now a network manager and might use this information in the not-so-distant future. Here is the post.


Packetfence is a very powerful Network Access Control tool. Using Packetfence you can control and block unwanted traffic on your network. Want to block P2P services like BitTorrent, or keep mobile devices like iPhones and Android phones off your wireless network? Packetfence gives you the kind of fine-grained control you're looking for.

Packetfence is officially supported on Red Hat Enterprise Linux (RHEL) and CentOS. With those two distributions you can quickly get Packetfence up and running (unlike on Ubuntu, which I recently outlined in "Install and Configure Packetfence on Ubuntu Linux"). But you are not relegated to the command line only (as you will find on Ubuntu). With Red Hat or CentOS you will find a powerful web-based tool at your fingertips. With this tool you can easily manage Packetfence. But not all aspects of Packetfence can be handled from the web-based GUI.
 
Assumptions

What I want to demonstrate is how to block specific traffic on your Packetfence-enabled network. I will assume just a few items:
  • You already have Packetfence installed and working properly (I will be demonstrating on CentOS 5).
  • You have administrative rights to the machine Packetfence is installed on.

That's all. I am going to demonstrate how to block two types of traffic. First I am going to demonstrate how to block P2P traffic (such as Limewire) which will be followed by how to block iPhone/Android phone access to your network.
 
Adding the Final Piece: Snort

In order for Packetfence to block specific services or devices you have to enlist the help of Snort. Snort is a network intrusion detection system. In order to install Snort, follow these steps:
 
  1. Open up a terminal window.
  2. su to the root user or use sudo.
  3. Issue the command yum install snort.
With Snort installed you are almost ready. However, you will need to get rules so that Snort knows what is an intrusion. By default Snort installs without any rules. In order to add rules you have two options:
 
  • Write your own rules.
  • Download and install pre-configured rules from the Snort website.
I highly recommend you opt for the latter (as writing your own rules will take a lot of time and effort). To do this you will need to register on the Snort web site. You can sign up for the free account and still download rules. Once you have signed up and activated your account, download the rules and then follow these steps:
 
  1. Open up a terminal window.
  2. Change to the directory the snortrules-snapshot-XXX.tar.gz file was downloaded to (Where XXX is the release number that matches the Snort release installed on your machine.)
  3. Issue the command tar xvzf snortrules-snapshot-XXX.tar.gz (Where XXX is the release number).
  4. Change into the newly created rules folder.
  5. Issue the command cp * /etc/snort/rules/
You now have all the rules you need for Snort to work. Start up Snort with the command /etc/rc.d/init.d/snortd start. You should now see /var/log/messages starting to fill up with information from Snort. Now it's time to re-configure Packetfence.
 
Enable Snort

Since you just added Snort, you need to make Packetfence aware. To do this open up the /usr/local/pf/conf/pf.conf file and add the following:

[services]

snort=/usr/sbin/snort


Save the file and restart Packetfence with the command /usr/local/pf/bin/pfcmd service pf restart — Packetfence is now using Snort.
 
Choosing the Correct Template

Before we can get into the actual configuration and blocking of services/devices, we first have to re-configure Packetfence to run in a mode other than testing. In the first article I illustrated how to configure and start Packetfence in testing mode. This is great for making sure things are working as Packetfence will only log events (not act upon them). In order to get Packetfence to actually act upon a violation, you have to reconfigure it to run using a different template. The templates you can choose from are:
 
  • Test mode
  • Registration
  • Detection
  • Registration & Detection
  • Registration, Detection & Scanning
  • Session-based Authentication

     
The template you want to choose is Registration, Detection & Scanning. In order to do that open up a terminal window and do the following:
 
  • su to the root user.
  • Change to the /usr/local/pf directory.
  • Issue the command ./configurator.pl.
  • Select option [5] for Registration, Detection & Scanning.
  • Answer all of the questions (this will be similar to your initial installation, as shown in the first article).
  • Now cd into the /usr/local/pf/bin directory.
  • Issue the command ./pfcmd service pf restart.
     
Packetfence is now working in the proper mode to act against violations. However, it doesn't yet know what counts as a violation. For that we have to turn to the /usr/local/pf/conf/violations.conf file.
 
Enabling Specific Violations

In the violations.conf file you will see a long laundry list of violations. Each violation section looks like:

[2000334]
desc=P2P (BitTorrent)
priority=8
url=/content/index.php?template=p2p
disable=Y
max_enable=1
trigger=Detect::2000334,Detect::2000357,Detect::2000369


The above violation is for BitTorrent connections. As you can see this violation, in its default state, is disabled. To enable this violation all you need to do is change the line:

disable=Y

to

disable=N

You will find, listed in the violations, the P2P violation and the Android device violation. Enable both of those, save the file, and restart Packetfence. Now, any device that triggers one of the enabled violations will be denied access, and the event will be logged.
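Assuming the stock violations.conf layout shown above, flipping a single section from disabled to enabled can be scripted with GNU sed. This is a sketch only: it works on a local sample file so it is safe to try anywhere; on a real install you would point PF_CONF at /usr/local/pf/conf/violations.conf and back the file up first.

```shell
# Sketch: enable one violation section without touching the others.
# PF_CONF would be /usr/local/pf/conf/violations.conf on a real system;
# a local sample file is used here so the snippet is safe to run.
PF_CONF=violations.conf
cat > "$PF_CONF" <<'EOF'
[2000334]
desc=P2P (BitTorrent)
disable=Y
[2000335]
desc=Some other violation
disable=Y
EOF
# Within the [2000334] section only (from its header up to the next
# "[" header), flip disable=Y to disable=N.
sed -i '/^\[2000334\]/,/^\[/ s/^disable=Y/disable=N/' "$PF_CONF"
grep -B2 '^disable=N' "$PF_CONF"
```

The address range keeps the substitution from leaking into the sections below, so only the violation you name gets enabled.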
 
Web Interface



As I mentioned, Packetfence does come with a spiffy Web interface that allows you to manage your Packetfence-protected network. To access this tool open up your browser and point it to https://ADDRESS_TO_SERVER:1443. When you arrive at this site you will have to log in with your admin credentials (configured during installation of Packetfence). Upon successful authentication you will find yourself at the Packetfence web interface (see Figure 1). Here you can manage each node on your network, add users (for authentication), start/stop various pieces of Packetfence, and configure Packetfence.

From the Violation tab you can even enable/disable violations using a simple drop-down to select the particular violation you want to enable.
 
Final Thoughts


As far as Network Access Control goes, you will be hard-pressed to find a more powerful tool than Packetfence. Not only is it powerful, but once installed and configured it is easy to administer and manage. Of course, there is so much more that can be done with Packetfence. For more information read through the outstanding guides offered on the Packetfence Documentation page.

Original source can be found at linux.com

Wednesday, December 15, 2010

Data Recovery



People often ask us whether we can bring back their data from a hard disk (or any storage device) because they "accidentally" deleted it. Ugh, that really annoys me.

Surfing the web, I've found a pair of free and open source tools that might help solve that:

  • PhotoRec is file data recovery software designed to recover lost files including video, documents and archives from hard disks, CD-ROMs, and lost pictures (thus the Photo Recovery name) from digital camera memory. PhotoRec ignores the file system and goes after the underlying data, so it will still work even if your media's file system has been severely damaged or reformatted. 
  • TestDisk is powerful free data recovery software! It was primarily designed to help recover lost partitions and/or make non-booting disks bootable again when these symptoms are caused by faulty software, certain types of viruses or human error (such as accidentally deleting a Partition Table). Partition table recovery using TestDisk is really easy. 
Both programs run on a variety of operating systems. This is the most important point, because many recovery tools run only on Windows and consequently support only the FAT and NTFS file systems. You won't run into that limitation here, because the file systems supported by PhotoRec and TestDisk are:
  • BeFS ( BeOS ) 
  • BSD disklabel ( FreeBSD/OpenBSD/NetBSD ) 
  • CramFS, Compressed File System 
  • DOS/Windows FAT12, FAT16 and FAT32 
  • Windows exFAT 
  • HFS, HFS+ and HFSX, Hierarchical File System 
  • JFS, IBM's Journaled File System 
  • Linux ext2, ext3 and ext4 
  • Linux LUKS encrypted partition 
  • Linux RAID md 0.9/1.0/1.1/1.2
    -RAID 1: mirroring
    -RAID 4: striped array with parity device
    -RAID 5: striped array with distributed parity information
    -RAID 6: striped array with distributed dual redundancy information
  • Linux Swap (versions 1 and 2) 
  • LVM and LVM2, Linux Logical Volume Manager 
  • Mac partition map 
  • Novell Storage Services NSS 
  • NTFS ( Windows NT/2000/XP/2003/Vista/2008/7 ) 
  • ReiserFS 3.5, 3.6 and 4 
  • Sun Solaris i386 disklabel 
  • Unix File System UFS and UFS2 (Sun/BSD/...) 
  • XFS, SGI's Journaled File System
     
Need more details?
You can download both for free here.

Thursday, December 9, 2010

IPTABLES Reference



At the request of my colleague spark, here is an IPTABLES reference; you can download it here.

Note: it's not the one I used when I was learning, but it's very good; I've already reviewed it.

Monday, December 6, 2010

Advanced Networking Course with Cisco Technology



As usual, here is the material for the advanced networking course with Cisco technology that I taught at the Cluster facilities.

If anyone has any comments, please post them.

For anyone else who wants to download the material: be aware that it does not include the labs, only an integration case to be solved at the end of this module.

The material can be downloaded here.

Sunday, November 28, 2010

Linux Ebooks


Hi folks, I have a new email in my inbox from the tips-linux.net team. Honestly, it's kind of weird, because the entire email body just says "You might find these books useful", followed by the list shown below:


Finally, a big thank-you to our friends at tips-linux.net. There's no original source today because, as I said, it arrived straight to my inbox.

I hope somebody finds this helpful.

Thursday, November 11, 2010

Cisco Networking Course Material


Hello everyone, here are the links to download the slides covering what we have seen so far in the networking course.
For now I will only upload the slides; I will hand out the labs during class hours.

Download the material here

Thursday, October 14, 2010

Neuroimaging research in Debian

OMG, while surfing the web I found this interesting article. I didn't know that Debian had a branch dedicated to neuroimaging development; it's really surprising...
Debian 6.0 “squeeze” will be the first GNU/Linux distribution release ever to offer comprehensive support for magnetic resonance imaging (MRI) based neuroimaging research. It comes with up-to-date software for structural image analysis (e.g. ants), diffusion imaging and tractography (e.g. mrtrix), stimulus delivery (e.g. psychopy), MRI sequence development (e.g. odin), as well as a number of versatile data processing and analysis suites (e.g. nipype). Moreover, this release will have built-in support for all major neuroimaging data formats.
Please see the Debian Science and Debian Med task pages for a comprehensive list of included software and the NeuroDebian webpage for further information.
NeuroDebian at the Society for Neuroscience meeting 2010
The NeuroDebian team will run a Debian booth at the Society for Neuroscience meeting (SfN2010) that will take place November 13-17 in San Diego, USA. The annual meeting of the Society for Neuroscience is one of the largest neuroscience conferences in the world, with over 30,000 attendees. Researchers, clinicians, and leading experts discuss the latest findings about the brain, nervous system, and related disorders.

Friday, October 1, 2010

Nagios/Centreon: Tutorials and Documentation

Here is a page with excellent documentation on the monitoring systems I know and have used: Nagios, Centreon, and Cacti.
Note that this documentation is in French and I won't be translating it for now; if anyone needs help translating any of it, post a comment and we'll find a way to help.


For those who speak neither Spanish nor French: I'm sharing this link to documentation for Centreon, Nagios, and Cacti. I've already reviewed it and can tell you it's really good. By the way, I'm not planning to translate these docs into Spanish or English, since I already have a job, but if anyone wants help getting a translation started, just post a comment and I'll try to help.
 

You can find a couple of links here:
http://blog.nicolargo.com/nagios-tutoriels-et-documentations
http://www.regis-senet.fr/publications/systeme-exploitation/supervision_reseau.rar 

Friday, September 24, 2010

Bandwidth monitoring with iptables


Most of the time we use iptables to set up a firewall on a machine, but iptables also provides packet and byte counters. Every time an iptables rule is matched by incoming or outgoing data streams, the software tracks the number of packets and the amount of data that passes through the rules.

It is easy to make use of this feature and create a number of "pass-through rules" in the firewall. These rules do not block or reroute any data, but rather keep track of the amount of data passing through the machine. By using this feature, we can build a simple, effective bandwidth monitoring system that does not require additional software.

Depending on how the firewall rules are set up, the setup for bandwidth monitoring may be very simple or very complex. For a desktop computer, you may need to create only two rules to log the total input and output. A system acting as a router could be set up with additional rules to show the totals for one or more subnets, right down to the individual IP address within each subnet. In addition to knowing exactly how much bandwidth each host and subnet on the network is using, this system could be used for billing or chargeback purposes as well.

Rules setup

The rules setup itself is quick and straightforward, and takes only a few minutes. Obviously, you need to be root or use sudo to insert iptables rules.

The examples in this article are based on a router that provides Internet service to various towns. The iptables rules keep track of how much bandwidth each town uses and how much bandwidth each customer in that town uses. At the end of each month, an administrator checks the counters. Individuals who used more than their allotment get billed for the overage, the counters are reset to zero, and the process repeats at the beginning of the next month.

The IP addresses in this article are modified from the real addresses. We'll use the private IP space 192.168.0.0/16, subnetted into smaller blocks.

First, we will create two custom chains for the two towns and put town-specific rules in them. This will keep the built-in FORWARD chain relatively clean and easy to read. In this example, the FORWARD chain will only provide the global counters (all customers combined on a per-town basis).

iptables -N town-a
iptables -N town-b

The next data element is the total bandwidth counter. Because this machine is a router only, the INPUT and OUTPUT chains are of little interest. This machine will not be generating a significant amount of bandwidth (i.e., it is not serving as a mail or Web server), nor will it be receiving significant uploads from other hosts.

Total bandwidth downloaded by and uploaded to the two towns combined:

iptables -A FORWARD

This is the easiest of rules. The rule will match any source and any destination. Everything that is being passed through this router matches this rule and will provide the total of combined downloaded and uploaded data.

We also want to see how much each town downloads and uploads separately:

# Town A Downloads
iptables -A FORWARD -d 192.168.1.0/26 -j town-a

# Town A Uploads
iptables -A FORWARD -s 192.168.1.0/26 -j town-a

# Town B Downloads
iptables -A FORWARD -d 192.168.1.64/27 -j town-b

# Town B Uploads
iptables -A FORWARD -s 192.168.1.64/27 -j town-b

The use of source and destination in the above rules may be a source of confusion. Destinations are often equated with uploads, and sources are downloads. This would be true whether the data was destined for the router or originated from the router itself.

In this application, however, we reverse the perspective. This router is forwarding (uploading) data to a destination, but from a customer perspective, data is being received. In other words, the customer is downloading that data. When dealing with customers, the terminology is data they downloaded, not what the router uploaded to them. This is why in the FORWARD chain, the terms destination and source typically have reversed meanings.

The rules created above give us separate totals for all downloads to and uploads from each individual town. This is accomplished by matching the source and destination of all traffic through the router for a town's specific subnet. After a rule is matched, the -j option invokes a jump to one of the custom chains. These custom chains can then be used to add additional rules pertaining to the subnet. For instance, rules can be created for each individual IP address in that subnet to track bandwidth on a per-host basis:

# Town A, Host 192.168.1.10 Download
iptables -A town-a -d 192.168.1.10

# Town A, Host 192.168.1.10 Upload
iptables -A town-a -s 192.168.1.10

You could repeat this process for every IP address for all towns within the subnet.
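Typing those two rules for every host gets tedious for a full subnet, so a small loop can generate them. A sketch under the article's addressing assumptions (Town A's 192.168.1.0/26, with .1 assumed reserved for the router, leaving usable hosts .2 through .62); it writes the commands to a file for review instead of applying them directly:

```shell
# Sketch: generate per-host accounting rules for Town A's /26.
# Hosts .2-.62 are assumed usable (.1 reserved for the router).
CHAIN=town-a
: > town-a-rules.sh
for last in $(seq 2 62); do
    ip="192.168.1.$last"
    echo "iptables -A $CHAIN -d $ip" >> town-a-rules.sh   # download counter
    echo "iptables -A $CHAIN -s $ip" >> town-a-rules.sh   # upload counter
done
wc -l town-a-rules.sh   # 61 hosts x 2 rules = 122 lines
```

After reviewing the generated file, you would apply it as root with sh town-a-rules.sh.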

Bandwidth statistics

Viewing the current bandwidth usage is a matter of running iptables with the -L and -v options. The -L outputs the statistics for a chain (or all chains if none is provided). The -v option provides verbose output, including the packet and byte counters that we are interested in. I recommend using the -n option as well to prevent DNS lookups, meaning iptables will show the IP addresses without attempting to resolve the hostnames for the IP addresses, which would put additional and unnecessary load on the router.

The output below is modified from the full output for brevity:


root@raptor:~# iptables -L -v -n

Chain FORWARD (policy ACCEPT 7936M packets, 3647G bytes)
 bytes  target  source           destination
  338G          0.0.0.0/0        0.0.0.0/0
  104G  town-a  0.0.0.0/0        192.168.1.0/26
   40G  town-a  192.168.1.0/26   0.0.0.0/0
   20G  town-b  0.0.0.0/0        192.168.1.64/27
   12G  town-b  192.168.1.64/27  0.0.0.0/0


This snippet shows that towns A and B combined have downloaded and uploaded a total of 338GB. Town A is responsible for 104GB downloaded and 40GB uploaded. The first line of the output, for the chain itself, shows an even larger total: 3,647GB. This is the total amount of data routed through since the last time this router was restarted, or more accurately, since the last time the iptables modules were inserted into the kernel.

When a chain is "zeroed" (resetting all counters in a chain to zero) with the -Z option, this number is not reset. For this reason, I recommend creating a real total rule to make it easier to reset the total counter. It then takes one command to reset the counters, and you do not need to remove modules, restart the server, or work with the iptables-save and iptables-restore commands to reset the counter.

Scrolling further down the output shows the individual IP addresses. Example for Town A:


Chain town-a (2 references)
 bytes  source           destination
   32G  0.0.0.0/0        192.168.1.10
  282M  192.168.1.10     0.0.0.0/0
 1521M  0.0.0.0/0        192.168.1.11
  656M  192.168.1.11     0.0.0.0/0


This output breaks the total bandwidth of Town A down to the individual customers.

The "2 references" shown in the iptables output refer to the two rules in the FORWARD chain that jump to this chain.
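To turn those counters into a report you don't have to read the table by hand. Running iptables with the additional -x option prints exact byte counts instead of the 104G-style suffixes, and that numeric output is easy to summarize with awk. A sketch, assuming the chain layout shown above; it parses a captured sample of the output so it can run anywhere (on the router you would feed it from iptables -L town-a -v -n -x):

```shell
# Sketch: summarize per-host download totals from a town chain.
# counters.txt stands in for the output of: iptables -L town-a -v -n -x
cat > counters.txt <<'EOF'
Chain town-a (2 references)
    pkts       bytes target prot opt in  out source        destination
 1000000 34359738368        all  --  *   *   0.0.0.0/0     192.168.1.10
   90000   295698432        all  --  *   *   192.168.1.10  0.0.0.0/0
EOF
# Field 2 is the byte counter, the last field is the destination.
# Download rows are the ones whose destination is a customer address.
awk 'NR > 2 && $NF ~ /^192\.168\./ { printf "%s %d MB\n", $NF, $2 / 1048576 }' counters.txt
```

For the sample above this prints the single download row, 192.168.1.10 at 32768 MB; run monthly, a script like this is all the billing report needs.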

Saving data across reboots

If you reboot the machine or remove the iptables kernel modules, you'll lose all of your packet and byte counters. If these counters are to be used for billing purposes, you will want to make backups of the running counters and, in the event of a reboot, restore them rather than starting from zero.

The iptables package comes with two programs that aid in this: iptables-save and iptables-restore. Both programs need to be told to explicitly use the packet and byte counters during backup and restore using the -c command line option.

The backup and restore process is fairly straightforward. To back up your iptables data, use iptables-save -c > iptables-backup.txt. To restore the data, after reboot, use iptables-restore -c < iptables-backup.txt.
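If billing depends on these counters, it is worth automating the backup. A sketch of a root crontab entry (the backup path and hourly schedule are assumptions, not from the original article) that snapshots the counters so an unexpected reboot costs at most an hour of accounting:

```
# /etc/crontab fragment. The path /var/lib/iptables/counters.backup
# and the hourly schedule are hypothetical; adjust to taste.
0 * * * *  root  /sbin/iptables-save -c > /var/lib/iptables/counters.backup
```

After an unplanned reboot you would restore with iptables-restore -c < /var/lib/iptables/counters.backup before traffic starts flowing again.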

Conclusion

Iptables provides a quick and easy way to track bandwidth usage without having to install additional software. You have, and probably already use, the tools needed to accomplish this monitoring.

The flexibility and power of iptables allows for more complex monitoring scenarios. You can create rules to not only track different subnets, but also to track specific ports and protocols, which lets you track exactly how much of each customer's traffic is Web, email, file sharing, etc.

In addition, these bandwidth monitoring rules can also become blocking rules. If a host has used too much bandwidth, its rule in a town's specific chain can be modified by adding -j DROP to both the download and upload rules. This effectively stops traffic being routed to and from that host.


You can see original source here.

Thursday, September 23, 2010

State Programming Contest at the ITESZ



Well, here is the news story from when the Tuxinator team (Miguel Angel, Jonathan, and yours truly, Armando) won the first ACM state programming contest almost a year ago. Incidentally, the school we represented never gave us the public recognition we deserved for putting our institution's name on high.

ZAMORA, MICH.- The ITESZ hosted the first State Programming Contest. The picture shows the first-place winners, students from the Tecnológico de Morelia.

ZAMORA, MICH.- The State Programming Contest of the Association for Computing Machinery (ACM) was held successfully in Zamora at the Instituto Tecnológico de Estudios Superiores de Zamora (ITESZ), which serves as the state venue.

"Keeping current and achieving academic excellence through new approaches to learning are the fundamental goals pursued with academic competitions such as this State Programming Contest," said Jesús Chávez Anaya, academic director of the ITESZ, during the awards ceremony for the top three places in this contest, in which more than 100 students took part from 11 technological institutes, both federal and decentralized, located in the state of Michoacán.

First place went to computer systems students from the Instituto Tecnológico de Morelia: Armando de Jesús Montoya Hernández, Jonathan Israel Fernández Abarca, and Miguel Angel Alcalá Ordaz, who earned 5,000 pesos and a berth in the regional round.

Second place was taken by students from the Instituto Tecnológico Superior de Los Reyes, who earned 3,000 pesos: Justo Diego Ulises David, Arturo Emanuel García Contreras, and Cristóbal Sánchez Ceja. Third place went to the ITESZ students José David Jacobo Guillén, Daniel Eduardo Madrigal Díaz, and Gustavo Armando O'Henrry, each of whom received a 16-gigabyte memory stick.

The event took place in the institution's multipurpose hall. Present were Amauri López Calderón, director of planning at the ITESZ, and the academic director, Jesús Chávez Anaya. Also participating were Celia Villanueva González, councilwoman for Education, Culture, and Special Needs of the Zamora City Council; José Luis Manzo Bautista, coordinator of the Informatics and Computer Systems Engineering programs; Agustín Rosas Nava, administrative deputy director of the institute; and, as judge and ACM representative in Mexico, Alberto la Madrid Alvarez.

Speaking on behalf of the ITESZ's director general, Jorge Delgado Contreras, the academic director, Jesús Chávez, stressed that this state contest sets a precedent in the academic life of Michoacán, as it is the first time an event of this kind has been held in the state.

He noted that the contest fosters an exchange of knowledge among the students of Michoacán's technological institutes, as well as a healthy competition against which the performance and knowledge level of ITESZ students can be measured.

Next year the ITESZ will seek to host the regional round of this event, which would bring together students from technological institutes and universities from at least eight Mexican states.

Finally, the ACM representative urged the students to keep participating in events like this one, whose aims include the creation of software useful to the performance and development of the productive sectors.

At the end of this month, the winning students will compete in the regional contest in the city of Querétaro and then move on to the national round; the winners there will earn a berth in the world finals, to be held in China at the end of the year.

The original source is here.

Monday, September 20, 2010

Oracle MySQL rival PostgreSQL updated

While Oracle trumpets its open source MySQL database management system this week at the company's OpenWorld conference, the developers behind MySQL's rival, PostgreSQL, have released a major new version of their database software.

The newly released version 9 of PostgreSQL includes a number of new features that are potentially appealing to enterprise users. It includes the ability to do streaming replication, the upgrade process has been made considerably easier, and for the first time, it can run natively on clients running the 64-bit version of Microsoft Windows.

For this release, the developers applied "the mainstream polish on the database, and not [have] it just be something for open-source people," said Bruce Momjian, a core developer to the open-source project, in a previous interview with the IDG News Service.

"We're now focusing on ease of use, ease of administration, and providing the type of facilities that we think large organizations need," he said.

In conjunction with this release, EnterpriseDB, which offers enterprise support and related software for PostgreSQL, has updated its Postgres Plus line of products to support PostgreSQL version 9.


Original Source can be found here.

Thursday, September 9, 2010

Scientists develop device to enable improved global data transmission


Researchers have developed a new data transmission system that could substantially improve the transmission capacity and energy efficiency of the world’s optical communication networks.

Transmission of data through optical networks is currently limited by ‘phase noise’ from optical amplifiers and ‘cross talk’ induced by interaction of the signal with the many other signals (each at a different wavelength) simultaneously circulating through the network. ‘Phase noise’ is the rapid, short-term, random fluctuations in the phase of a signal, which affects the quality of the information sent and results in data transmission errors. ‘Cross talk’ refers to any signal unintentionally affecting another signal.

Now, researchers working on the EU-funded FP7 PHASORS project, led by the University of Southampton’s Optoelectronics Research Centre (ORC), have announced a major advance in the potential elimination of this interference.

Traditionally optical data has been sent as a sequence of bits that were coded in the amplitude of the light beam, a system that was simple and practical but inefficient in its use of bandwidth. Until recent years, this wasn’t a problem given the enormous data-carrying capacity of an optical fibre. However, the introduction of bandwidth-hungry video applications, such as YouTube, and the continued growth of the internet itself have led to increasing interest in finding more efficient data signalling formats – in particular, schemes that code data in the phase rather than amplitude of an optical beam.

In a paper published this week in the journal Nature Photonics, scientists on the PHASORS project announced the development of the first practical phase sensitive amplifier and phase regenerator for high-speed binary phase encoded signals. This device, unlike others developed in the past, eliminates the phase noise directly without the need for conversion to an electronic signal, which would inevitably slow the speeds achievable.

The device takes an incoming noisy data signal and restores its quality by reducing the build up of phase noise and also any amplitude noise at the same time.

ORC Deputy Director and PHASORS Director, Professor David Richardson comments: “This result is an important first step towards the practical implementation of all-optical signal processing of phase encoded signals, which are now being exploited commercially due to their improved data carrying capacity relative to conventional amplitude coding schemes.

“Our regenerator can clean noise from incoming data signals and should allow for systems of extended physical length and capacity. In order to achieve this result, a major goal of the PHASORS project, has required significant advances in both optical fibre and semiconductor laser technology across the consortium. We believe this device and associated component technology will have significant applications across a range of disciplines beyond telecommunications – including optical sensing, metrology, as well as many other basic test and measurement applications in science and engineering.”

The PHASORS project, which started in 2008, was tasked with developing new technology and components to substantially improve the transmission capacity and energy efficiency of today’s optical communication networks.

The project combines the world-leading expertise of research teams from the ORC, Chalmers University of Technology (Sweden), The Tyndall National Institute at University College Cork (Ireland), the National and Kapodistrian University of Athens (Greece), and leading industrial partners Onefive GmbH (Switzerland), Eblana Photonics (Ireland) and OFS (Denmark).


Original Source here.

Tuesday, September 7, 2010

Installing Centreon/Nagios


Well, a promise is a promise: here is part of the Centreon documentation in Spanish. To be clear, it is not all of it, since it is far too long to translate in full and, honestly, work does not leave me the time. It is, however, enough to get a monitoring system based on Centreon/Nagios up and running.

http://es.scribd.com/doc/53331474/Instalacion-y-configuracion-de-Centreon-2-con-FAN

http://es.scribd.com/doc/54214705/Instalacion-y-configuracion-de-Centreon-2

If you cannot access them, please send me an email at: decibel.elektrobeat at gmail

Wednesday, August 25, 2010

Linux Workshop


Hi everyone, here is a link to download the material covered in the Basic Linux Administration course.


I will upload the rest of the bibliography later.

Greetings to everyone...

Sunday, August 22, 2010

Notice: Centreon documentation coming soon

For those who follow this blog, know that in a few days I will publish a post on how to configure Centreon. I have finished it, handed it in, and been graded on it, and honestly it is not my intention to keep such valuable knowledge all to myself.

Leave a comment if you urgently need me to upload it.

Wednesday, August 4, 2010

PandoraFMS



Some time ago I mentioned that I was using Centreon, a front-end for Nagios. Unfortunately, the documentation for Centreon is scarce and limited to French. Well, as fate would have it, I came across one of Nagios's competitors: its name is PandoraFMS and it is also open source. This system does have documentation available in the language of Cervantes.

Pandora FMS is open source software for monitoring and measuring all kinds of elements. It monitors systems, applications, and devices, and lets you know the state of each element of a system over time.

Pandora FMS can detect whether a network interface has gone down, a defacement attack on a website, a memory leak in an application server, or the movement of a NASDAQ value. Pandora FMS can send an SMS when a system fails or when Google shares drop below 500 dollars.


Personally, I have not tried it, but had I known about it earlier, I would have used it.

The Spanish-language page is here.

Ksplice, updating the kernel without rebooting


Dear readers, those afternoons at the office waiting for everyone to leave so you can reboot a machine are over. The people at Ksplice have thought of this and offer us their software for achieving maximum availability.


It works by freezing the system, leaving only one process running; it is at that point that the kernel is updated, and once updated, the system is unfrozen on the new kernel.

If we run Ubuntu we can get it for free; otherwise we must pay around 4 dollars a month.


Source here

Keylogger for Linux, LKL


LKL dates from 2005, but it does its job and it works. For those who do not know what it does: broadly speaking, this program captures every keystroke you make.
To install (run as root, since the prompts below are root prompts):

# wget http://switch.dl.sourceforge.net/project/lkl/lkl-0.1.1/lkl-0.1.1/lkl-0.1.1.tar.gz
# tar xvf lkl-0.1.1.tar.gz
# cd lkl
# ./configure
# make && make install

Done. To use:

# lkl -l -k keymaps/us_km -o captura.txt


Original source here.

Password-protected Apache directories




 

While searching the internet for some networking information, I came across this article and decided to post it because I find it interesting.

The situation is as follows: the directory /var/www/hide/ is available to everyone, but we do not want it to be. So let's get started:

The following command creates the password file and a user; it is similar to the adduser command:
# htpasswd -c /etc/httpd/passwd elektrobeat

Now a few adjustments in the Apache configuration file; you will need to add these lines:

<Directory "/var/www/hide/">
AuthType Basic
AuthName "Restricted Directory"
AuthUserFile /etc/httpd/passwd
Require user elektrobeat
</Directory>

Restart Apache:
# /etc/init.d/httpd restart

To add more users (htpasswd will prompt for the password):
# htpasswd /etc/httpd/passwd decibel
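As an aside, if htpasswd is not at hand, an equivalent entry can be generated with openssl, since Apache's default htpasswd hash is the APR1 MD5 variant. A minimal sketch (the username and password are hypothetical; append the printed line to /etc/httpd/passwd by hand):

```shell
# Generate an htpasswd-style line for user "decibel" (hypothetical credentials).
# "openssl passwd -apr1" produces the APR1-MD5 hash that htpasswd uses by default.
HASH=$(openssl passwd -apr1 's3cret')
echo "decibel:$HASH"
```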

If we want more than one user to be able to see the
/var/www/hide/ directory, the Apache block becomes (a single Require user line accepts several usernames):

<Directory "/var/www/hide/">
AuthType Basic
AuthName "Restricted Directory"
AuthUserFile /etc/httpd/passwd
Require user elektrobeat decibel
</Directory>

Whenever you change the Apache configuration file, you must restart the service.


Original source here
 

Monday, August 2, 2010

OpenOffice 3.3 is on its way.


Fortunately, not all the projects Oracle inherited from Sun are in suspended animation and, in the case of OpenOffice.org, nothing could be further from the truth. On July 24 the "feature freeze" for version 3.3 of the suite was declared.

This means that from now on, no new changes or additions to the version's established feature set are accepted, only patches and bug fixes.

A "feature freeze" is usually a sign that the release of a new version of a piece of software is imminent ("imminent" in this context meaning "a matter of weeks"), so we will very likely see OpenOffice.org 3.3 before the end of the summer.

To be objective, I can say that OpenOffice is a free, libre office suite and it certainly lends itself to being productive, but the truth is that Micro$$oft Office is well ahead in the versatility it offers for handling visual content; personally, I have never gotten used to OpenOffice.

Source: Linux Magazine here.

Friday, July 30, 2010

Intel Makes Advance in Silicon-Based Lasers

New technology from Intel could lead to the development of computers that use light beams to move data. Intel says it has built a communications device using components from silicon, including lasers that operate at a very fast speed. Link, the prototype device, includes a transmitter chip with four silicon-based lasers that each send data at 12.5 billion bits per second, or 50 gigabits total. Some commercial networking hardware can send 40 gigabits of data per second, but the devices may cost hundreds of dollars or more per connection, says Intel's Justin Rattner. Intel believes it can reach prices as low as $1 per connection and achieve greater speeds--up to 1 trillion bits per second. However, the company must improve its techniques for producing the components in high volumes, says Intel's Mario Paniccia. Intel says the development could lead to new commercial products and change the way computers are designed.

Programming, Development Skills in Demand

For those who refuse to accept that Java is still in demand:

Java/J2EE is the programming and developing skill in most demand with more than 14,000 open job positions nationally, according to a July report from IT job board Dice. The survey of recruiters and human resource professionals also found very high demand for C#, .Net, Oracle, Sharepoint, and SAP skills, as well as security analysts, people with federal security clearances, and database administrators. New York leads the way with more than 8,200 openings, followed by Washington, D.C., with 7,400, Silicon Valley with 4,400, and Chicago and Los Angeles with more than 2,800 each. Atlanta, Seattle, and Dallas have more than 2,000 IT job openings each, and Philadelphia has more than 1,600 openings. Meanwhile, Pace University analyzed government figures for its latest quarterly Pace/Skillproof IT Index, and found the indicator of employment activity in Manhattan's information technology (IT) industry has risen 46 percent, from 74 to 110. The index report also notes that job openings for IT management and network communication analysts have risen by more than 60 percent. Meanwhile, demand for database administrators and network administrators has increased about 15 percent in the second quarter.


Source here.

Thursday, July 29, 2010

Laptop Touchpad with Xorg

As I posted earlier, as of Slackware 13.0 the Xorg server was updated and no longer requires an xorg.conf configuration file.

HAL's default configuration for laptop touchpads is not entirely satisfactory. In fact, it is impossible to tap-to-click (on the touchpad surface) or to scroll (vertical and horizontal edge scrolling by sliding a finger along the right or bottom edge of the pad).

Fortunately, all of this can be fixed with a bit of editing!

The procedure is as follows:

Copy HAL's default synaptics configuration file:

mkdir -p /etc/hal/fdi/policy/10osvendor

cp -p /usr/share/hal/fdi/policy/10osvendor/11-x11-synaptics.fdi /etc/hal/fdi/policy/10osvendor

Then just change the contents by adding the following options if they are not already there. Here, as an example, is the content of a configuration file that enables vertical and horizontal scrolling and tap-to-click:

<?xml version="1.0" encoding="ISO-8859-1"?>
<deviceinfo version="0.2">
<device>
<match key="info.capabilities" contains="input.touchpad">
<merge key="input.x11_driver" type="string">synaptics</merge>
<merge key="input.x11_options.protocol" type="string">auto-dev</merge>
<merge key="input.x11_options.SHMConfig" type="string">true</merge>
<merge key="input.x11_options.VertEdgeScroll" type="string">true</merge>
<merge key="input.x11_options.HorizEdgeScroll" type="string">true</merge>
<merge key="input.x11_options.VertScrollDelta" type="int">100</merge>
<merge key="input.x11_options.UpDownScrolling" type="string">true</merge>
<merge key="input.x11_options.TapButton1" type="string">1</merge>
<merge key="input.x11_options.TapButton2" type="string">2</merge>
<merge key="input.x11_options.TapButton3" type="string">3</merge>
</match>
</device>
</deviceinfo>


For the record, I found this on a French blog and only decided to post it because I think it is very useful; the source is here.

Tuesday, July 27, 2010

RackSpace releases the source code of its cloud.

OpenStack, the software released today by RackSpace, allows any organization to build and offer cloud computing capabilities using open source software running on standard hardware.

The package's two components are OpenStack Compute and OpenStack Storage. OpenStack Compute helps create and manage large groups of private virtual servers, while OpenStack Storage is used to build scalable, redundant object storage, using clusters of commodity servers to store terabytes or even petabytes of data.

Original source: Linux Magazine here.

OpenGL 4.1 outperforms Direct3D.



And here I was thinking OpenGL was already out of fashion.


According to the Khronos Group, the body behind this 2D and 3D hardware-accelerated graphics API, which is vital for the development of games and complex graphics applications, the new version of OpenGL, 4.1, outperforms the latest version of Direct3D 11 (part of the suite formerly known simply as DirectX), Microsoft's equivalent closed API.

The new version includes, among other things, support for runtime compilation of shader code, important for optimizing code for the hardware; improvements to WebGL, a system that allows OpenGL 3D graphics code to be embedded in web pages without resorting to plugins; and greater fault tolerance, giving it better protection against malicious code.

Nvidia has announced it will ship 4.1 support on Wednesday, with ATI to follow shortly.


Source: Linux Magazine en Español here

Friday, July 23, 2010

Network monitoring with bandwidthd on CentOS

Well, now that I have gotten into the networking world, I am responsible for the network area of the company where I work.
One thing I did not like when I took charge of this department is that the documentation on the network is scarce: I was only handed a block diagram of who was connected to one segment of the network, with no router or switch configurations, let alone any control over bandwidth and traffic.

On the one hand, I have already set about documenting most of the aspects I consider relevant (I also try to do it the way Cisco specifies). As for bandwidth control, I have managed to reconfigure the equipment, but I still need to establish the baseline. I define the baseline as the set of characteristics we gather about the network that gives us a view of its "personality", that is, how the network resources are used and at what times they are most in demand.
On the recommendation of my last Cisco instructor, the key points to monitor in order to gather this information are:
  • Ping latency.
  • Memory load.
  • CPU load.
  • Amount of traffic crossing each interface (FastEthernet, Serial, GigabitEthernet, etc.).
  • Monitoring by protocol (TCP/UDP).
  • Once you are monitoring by protocol and have found the one generating the most traffic, move on to monitoring by port (obviously, some ports are mostly used for TCP traffic and others for UDP).
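The first item on the list, ping latency, is easy to sample from a script. A sketch in shell (the summary line is hard-coded here for illustration; in practice you would pipe the output of `ping -c 4 <host>` into the same awk command, and the pattern also matches the BSD-style "round-trip" summary):

```shell
# When ping's summary line is split on '/', the fifth field is the average RTT.
printf 'rtt min/avg/max/mdev = 0.035/0.042/0.055/0.008 ms\n' \
  | awk -F'/' '/rtt|round-trip/ {print $5}'
# prints 0.042
```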
For now I implemented this on Centreon, a web front-end that uses Nagios as its back-end and additionally incorporates tools such as RRDtool and scripts. In my case I set Centreon up to work over SNMP (Simple Network Management Protocol) using OIDs, although there are many ways to do it. There is, however, a small problem with Centreon: it monitors TCP/UDP traffic per port, and first I need an overall picture of the traffic crossing the network. For that I will rely on a tool called bandwidthd, downloadable from here.

Moving on to the setup, I will mention that I am doing this on CentOS release 5.4 (Final).

  1. Resolve all the dependencies needed to compile from source:
    • gcc (yum install gcc; it will ask for confirmation to download dependencies, so answer: yes)
    • libpng-devel (yum install libpng-devel; same confirmation)
    • gd-devel (yum install gd-devel; same confirmation)
    • libpcap-devel (yum install libpcap-devel; same confirmation)
    • make
  2. Download the bandwidthd program.
  3. Copy it to /usr/src, simply as good practice and to keep track of programs installed on Linux from source.
  4. Extract the archive with: tar xvzf bandwidthd-2.0.1.tgz
  5. Enter the bandwidthd-2.0.1 directory.
  6. Run ./configure
  7. If everything went well in the previous step, it should show no errors; run make && make install
  8. With that, bandwidthd is installed on our Linux system. In my case the binaries ended up under the /usr/local/bandwidthd directory and the web files are in /usr/local/bandwidthd/htdocs.
  9. I already have the Apache service running, with /var/www/html as the web document root, so I only need a symbolic link to reach it from my browser: ln -s /usr/local/bandwidthd/htdocs /var/www/html/bandwidthd
  10. Now we can open the browser and go to http://localhost/bandwidthd to view the graphs.
 NOTE: All that remains is to adjust the settings in bandwidthd.conf, which in my case is located at /usr/local/bandwidthd/etc/bandwidthd.conf
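Those adjustments mostly amount to telling bandwidthd which interface to sniff and which subnets to graph. A minimal sketch of bandwidthd.conf (the directive names below come from the sample file shipped in the 2.0.1 tarball; the subnet and interface values are hypothetical, so check the bundled config for the full option list):

```
# Network(s) to track and graph; one "subnet" line per network
subnet 192.168.1.0/24

# Interface to capture on; bandwidthd expects the name in quotes
dev "eth0"
```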

Personally, I think Centreon can be used alongside bandwidthd: the information the latter provides about traffic is generic, giving us a very broad view of the network.

NOTE: For those who read that I know how to use Centreon and need documentation, let me tell you that the information is ONLY IN FRENCH. I will publish it later, since I have to translate the whole manual into Spanish and that will take time. As a tip, use Google Translate to search in French and to translate the pages; I have tried it and it is quite good, though not perfect (and for the record, I can also read French).