
Impact of Computer Security on an Organization

by Suleman

Table of Contents

  • 1. Task Assigned
  • 2. Public Disclosure of Vulnerabilities
    • a). The Fierce Debate
    • b). Two Sides of the Same Coin
    • c). A Thing called “Window of Exposure.”
  • 3. Botnets and Zombies
    • a). The Great Menace
    • b). An Illegal Activity called Botnet Creation
    • c). Botnet Capabilities
    • d). Creating a Viable Defense
  • 4. Document Metadata
    • a). The Perils of Document Metadata
    • b). Why Remove Document Metadata?
    • c). Various Hues of Document Metadata
    • d). Ensuring Data Safety and Confidentiality
  • 5. Adoption of Biometrics
    • a). Biometrics: A Boon or a Bane?
    • b). What is Biometrics?
    • c). Pros and Cons of Biometrics
    • d). Biometrics and its Future
  • 6. Bibliography

Task Assigned

The main purpose of this research paper is to explain several issues and highlight their relevance as urgent business and industrial policy matters. Many significant concerns surrounding perceptions of computer security risks have been taken up for this task, and possible approaches have been proposed to counter future problems.

The spectacular growth of the internet and networked systems has led to increased awareness of, and keen interest in, the various security issues affecting the entire virtual spectrum. Though most internet and networking protocols are designed with the utmost security in mind, many other applications and programs have been created, or are being planned, with very little attention paid to the underlying issues of privacy, confidentiality, and security. As we become increasingly reliant on sophisticated internet and network services, exposure to threats will increase correspondingly, posing a potential danger to data and information security as a whole.

Impact of Computer Security on an Organization

Whenever there are reports of computer security breaches and vulnerabilities, system administrators and business managers tend to publish the vulnerabilities that have affected their systems. Full disclosures are made with the idea of fixing the problems; most revelations are intended to find a viable solution, and many business managers tend to believe that someone, somewhere, will suggest the right answer. Many experts disagree with the full public disclosure of vulnerabilities, as they feel it may increase the “window of exposure” (Bruce Schneier, 2000). This analysis attempts to examine the issues surrounding full disclosure and its impact on the overall security of an organization.

Experts agree that threats to the global internet are undergoing a dramatic change, from attacks aimed solely at crippling networks to those targeting individuals and organisations. They also opine that behind these new attacks is a large pool of compromised hosts sitting in homes, schools, businesses, and governments worldwide. Such hosts are infected with a bot that interacts with a bot controller and other bots to create what is widely called a botnet or zombie army. This research report is designed to highlight the importance of botnets and zombies in internet technology and their real-time effect on a network’s security apparatus.

Every electronic document contains some form of metadata embedded within it. Such data often contains confidential and potentially embarrassing information that may be shared with an unintended audience. This paper also aims to establish how metadata can jeopardize a document creator’s confidentiality and what can be done to avoid transmitting this sensitive user data to the general public. Biometrics is a modern tool for authentication, and the technology is currently used by many facilities and infrastructures that require a high level of safety and security. However, there are two diametrically opposed viewpoints on the viability and conformity of the biometric principle itself. This part of the report attempts to analyze the pros and cons of biometric technology and its sustainability in the future.

Public Disclosure of Vulnerabilities

The Fierce Debate

Nothing seems to generate more interest and fiercer debate than the full disclosure of vulnerabilities. Though highly controversial in nature and dicey in its approach, the subject of “full public disclosure of vulnerabilities” forces system administrators, business managers, and lawmakers to take a fresh look at its origins, approach, and overall effect on the security of an internet environment. An intense debate is raging among policymakers on this sensitive issue, and more often than not there are two diametrically opposed ways of looking at the core of the problem.

While some still believe that there is real value in finding vulnerabilities (Jason Miller, 2006), others opine that, for practical purposes, a vulnerability does not exist until it is disclosed to the public. Everyone agrees that vulnerabilities need to be published, but there are still many nagging questions about vendors, suppliers, and third-party distributors. According to Roger A. Grimes (2005), full disclosure advocates claim that all defects should be publicly shared to benefit the common good: if an exploit is known and not shared, the vendor might be slower to fix the hole. However, some find disclosing such sensitive information as vulnerabilities questionable and undesirable.

Two Sides of the Same Coin

Critically speaking, there are two distinct viewpoints on the underlying issue of full disclosure: one proposes full disclosure to the public, and the other vehemently opposes it. Whatever the merits and demerits of the case, it is worthwhile to study the features surrounding these two contradicting viewpoints.

Advocates of full disclosure believe that nothing focuses a vendor’s attention more than the whole world reading about the exploit and hackers looking to take advantage of it (Roger A. Grimes, 2005), while others opine that vulnerabilities pose a real threat long before they are publicly disclosed (Jason Miller, 2006). Though most of the blame for existing vulnerabilities is passed on to software manufacturers themselves, there seems to be no perfect method for creating reliable software devoid of any weakness. Experts believe that even after a vulnerability is discovered and reported to a vendor, systems remain vulnerable to the issue irrespective of whether someone decides to make the information public (Jason Miller, 2006).

The opponents of “full disclosure” believe that:

  1. It may lead to a full-scale exploitation of systems all over the world. Before full disclosure, only the person who discovers the hole is in a position to exploit it.
  2. Before the disclosure, the probability of hacking is quite limited. Once the disclosure is made, the whole world knows about the security hole, and the number of hackers attempting to exploit the vulnerability may increase many times over.
  3. Before public disclosure, the number of victims on a given day is quite small, but it may swell to millions soon after full disclosure.
  4. Calculated, responsible disclosure is a better option than broad exposure without proper planning and a valid strategy.

Whereas, the proponents of “full disclosure” tend to believe that:

  1. In many cases, discovering and researching the existence of vulnerability is an intensive process that may involve a lot of skilled person-hours (Jason Miller, 2006).
  2. According to Jason Miller (2006), independent and small-time researchers could use their talent to research these vulnerabilities and find suitable fixes for the hole. Such work can be a boon to them, with enormous potential for earning hard cash. While some may question the ethics of making money from finding fixes for vulnerabilities, the objection does not hold up on the same ethical ground, as selling insecure software riddled with vulnerabilities is itself “unethical.”

Whatever the pros and cons of “full disclosure,” these two opposing camps agree on one basic fact: “Responsible Disclosures are more reasonable than Unplanned and Hasty Full Disclosures.”

A Thing Called “Window of Exposure.”

The root cause of the immense damage to user computer systems is probably the users themselves; evidence upon evidence shows that a large number of computer systems remain unpatched and unsecured over a long period, even after the release of a valid patch (Roger A. Grimes, 2005). Some computers are never patched or protected in their lifetime, which may lead to disaster in the future.

In computer security jargon, full disclosure means disclosing all the known details of a security problem (Wikipedia, 2006). Full disclosure requires that information about a security vulnerability be fully disclosed to the public, including details of the vulnerability and how to detect and exploit it. Most security experts believe that releasing vulnerability information results in quicker solutions and faster patches, leading to better security. By providing repairs in record timeframes, vendors and distributors preserve their credibility. At the same time, security improves because the “window of exposure,” the amount of time the vulnerability is open to attack, is reduced (Bruce Schneier, 2000). The basic concept of full public disclosure was used extensively in the 19th century among locksmiths, who followed a policy of revealing to the general public the weaknesses in lock systems (A. C. Hobbs, 1853). A similar methodology of public disclosure was followed in the early 1990s, when software security vulnerabilities were reported to CERT, which would then inform the vendor of that software. Public disclosure of the hole would not occur until the vendor had readied a patch to fix it (Wikipedia, 2006).

Every serious vulnerability in a software product creates a window of exposure (Bruce Schneier, 2000). The shape and size of this window depend on:

  1. How many people can exploit the vulnerability, and
  2. How quickly it is patched.

Security experts believe that by keeping this window small, one can contain even worst-case vulnerabilities. According to Bruce Schneier (2000), there are five distinct phases in the window of exposure:

  1. Phase one is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it,
  2. Phase two is after the vulnerability is found, but before it is announced. At this juncture, very few people know about the existence of the vulnerability, and no one yet knows how to create a credible defence against it,
  3. Phase three begins once the vulnerability is announced. Following the announcement, the risk increases manifold,
  4. Phase four involves an active search for a workable patch or fix. At this juncture, the number of people who know about the problem grows at lightning speed, and the concerned vendors may issue a credible patch to counter it,
  5. Phase five is the phase of stability, when users start plugging their computer systems with the patch or fix released by the vendors.

Damage to computer and internet systems can be mitigated by limiting the amount of vulnerability information available to the public (reducing the window in the space dimension) and by reducing the window of exposure in the time dimension (Bruce Schneier, 2000). Vulnerabilities are almost inevitable, and as our networks grow in the virtual medium, more complex and more persistent weaknesses will become frequent and common.
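To make the time dimension concrete, the short Python sketch below measures the window of exposure as the interval between the discovery of a hole and the point at which affected systems are finally patched. The dates and names are illustrative assumptions only, not figures from the cited sources.

```python
from datetime import date

def window_of_exposure(discovered: date, all_patched: date) -> int:
    """Days during which the vulnerability can be exploited somewhere:
    from its discovery until the last affected system is patched."""
    return (all_patched - discovered).days

# Illustrative dates only, mapped loosely onto the five phases above.
discovered  = date(2006, 1, 10)   # phase two begins: the hole is found
disclosed   = date(2006, 2, 1)    # phase three: public announcement
patch_out   = date(2006, 2, 20)   # phase four: vendor ships a fix
all_patched = date(2006, 5, 1)    # phase five: the last systems apply it

print("Total window of exposure:",
      window_of_exposure(discovered, all_patched), "days")
print("Disclosure-to-patch window:", (patch_out - disclosed).days, "days")
```

Shrinking either interval, by patching faster or by limiting how widely exploit details circulate before a fix exists, is what the space and time dimensions refer to.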

Botnets and Zombies

The Great Menace

Global internet and server systems are poised to be the preferred target for newer and more persistent breeds of viruses and Trojans. These attacks range from those designed solely to cripple and disable technology infrastructure to those that also aim at creating discomfort for people and organizations. This frightening new breed of vicious attacks can affect day-to-day business functions and personal activities, causing billions of dollars in losses every year. The sheer magnitude of the botnet problem is just beginning to unfold in front of policymakers and business managers; recent reports suggest that in 2004 more than 900,000 systems were infected by bots, and CERT has reported individual botnets of more than 100,000 hosts (Householder et al., 2003, and McLaughlin, 2004). Bots are a real and quickly evolving problem that is yet to be understood completely. In this report, an attempt is made to outline the issue and suggest methods of stopping bots.

On the flip side, unscrupulous online thugs may take control of thousands of computer systems and turn them into online “zombies” that work together to cripple an entire network within minutes (Microsoft Inc, 2006). A botnet is a highly organized network of such zombie computers and may include as many as 100,000 individual machines. The biggest problem faced by the internet user community is finding a suitable way to prevent the creation of botnets and to drive out the intruder by installing sophisticated removal tools.

There are several hidden motivations for creating botnets; a surreptitious hacker has many reasons for building an effectively managed and controlled botnet. Communications, resource sharing, and curiosity are considered the primary factors that lead a hacker to develop botnets (Lanelli and Hackworth, 2005). With the increase in online trading and e-commerce, and the significant volume of financial transactions carried out every day over the internet, hackers are shifting towards the dubious, criminal pursuits of money laundering and identity theft.

An Illegal Activity called Botnet Creation

A typical computer network is a busy hub of priceless, sensitive information and data. Even when the existence and immense value of that information are not known to the computer user, the attacker knows exactly where those bits of information are located, how to extract them, and how they can be used to make illegal profits (Lanelli and Hackworth, 2005). When a user’s system is taken over by the covert installation of a Trojan or other malware, the invader gains immediate access to the deepest corners of the computer system. In the worst case, attackers may even acquire system administrator privileges and start controlling and managing the entire network themselves. Once they have hijacked the computer system, they gain liberal access to personal and other confidential information such as usernames, passwords, email addresses, and a full range of financial data. Having gained access to a computer system and its invaluable information, attackers can sell or trade it for a hefty profit. When a computer system’s data is stolen, it is commonly used to perpetrate further online crimes against the business and its managers.

These attacks could include (Lanelli and Hackworth, 2005):

  1. Extortion and Blackmailing,
  2. Social and Cultural Engineering,
  3. Reuse, modification, and restructuring of the stolen information,
  4. Creation of additional opportunities to set up further attacks on other connected network resources,
  5. Acquisition of computing resources and bandwidth to distribute warez, set up phishing sites, launch spam services, and create special networking programs (Web Site Optimization, LLC, 2005)

Creating a dedicated botnet does not require much effort, only minimal skill and talent. There is a massive repository of information readily available on illegally operated sites, and anyone, including an amateur, can access such information with ease. Underground and warez sites may share mutually beneficial lists of netblocks, or IP ranges, ripe and ready for building a systematic botnet (CERT, 1991). One of the main avenues for creating a botnet is the exploitation of software vulnerabilities, which can be leveraged to compromise an entire system. Combinations of numerous vulnerabilities are often used to gain control over a network, as a single vulnerability may not be sufficient.

Some of the more common documented vulnerabilities used to distribute malware are (US-CERT, 2003):

  1. VU#568148: Microsoft Windows RPC vulnerable to a buffer overflow
  2. VU#117394: Buffer Overflow in Core Microsoft Windows DLL

Online hacking communities are also very well versed in luring gullible internet users into taking actions they would not usually take (CERT, 1991). The most commonly used internet mediums, such as email, instant messaging, web browsing, and relay chat, are favourite hunting grounds for such social engineering attempts.

Botnet Capabilities

Perhaps nothing is more ingenious than the unlimited opportunities a botnet offers its creator. Several thousand systems can be compromised and made to dance to the tune of an online criminal. Utterly illegal and unethical, these actions can turn an entire network of computers into passive online zombies.

A highly sophisticated bot may take part in a coordinated action called a Distributed Denial of Service (DDoS) attack. The basic idea behind such a senseless act is to exhaust the resources required to provide a service, slowing or stopping its ability to serve legitimate commands or requests (Lanelli and Hackworth, 2005). Bots may also include basic port scanners that try to open and exploit various ports on a computer system, tasks mostly performed by highly advanced scanners and autorooter functions; autorooters, in turn, are continually improved in their code architecture to target malware backdoors and weaknesses. Nearly all bots contain functionality that allows for FTP, TFTP, or HTTP downloads and the execution of binary code. Of late, pay-per-click marketing has gained tremendous momentum as a viable system of marketing, and by using bots it is possible to perpetrate fraud against advertisers: click fraud occurs when bots simulate visitors clicking on advertisements run on a pay-per-click basis, and such activity can generate hefty revenues for online criminals. Bots can also be used to run many server-class services that host phishing sites, malware download locations, spyware data drop sites, and command structures. Software specially designed to act as malware can jeopardize a system by logging keystrokes, capturing screenshots, tracking browsers, capturing packets, creating password logs, searching hard disk drives, searching for CD keys, attacking Windows protected storage, and stealing data packets.

Creating a Viable Defense

System Administrators and Engineers identify three basic approaches for tackling botnets (Householder et al., 2004):

  1. Prevent systems from being infected,
  2. Directly detect and disrupt the channels of communication among bots and between bots and their controllers (a minimal sketch of this approach follows the list),
  3. Detect the secondary and tertiary features of a bot infection, such as propagation and attacks.
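As a rough illustration of the second approach, the Python sketch below flags external endpoints that several internal hosts contact repeatedly, a crude indicator of a shared bot controller. The flow records, addresses, and thresholds are invented for the example; real detection would work on router or firewall flow logs and far more sophisticated heuristics.

```python
from collections import defaultdict

# Simplified flow records: (internal_host, external_endpoint). In practice
# these would come from router or firewall flow logs; the values are made up.
flows = [
    ("10.0.0.5",  "198.51.100.7:6667"),
    ("10.0.0.5",  "198.51.100.7:6667"),
    ("10.0.0.9",  "198.51.100.7:6667"),
    ("10.0.0.9",  "198.51.100.7:6667"),
    ("10.0.0.12", "203.0.113.20:80"),
]

def suspected_controllers(flows, min_hosts=2, min_contacts=2):
    """Flag external endpoints that several internal hosts contact repeatedly,
    a crude indicator of a shared bot controller."""
    contacts = defaultdict(lambda: defaultdict(int))
    for host, endpoint in flows:
        contacts[endpoint][host] += 1
    flagged = {}
    for endpoint, per_host in contacts.items():
        repeat_hosts = [h for h, n in per_host.items() if n >= min_contacts]
        if len(repeat_hosts) >= min_hosts:
            flagged[endpoint] = repeat_hosts
    return flagged

print(suspected_controllers(flows))
# Expected: {'198.51.100.7:6667': ['10.0.0.5', '10.0.0.9']}
```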

The threat landscape of the internet is undergoing a radical transformation, and researchers and developers must adapt to the new, more advanced, and global nature of the botnet threat. This will require active collaboration among researchers and developers to devise more sophisticated and foolproof technologies, methodologies, and data analysis to detect and eradicate the menacing phenomenon of botnets.

Document Metadata

“What you can’t see can hurt you.” – Gartner Group, Research Note on Metadata in Office, 2003

The Perils of Document Metadata

Businesses and individuals face acute risks, both from scenarios under their direct control and from conditions they may not even be aware of, that place their business interests in harm’s way. A prime example of the latter is document metadata: those hidden bits of information contained in any Microsoft Office document, including Microsoft Word, Microsoft Excel, and Microsoft PowerPoint files. Whenever a user creates, edits, or even saves a document, a string of metadata is automatically created and embedded within it, and this metadata is automatically transmitted whenever the document is sent as an email attachment (Sean, 2003). Several features within the MS Office utilities, like “Comments” and “Track Changes,” are useful if deployed properly but can cause immense damage to a business if misused.

Technically speaking, metadata is simply data about data (Workshare, 2006), or in other words, it is also explained as “information about data” (Tom Sheldon and Big Sur Multimedia, 2001). Metadata contains vital information and details like the title, author, content, location, and date of creation.
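As a concrete illustration, modern Office files (.docx, .xlsx, .pptx) are ZIP archives whose core properties live in docProps/core.xml, so a few lines of Python are enough to list what a document silently records. This is only a minimal sketch; the file name in the usage comment is illustrative.

```python
import zipfile
import xml.etree.ElementTree as ET

def read_core_properties(path):
    """List the core metadata stored inside a modern Office file.
    .docx, .xlsx, and .pptx files are ZIP archives; the core properties
    (creator, last editor, dates, title) live in docProps/core.xml."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    # Drop the XML namespaces so the keys are easy to read.
    return {elem.tag.split("}")[-1]: (elem.text or "") for elem in root}

# Example usage (the file name is illustrative):
# for name, value in read_core_properties("quarterly_report.docx").items():
#     print(f"{name:16s} {value}")
# Typical keys include: creator, lastModifiedBy, created, modified, revision.
```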

Why Remove Document Metadata?

Every time a document is created or edited, document metadata is silently added to it, and some of the information stored in the document may be too sensitive to be divulged to outsiders. Such information can tell a reader a great deal about you and your document. The fundamental problem is not that metadata is added to the document, but how difficult it is to remove it once it has been added. Though these features were included to help the user know more about the document, users may be sending critical information to others without realising it. Most strikingly, a Microsoft Office document can even reveal changes that have not yet been accepted, though they remain invisible at first glance: by simply selecting “Show Markup View,” one can glance at all the changes that were made to the document (Workshare, 2006).

Though several security features were added to later versions of Microsoft Office, they fall woefully short and rely heavily on manual intervention by the user. For example, under the Security tab in the Options menu, Microsoft provides the capability to remove personal details from a file upon saving and to warn users before printing (Workshare, 2006). However sophisticated these features may be, they still depend on the user actually finding and using them, which is cumbersome given the number of tabs and icons involved. To make matters worse, several users may not even know that these features exist, and when a document is transmitted using Microsoft Outlook, the user may send the document’s metadata along with it without any knowledge of the act.

MS Excel can also store document metadata within financial spreadsheets and balance reports. A quick survey of Fortune 1000 websites shows that almost 33% of these sites contain some form of Excel worksheet posted publicly, either on their own websites or on third-party sites for SEC filings (Workshare, 2006). By downloading such documents, the public can still find out everything about the datasheet, the name of the person who created it, and any discarded edit sheets. Gartner Group estimates that “by year end 2005, fewer than 25% of the enterprises will have policies surrounding removal metadata from Microsoft Office Documents.”

The bottom line: document metadata can spell doom for business enterprises, plunging firms into financial risk, loss of competitive edge, and potentially embarrassing situations.

Various Hues of Document Metadata

Document metadata arrives in many forms and colours; whatever shape and type it takes, it can be very dangerous to a business (Workshare, 2006). Below is a list of the kinds of metadata found in a typical MS Office file:

  1. Document Properties: Document properties record several details connected with the document, such as the title, subject, author, manager, company, category, keywords, comments, and hyperlink lists.

Associated Risks: People who download such documents can easily use them as a base document or as a fresh template to create a new one, and such materials can be modified to include extra information.

  2. Document Statistics and File Dates: Microsoft Word documents include vital statistical information on when the document was created, when it was modified, and when it was accessed and printed. Apart from these, there is additional information such as the number of revisions, the people who made them, and other details about the document’s composition.

Associated Risks: This can lead to an embarrassing situation when the total hours billed do not match the total editing time. It can also give away vital information about the person who published the document.

  3. Document Reviewers: Office Word documents also contain a list of users who have accepted or added tracked changes, and all revisions remain with the document. It is recommended that the names of the reviewers be removed when removing tracked changes.

Associated Risks: Readers can identify the person who suggested changes to the draft document.

  4. Custom Properties: These are the custom editing settings associated with every type of MS Office file.

Associated Risks: It is quite easy to see the organizational history of the document

  5. Hidden Text: MS Word documents can contain formatted, hidden blocks of text that are not visible to the naked eye.

Associated Risks: Hidden text can contain notes and jottings particular to a version of the document, and such records may endanger the user.

  6. Comments: To facilitate online review, all Office documents provide a comment facility, where readers can leave suggestions on the text and the material.

Associated Risks: As user comments travel with the document, sensitive user-related information also accompanies the master document, putting users’ identities at risk.

  7. Track Changes and Revisions: MS Word and MS Excel documents provide a facility to track the changes made to a document. These tracked changes and revisions carry information about the person who made them, and the revision history remains in the master document, which can pose a security hazard.

Associated Risks: A third-party user can see all the changes made to the document, and a whole array of information can be viewed by clicking the Track Changes button.

  8. Headers, Footers, and Footnotes: These sections of an Office document can include and display company logos, icons, and trademark images. Text and graphics can also be placed in the header and footer sections.

Associated Risks: Confidential company information, such as trademarks and material covered by secrecy clauses, could be made public through this element.

  9. Macros: In a Microsoft Word document, if a task is repeated, it can be automated using the Macros feature.

Associated Risks: Macros can divulge a lot of sensitive information regarding the tasks linking to internal databases and file systems. 

Ensuring Data Safety and Confidentiality

To ensure privacy and confidentiality, a business may have to resort to several security-enhancing measures, such as creating a document metadata policy within the organization, enhancing document security settings, using PDF files as the de facto distribution format, and employing zip archives to send out sensitive documents (Bershad and Bandrowsky, 2002).
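Building on the inspection sketch above, the rough Python example below copies a .docx while blanking two well-known author fields in docProps/core.xml. It is only a sketch of the idea under the stated assumptions: it does not touch comments, tracked changes, or custom properties, the file names are illustrative, and a dedicated metadata-scrubbing tool remains the safer choice.

```python
import zipfile
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
CP = "http://schemas.openxmlformats.org/package/2006/metadata/core-properties"

def blank_author_fields(src_docx, dst_docx):
    """Copy a .docx file, blanking the creator and lastModifiedBy fields in
    docProps/core.xml. Comments, tracked changes, and custom properties are
    NOT touched; a dedicated scrubbing tool should handle those."""
    with zipfile.ZipFile(src_docx) as zin, \
         zipfile.ZipFile(dst_docx, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "docProps/core.xml":
                root = ET.fromstring(data)
                for tag in (f"{{{DC}}}creator", f"{{{CP}}}lastModifiedBy"):
                    elem = root.find(tag)
                    if elem is not None:
                        elem.text = ""
                data = ET.tostring(root, encoding="UTF-8",
                                   xml_declaration=True)
            zout.writestr(item, data)

# Example usage (file names are illustrative):
# blank_author_fields("draft_contract.docx", "draft_contract_clean.docx")
```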

Biometrics

Biometrics: A Boon or a Bane?

Of late, the topic of Biometrics has been provoking fierce debate, with many voices discussing the various advantages and disadvantages of the technology. The field is polarizing and splitting the scholarly community over multiple issues and topics that need an honest appraisal and review, and many of the discussions border on hypothetical, deeply technical, and philosophical questions (Phil Libin, 2006). Though several benefits are attached to the fascinating field of Biometrics, some people are yet to be convinced about its overall application and usability. Most of us still do not know a great deal about this new technology, and what we do know amounts to little more than the tip of the iceberg. Several authors have voiced concern about the authenticity and conformity of the results obtained by these modern techniques (Vaclav Matyas and Zdenek Riha, 2000), while others believe that more time is still needed to perfect and finalize the technology (Jain et al., 1999).

What is Biometrics?

Authentication is one of the most widely used forms of security and forms the most basic security mechanism. Biometrics refers to the automatic identification of a person based on his or her physiological or behavioural characteristics (David, 2003). According to the same source, this form of identification is favoured over conventional methods involving passwords and PINs for various reasons, chief among them that the person to be identified must be physically present at the point of identification.

As Biometrics is another authentication factor, it is worth taking a detailed look at it in comparison with other ways of accomplishing the same task, such as passwords, keys, and electronic devices. Biometrics takes several forms:

  1. Fingerprint Biometrics: Probably the oldest of the biometric technologies, offering simplicity, speed of operation, and flexibility.
  2. Hand Geometry: This method relies on the shape of the hand and is often used for verification rather than identification.
  3. Facial Recognition: A discreet and non-obtrusive method used widely by law enforcement authorities to reconstruct a criminal suspect’s facial features.
  4. Iris Scan: A highly promising biometric method that makes use of the patterns in the coloured part of the eye; it is one of the most preferred biometric models.
  5. Retinal Scanning: In this method, the blood-vessel patterns at the back of the eye are measured. A ray of infrared light illuminates the retina, and the measurement is taken from it.
  6. Voice Recognition: The characteristics and parameters of a person’s voice are measured and collated in this biometric model.
  7. Dynamic Signature Verification: This model measures the speed, direction, pressure, and timing of the signature.
  8. Keystroke Dynamics: This method measures the speed and strength of typing, the time taken to type certain words, and the time between typing specific pairs of letters.

Pros and Cons of Biometrics

Despite its many benefits and advantages, Biometrics does have some notable disadvantages and pitfalls, including some pronounced structural weaknesses. This part of the report compares Biometrics with the other available technologies of the day.

Some of the perceived advantages of Biometrics are (Phil Libin, 2006):

  1. Biometric methods do what an authentication method should: they authenticate the user. Because they are built on real human physiological or behavioural characteristics, the underlying credentials are permanent and largely unchangeable.
  2. It is not easy to forge or alter the inherent biological authentication parameters and characteristics,
  3. Biometric traits can never be stolen or misplaced the way tokens, keys, cards, and other traditional means of identification can,
  4. Biometric methods are speedy and lightning-quick in their responses.

Biometrics and its perceived benefits are deployed by many advanced defence and strategic facilities all over the world. Touted as the technology of the future, Biometrics can also be used to distinguish anatomical and physical features based on their differences and variances.

However, even with these benefits and advantages, Biometrics does have some pressing disadvantages embedded within its elegant architecture. Though theoretically secure, the biometric platform is not as robust in practice, because technological advancement in the field of pattern matching has not reached the desired level (David, 2003). This is possibly one of the most valid reasons for Biometrics’ perceived failure to take off as an accurate authentication tool. Some of the recognized pitfalls associated with Biometrics are:

  1. False Acceptance: False acceptance occurs when the biometric system wrongly authenticates an unauthorized user.
  2. False Rejection: This is the rejection of an authorized person by the system, caused by an error in the measurement of the user’s biometric characteristic (a short sketch after this list illustrates how these two rates trade off against the matching threshold).
  3. Enrol Failure: This occurs when a subject cannot be registered in a system because the characteristic cannot be measured reliably.
  4. Biometric data are not considered to be secret, and the security of a system can’t be based on the secrecy of the user’s characteristics.
  5. Some biosensors have a limited lifetime and need constant monitoring of their performance parameters.
  6. The biometric method of authentication may intrude into and violate a user’s privacy.
  7. The use of Biometrics may also mean a loss of anonymity, which can prove troublesome to some users.
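To make the first two pitfalls concrete, the short Python sketch below computes false acceptance and false rejection rates from a set of matcher scores at different decision thresholds. The scores are invented for illustration; real systems derive them from large evaluation datasets.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """False Acceptance Rate: fraction of impostor attempts scoring at or
    above the threshold. False Rejection Rate: fraction of genuine attempts
    scoring below it."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Made-up matcher scores in [0, 1]; higher means a closer match.
genuine  = [0.91, 0.87, 0.78, 0.95, 0.66, 0.82]
impostor = [0.12, 0.35, 0.71, 0.28, 0.44, 0.19]

for threshold in (0.5, 0.7, 0.9):
    far, frr = far_frr(genuine, impostor, threshold)
    print(f"threshold={threshold:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```

Raising the threshold lowers the false acceptance rate but raises the false rejection rate, which is exactly the trade-off a deployed biometric system has to tune.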

Biometrics and its Future

Privacy is still an issue with biometric systems of authentication. Users may feel very apprehensive about divulging sensitive, personal information and making it available for scrutiny, and they may have to trust an unfamiliar electronic device with their identity. The potential loss of such data can be critical to both the user and the business manager. Though recent advances in storage technology have improved the safety of personal records, there is a chance that they may be stolen or tampered with by sophisticated hacking tools. There are several suggestions for making the existing biometric authentication tools more non-obtrusive and user-friendly. Experts suggest using an agent called an “observer” that is tamper-proof, highly secure, and unable to communicate with the outside world (David, 2003). Industry experts have also called for the introduction of “cancelable biometrics” modules, which address the privacy concern by storing modified versions of the biometric data.

This research paper has attempted to bring out the various details of using Biometrics as a tool of authentication. Neither exhaustive nor restrictive, the deliberations summarized here may provide some thoughtful insight into the future of Biometrics as a tool for authenticating the identity of a user. Hopefully, the recent advances in Biometrics will continue at their current pace and become a standard for future authentication purposes, helping business and industry maintain and manage the sanctity and security of their data and facilities (Vaclav Matyas and Zdenek Riha, 2000).
