This week we are excited to announce a new privacy-awareness-raising project. We demonstrate how websites can detect two aspects of your online behavior: the browser extensions you have installed, and the websites you are logged in to.
Websites may collect these pieces of information for various reasons: either to track you, or to learn more about you.
Why? Well, the main goal of online tracking is to identify website visitors across websites. Trackers recognize visitors either by reading a unique identifier stored in the user's cookies, or by identifying a unique collection of the user's device characteristics: this is called device fingerprinting. Such a collection of device properties, or fingerprint, can often uniquely identify the user who visited the website. A fingerprint usually includes technical parameters like the browser and operating system a visitor is using, the timezone she is in, or the fonts she has on her system.
Beyond purely technical characteristics, which are not explicitly chosen by the user, users can also be identified by more “behavioral” characteristics, such as the browser extensions they have installed and the websites they are logged in to. Detecting extensions and website logins can clearly make a significant contribution to fingerprinting, and we would not like to arrive at a point where websites can track us based on our behavior.
This is especially worrisome for pro-privacy people: the more extensions you install in your browser, the more trackable you are.
There are also reasons for detecting your extensions and logins that go beyond tracking (tracking is mostly used for behavioral advertising and dynamic pricing). For example, a website may want to learn more about you by spying on your extensions, say, to find out whether you have an adblocker installed. With the method featured in our test, this can be done even if the extension is disabled for the given page.
A website could also learn about your behavior and (somewhat private) preferences if you are logged in to specific shopping, dating or health-related websites. Another possible scenario is that you work at a society, institution or company that you don't want the world to know about: if you log in to your company intranet, there is a chance that this can be detected and your workplace learned. (For people working at Inria, for example, this can be detected, at least at the time of writing.)
The goal of our experiment is to change the status quo by spreading the word about these issues to as many people as possible. This will not happen overnight, but we hope it will happen eventually, just as it did for technical fingerprinting attacks, against which regular browsers now take countermeasures.
So, if you are interested, you can check out the demo, or read on to learn more about the details.
Browser Extension and Login-Leak Experiment: https://extensions.inrialpes.fr
The extension detection technique exploits the fact that websites can access browser extension resources. For example, a website can try to detect whether Ghostery is installed in Chrome by trying to load its images (click to test), or whether you have AdBlock installed (click to test). These resources are called web accessible resources, and extensions need them to provide a better user interface in the browser. In Chrome, extensions have fewer options to change the UI, so more of them use these resources (roughly 13k extensions); in Firefox, extensions have more flexibility to change the UI, making web accessible resources less common.
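To illustrate the idea, here is a minimal sketch of such a probe. The extension ID and resource path below are placeholders: a real probe uses an ID taken from the Chrome Web Store and a path listed under web_accessible_resources in the extension's manifest.

```javascript
// Probe for a Chrome extension by trying to load one of its
// web accessible resources (placeholder ID and path below).
function detectExtension(extensionId, resourcePath, callback) {
  var img = new Image();
  img.onload = function () { callback(true); };   // resource loaded: extension is installed
  img.onerror = function () { callback(false); }; // load failed: extension is likely absent
  img.src = 'chrome-extension://' + extensionId + '/' + resourcePath;
}

detectExtension('aaaabbbbccccddddeeeeffffgggghhhh', 'img/icon.png', function (found) {
  console.log(found ? 'extension detected' : 'extension not detected');
});
```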
For login detection we use two methods: redirection URL hijacking and Content-Security-Policy violations. Let's discuss them in this order.
Redirection URL hijacking. Usually, when you try to access a restricted page on a website, you are dropped at the login page if you are not logged in already. To make your life easier, these login pages remember the URL of the page that rejected you, so that they can send you there once you have logged in properly. This is where our attack comes in: we change this URL so that you land on an image if you are already logged in.
More technically speaking, if we embed an <img> tag pointing to the login page with the changed redirection URL, two things can happen. If you are not logged in, the image will fail to load. However, if you are logged in, the image will load properly, and we can detect this even though we are a third-party site.
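Here is a minimal sketch of the technique. The login URL and the name of the redirect parameter are hypothetical: every site uses its own (next, continue, redirect_to, and so on), as well as its own always-available image.

```javascript
// Point an <img> at the login page, with the redirect parameter
// hijacked so that a logged-in visitor lands on a real image.
function detectLogin(loginUrlWithImageRedirect, callback) {
  var img = new Image();
  img.onload = function () { callback(true); };   // redirected to the image: logged in
  img.onerror = function () { callback(false); }; // got the HTML login page: not logged in
  img.src = loginUrlWithImageRedirect;
}

detectLogin('https://site.example/login?next=%2Ffavicon.ico', function (loggedIn) {
  console.log(loggedIn ? 'visitor is logged in' : 'visitor is not logged in');
});
```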
Abusing Content-Security-Policy violations for detection. Content-Security-Policy, or CSP for short, is a security feature designed to limit what the browser can load for a website. For example, CSP can easily be used to block injected scripts on forums. If such an attempt is made, the resource will not load, and the browser can also be instructed to report the violation attempt to the server backend.
However, we can also use this mechanism for login detection, if the target site redirects between subdomains depending on whether you are logged in or not. Again, we embed an <img> tag pointing to a specific subdomain (and page) of the target website, and simply wait to see whether a redirection happens (which would violate our artificial CSP constraints).
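A sketch of how this could look; the subdomain names are made up, and the right ones depend on the target site's redirection behavior.

```html
<!-- Our page only allows images from one subdomain of the target. If the
     target redirects the request to another subdomain (depending on the
     login state), the redirect violates the policy and fires an
     observable securitypolicyviolation event. -->
<meta http-equiv="Content-Security-Policy" content="img-src https://www.site.example">
<img src="https://www.site.example/account">
<script>
  document.addEventListener('securitypolicyviolation', function (e) {
    console.log('redirect detected, blocked URI: ' + e.blockedURI);
  });
</script>
```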
If you want to prevent websites from seeing which extensions you use, the only advice we can give for the moment is to switch to another browser: in Firefox, for example, only a few extensions are detectable. You could use other browsers too, but we can't tell which one would be best in terms of protection, as this has not yet been evaluated.
The good news is that blocking login detection is easy: all you need to do is disable third-party cookies in your browser. Some tracker-blocking extensions, such as Privacy Badger, could also help. But don't forget: the more extensions you install, the more trackable you'll be.
I am thankful to Nataliia Bielova for reviewing a draft version of this post.
By clicking on the image below, you can open an in-browser de-anonymization simulator for social networks.
Interesting short film:
Film student Anthony van der Meer had his iPhone stolen, and the thought that a stranger had access to all of his personal data really concerned him. What kind of person would steal a phone? Where do these phones end up? These were his biggest questions. To get answers, Anthony had another phone stolen from him on purpose, but this time he followed the thief using a hidden app and made a captivating documentary film about the whole process.
“Find my Phone” was made possible by a spyware app called Cerberus. Using it, van der Meer was able to remotely track and control his phone whenever it was turned on and connected to the internet. Anthony listened to the thief's calls, read his messages, took photos, and even recorded both audio and video. The filmmaker then condensed everything into a thrilling 21-minute documentary that highlights how easy it is to spy on someone in the digital age. The video has already been viewed by more than 1.7 million people.
While I was working on a paper recently, I asked myself how to visualize uniqueness (or anonymity set sizes) in data. The only such visualization I am aware of is Fig. 3 in the Panopticlick experiment, which shows the anonymity set sizes created by each value of each attribute. This is it:
*(Figure: anonymity set sizes per attribute value, from the Panopticlick study.)*
While this is a nice figure, it is quite hard to interpret quantitatively, and things get even more complicated if you want to compare different datasets using this visualization method. Yet it would be nice to understand the state of uniqueness in datasets, especially when you consider different attributes in each case, or apply anonymization or other countermeasures to decrease uniqueness.
This is why I started looking for another option, which finally led to a simple but heavily customizable plotting function I call kmap [code]. This tool can be used for multiple purposes, whether you are a data scientist running experiments or looking for a way to present explanatory visualizations to non-experts.
Let's see a nice example based on the UCI Adult Data Set. This tabular dataset contains attributes like age, sex or workclass for more than 30k adults. Let's pretend that we are considering releasing this dataset, and we would like to know how many (and which) attributes could be safely released. To get a better understanding of this, let's visualize the level of identification (uniqueness) if we release only 3, 6 or 9 attributes of each user. This is how it looks with kmap:
| 3 attributes | 6 attributes | 9 attributes |
| --- | --- | --- |
| *(kmap plot)* | *(kmap plot)* | *(kmap plot)* |
It is quite easy to tell the difference by looking at the figures: releasing only 3 or 6 attributes is relatively safe (*), as less than 25% of the dataset can be uniquely identified. On the other hand, releasing 9 attributes would make almost 75% of the users concerned by the release unique.
If you would like to try kmap for yourself, you can find the code and the files for the example in this git repository. Also, our paper using this visualization has been accepted, so more useful examples can be expected.
I would like to thank Gergely Acs, Claude Castelluccia, Amrit Kumar and Luca Melis for their comments while I was developing kmap.
(*) What is safe or not is another question; in some scenarios even having 6% of the users identifiable can be considered a problem.
Emailing is now part of our everyday life, and many people are reachable by email almost 24/7. What most of us don't consider, however, is the privacy of emails: we perceive them as closed envelopes traveling to the recipient. In fact, if we don't use PGP or other encryption tools, our emails can easily be surveilled on their way.
This motivated the creation of TracEmail, a Thunderbird add-on that helps in understanding the problem. TracEmail analyzes the source code of an email and estimates the path the email may have taken. It then puts this path on an interactive map, where the data surveillance regulation of each country can also be inspected. The tool is now available on Thunderbird Add-ons (at the moment it is under review), so if you are using a Mac or Windows, you can just give it a try. If you are a Linux user, you can contribute to the source to make it available on Linux.
*(Screenshot: TracEmail's interactive map.)*
I've recently read a very nice summary of the advertising wars by Steve Feldman (Stack Overflow), and if you are not up to date on the topic, here is an extract for you:
At this point, it’s pretty clear that ad blocking is a big deal. A recent study suggesting the advertising industry is set to lose over $22 billion in 2015 alone as a result of ad blockers is setting off alarm bells. That is a LOT of money. Companies are scrambling to ‘fix’ the ad blocking problem, as active users of ad blocking utilities hits nearly 200 million. But it’s not just that tiny stop sign in the toolbar raising alarms. Apple caused a panic when they announced that iOS9 would permit the use of ad blockers, as many see mobile ads are an important piece of revenue for the industry.
First, the ad industry went up in arms over ad blocking, offering suggestions like developing ways to deliver specific ads to users employing ad blockers. Then, they considered going after Apple when they announced iOS 9 would permit ad blockers. Later, they began asking users to turn off their ad blockers as a sign of good faith. That did not go so well for some. Finally, they prevented Ad Block Plus from attending an industry event. [...] But some in the industry do get it. Eyeo (the company behind Adblock Plus) outlined in their ‘Acceptable Ads Manifesto’ some strong ideas for how to improve digital advertising-- not to mention the iAB’s L.E.A.N Ads program. While there is criticism for both of these solutions, the positive takeaway is that powerful organizations are finally moving toward addressing the problem.
It looks like things have started to change! People are now taking action to solve the fundamental problems that have become part of the ad world over the years. For this reason, I think the Acceptable Ads Manifesto and the LEAN Ads program are good initiatives, but I sense a fundamental gap: privacy problems, especially tracking, should be tackled in more detail.
These are my proposals in order to fill the real gap:
However, there is one more thing that I personally miss from this, which is granularity of payment. I like to read news from aggregated sources instead of visiting news sites directly. For this reason, I'd really prefer to pay per news item that I read, rather than paying a couple of dollars per month to each outlet where I might read something. I hope such schemes will emerge, although there are already similar ones, like Google Contributor or Mozilla Subscribe2Web.
This post originally appeared in the professional blog of Gábor Gulyás.
I've recently read an article in which the author pictured a future where Google-Glass-like products support our decisions using face recognition and similar techniques. While the author definitely aimed to picture a 'new bright future', she remained silent about potential abuses and privacy issues. And while technology is definitely heading in the direction she described, that still leaves the piece as science fiction for the moment. But where exactly are we now, and for how long can we feel relieved?
Today, using machine learning (ML) is a hard task. First, you need to get vast amounts of quality data; then picking the proper algorithm, training it and using it are also highly non-trivial. Not to mention the hardware requirements, as training requires a lot of computing power, and it takes a while until your application learns the task it is designed to do. This might sound comforting from a privacy-focused perspective, but that comfort would be misplaced.
I see three major issues that could change the state of the art, and I think that, for some of these, we are already in the shifting phase:
It is easy to imagine that such ML could enable a vast number of privacy-infringing uses (*). However, we should not forget that today data-driven businesses fuel machine learning research and application development. Thus, there are already thousands of services built around data and machine learning. As many of these companies use data that was not gathered with user consent (to mention just one possible privacy violation), ML is already here to erode our privacy further.
Let's look at some examples. BlueCava, a company that uses fingerprinting to track people on the web, uses machine learning to connect devices that belong to the same person. This is just one example; with little effort we could find a myriad of other companies that analyze user behavior, buying intent, fields of interest, etc. with similar techniques. The data we generate is also at stake: think of smartphones and wearable devices, but also of the posts we write.
To conclude briefly: machine learning already has a huge impact, and it should increase incredibly in the next few years. All the big companies have their own research groups in the field, and if we are honest with ourselves, we know this is for a simple reason: to use machine learning in their products in order to increase their revenues.
(*) I intentionally did not want to comment on whether machines could become alive. I think you can read a realistic opinion on the topic here.
This post originally appeared in the professional blog of Gábor Gulyás.
As the web lacks good recaps of how web tracking works and what the fundamental problems with it are, I launched a new website at webbug.eu that aims to fill the gap. Besides describing the state of the art of tracking, it also provides access to our related privacy projects, as well as fresh, curated news on the topic. If you like it, please share it, and if you have comments, don't hesitate to contact me!
Note: a Hungarian translation exists at webpoloska.hu, and if you would like to provide a translation in your own language, don't hesitate to contact me. I think it could be done in a couple of hours.
This post originally appeared in the professional blog of Gábor Gulyás.
Traditional privacy-enhancing technologies were born in a context where users were exposed to pervasive surveillance. The Tor Browser can be thought of as a nice textbook example: in a world where webizens are monitored and tracked by thousands of trackers (a.k.a. web bugs), Tor aims to provide absolute anonymity to its users. However, these approaches bore two shortcomings right from the start: first, sometimes it would be acceptable to sacrifice a small piece of our privacy to support or use a service; second, as privacy offers freedom, it can also be abused (think of the 'dark web'). While there have been many proposals to remedy these issues, none of the implementations managed to accumulate a large user base. In fact, in recent years privacy research has quite rarely reached practical usability or even the implementation phase. (Have you ever counted the number of services using differential privacy?)
For these reasons, it is nice to see that things are changing. A company called Neura, whose goal is to provide a finer-grained and stricter personal information sharing model in which control stays in the hands of the users, made it to CES this year:
[...] firm has created smartphone software that sucks in data from many of the apps a person uses as well as their location data. [...] The screen he showed me displayed a week in the life of Neura employee Andrew - detailing all of his movements and activities via the myriad of devices - phones, tablets and activity trackers - that we all increasingly carry with us. [...] But the firm's ultimate goal is to offer its service to other apps, and act as a single secure channel for all of a user's personal data rather than having it handled by multiple parties, as is currently the case. [...] We are like PayPal for the internet of things. We facilitate transactions, and our currency is your digital identity.
I am a bit sceptical about this privacy-selling approach: that much data could give too much power to one company, and it is not clear what happens if the data is resold (which happens a lot today). It would be more convincing if you could really own the data and had cryptographic guarantees for that. Until we have that, I prefer technologies that let you buy your privacy back directly. Returning to the example of web tracking, there are interesting projects (like Google Contributor or Mozilla Subscribe2Web) that allow micropayments to news sites instead of being tracked and targeted with advertisements.
Another recent development, called PrivaTegrity, addresses the accountability of abuses. The project is led by David Chaum, the inventor of the mix technology that underlies much of digital privacy. While not all details have been disclosed yet, it seems Chaum's team is working on a strong online anonymity solution that could be used for a variety of applications, would be fast and resource-preserving (so it could work on mobile devices), and would have a controlled backdoor to discourage abusers. I am sure this latter feature will spark a large number of disputes, but Chaum claims that revoking anonymity would not rest in the hands of a single government: nine administrators from different countries would be required to reveal the real identity behind a transaction. Let's wait and see how things develop; in any case, this is definitely a challenging argument for those who vote for erasing privacy.
Here is their paper on the underlying technology.
This post originally appeared in the professional blog of Gábor Gulyás.
Recently I had the opportunity to meet Marco Stronati, one of the developers of LocationGuard. In case this is the first time you have heard about this plugin: LocationGuard offers a remedy for location privacy in browsers. By default, you have two choices when a website asks for your position: either you allow it and provide your exact location, or you deny it (which is also likely to render the given service useless). This plugin allows you to provide answers in between, revealing your location only roughly. What makes it even more interesting is that this is not just another home-brew PET: there is some nice work behind the tool. Finally, it comes with a nice default configuration, but changing the settings is quite simple and probably easy even for non-tech users.
You can also set it to report a fixed location, which is the feature that motivated the current post. This can be quite useful, and as I learned from Marco, there are some people who use it with a custom place of their own choice (*). However, there is an interesting caveat to using a fixed location for more privacy. Basically, the problem is that the world is huge, so it is very likely that most users will set their fixed location differently. This also means that websites that can access this information can easily track these users, e.g., just by storing a hash of the location coordinates in a tracking cookie.
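For illustration, here is a sketch of how a script could derive such an identifier. The hash is a toy djb2 variant; a real tracker would likely combine the coordinates with other attributes and a stronger hash.

```javascript
// Turn the reported (fixed) location into a pseudo-identifier and store it.
navigator.geolocation.getCurrentPosition(function (pos) {
  var key = pos.coords.latitude.toFixed(6) + ',' + pos.coords.longitude.toFixed(6);
  var hash = 5381;
  for (var i = 0; i < key.length; i++) {
    hash = ((hash << 5) + hash + key.charCodeAt(i)) | 0; // hash * 33 + char
  }
  document.cookie = 'locid=' + (hash >>> 0).toString(16) + '; max-age=31536000';
});
```

A fixed location that is unique to you thus becomes just another long-lived cookie value.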
We call this phenomenon the anonymity paradox. It unfortunately happens quite often when someone tries to use a privacy-enhancing solution in a unique setting: while this person might have anonymity in theory, the uniqueness also allows her actions to be linked. This is why Tor developers strongly discourage altering their browser, and also why some privacy-conscious users were more trackable than others in the Panopticlick experiment. Simply put, this is like visiting a bank office in a dark suit and a ski mask: you will be anonymous for sure, but also easily trackable, as you will quickly find out.
Bottom line: you should use the default fixed location, and reconsider using a custom fixed position, until there is a fixed variety of choices in LocationGuard. For example, as IP addresses reveal the country and city anyway, I think country-level choices of fixed positions would be enough for most users. If you feel that is still too much, then you should use the Tor Browser (when it gets fixed), and no LocationGuard. (As far as I know, the Tor Browser disables location requests by default.)
I am thankful to Luca Melis for reviewing a draft version of this post.
This post originally appeared in the professional blog of Gábor Gulyás.
(*) Note: as they don't collect data, they don't have statistics on this; it is just what they gathered from other forms of feedback.
In this post we discuss a method that allows tracking users of the Tor Browser Bundle (TBB), including the latest release (5.0.4). We believe this is an important issue for TBB users to know about: they would expect anonymity when using TBB, but, as we demonstrate below, under the default TBB settings this remains a false belief.
Although this problem is apparently known to the Tor developers [6], we decided to post our findings for the following reasons. First, we believe that such a vulnerability should be communicated more clearly to Tor users. Second, there is a simple workaround that most users can adopt until a patch is delivered by the developers.
TBB is an anonymous browser, so it adopts several measures to make user activities non-trackable, and unlinkable to non-TBB activities. One way for a website to track the activities of a browser is to detect the fonts available on the system. (This is exploited by real-world trackers.) The set of installed fonts is typically highly unique, and it has been shown to be one of the most identifying properties a browser can have [1]. Even more, fonts can be used to track the OS/device itself [2].
The Tor developer community has been aware of this problem, and some countermeasures have been taken: they introduced a limit on the number of fonts a website can load [3]. Due to implementation difficulties, experimental countermeasures have only been tested in an alpha release [4], and they seem to be omitted from the current stable version. However, we found that none of these measures currently work, leaving TBB users vulnerable to font-based tracking attacks.
It is easy to verify whether you are vulnerable to the attack: just visit a website that obviously loads more than 10 fonts, and if it succeeds, you have a problem. For example, you can visit this site [7] and check how many fonts it can load. Alternatively, cross-browser fingerprinting sites [5] can be used to test the attack more systematically.
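To give a rough idea of what such test pages do, here is a minimal width-measurement sketch of JavaScript font detection; the candidate list is an arbitrary sample.

```javascript
// Compare the rendered width of a test string against a monospace
// baseline; a differing width means the candidate font is installed.
function isFontInstalled(fontName) {
  var span = document.createElement('span');
  span.style.cssText = 'position:absolute;visibility:hidden;font-size:72px;font-family:monospace';
  span.textContent = 'mmmmmmmmmmlli';
  document.body.appendChild(span);
  var baseline = span.offsetWidth;
  span.style.fontFamily = '"' + fontName + '", monospace';
  var changed = span.offsetWidth !== baseline;
  document.body.removeChild(span);
  return changed;
}

var found = ['Arial', 'Calibri', 'Ubuntu', 'Lucida Grande'].filter(isFontInstalled);
console.log(found.length + ' fonts detected: ' + found.join(', '));
```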
In the following two screenshots, we compare the fonts detected on Linux and on OS X using TBB (left) and using a regular browser (right). As you can see, more or less the same fonts are detected, which shows that TBB users can be tracked across multiple sites, and that activities within Tor can potentially be linked with activities outside of Tor.
In the following screenshot we show that the list of installed fonts can be inferred regardless of the privacy settings in TBB. The highest setting, which provides the strongest privacy protection in TBB, seemingly prevents tracking, as it disables JavaScript on all sites. However, this is not entirely the case: arbitrary fonts can still be loaded via CSS.
The CSS font leakage can be checked in our demonstration here [7].
Fortunately, there are two things we can do about this. The better solution is to forbid the browser from loading any fonts except four of them. This can be done by opening the advanced font settings window (Settings > Content > Advanced) and unchecking the option that lets websites choose their own fonts. This provides sufficient protection at all four privacy levels that TBB offers. The other possibility is to use the highest privacy setting offered by TBB, but that further degrades the user experience and, as discussed above, is not bullet-proof.
This setting can help preserve anonymity while waiting for the next stable release to deliver a working solution. (Desirably, that would also cover the vulnerability to another type of fingerprinting [8].)
Gábor Gulyás, Gergely Ács, Claude Castelluccia
EDITED (2015-12-01): Typekit example removed (our example is enough now).
[1] In the Panopticlick experiment, fonts alone carried 13.9 bits of entropy over 286,777 users; after plugins, they were the second most identifying property of browsers. The paper is available here: https://panopticlick.eff.org/browser-uniqueness.pdf
[2] Fonts can be extracted in a way that allows cross-browser fingerprinting. Paper: http://gulyas.info/upload/GulyasG_NORDSEC11.pdf
[3] If you are using TBB and open about:config, you'll find two TBB-specific settings for this, called browser.display.max_font_count and browser.display.max_font_attempts.
[4] Check here: https://blog.torproject.org/blog/tor-browser-50a4-released
[5] http://fingerprint.pet-portal.eu
[6] A workaround was suggested here: https://trac.torproject.org/projects/tor/ticket/5798#comment:13
[7] CSS-based font tester: http://webpoloska.hu/test_font.php
[8] Further information can be found in the related ticket and article on the subject: https://trac.torproject.org/projects/tor/ticket/13400
In a previous blog entry, I described how random forests can be used to predict the level of empirical identifiability. I have also been experimenting with neural networks and how this approach could be used to solve the problem. As there is a myriad of great tutorials and ebooks on the topic, I'll just continue the previous post. Here, instead of the scikit-learn package, I used the keras package for modeling artificial neural networks, which relies on theano. (Theano allows efficient execution on GPUs; currently only NVIDIA CUDA is supported.)
The setting is the same as described in the previous post: node neighborhood-degree fingerprints are used to predict how likely it is that a node would be re-identified by the state-of-the-art attack. As I had seen examples using raw image data for character classification (as with the MNIST dataset) with a Multi-Layer Perceptron structure, I decided to use a simple, fully connected MLP network, where the whole node fingerprint is fed to the net. The network thus consists of an input layer of 251 neurons (with rectified linear unit activation, or relu for short) and a hidden layer of 128 neurons (also relu). For classification, I used 21 output neurons to cover all possible score values in the range -10, ..., 10, with a softmax output layer, as a distribution-like output is easier to handle for classification. See the image below for a visual recap.
*(Figure: the MLP architecture.)*
I did all the training and testing as last time: the perturbed Slashdot networks were used for training, and perturbations of the Epinions network served as test data. In each round with a different level of perturbation (i.e., a different level of anonymization or strength of attacker background knowledge), I retrained the network with Stochastic Gradient Descent (SGD), using the dropout technique; you can find more details in the python code. As the figure below shows, this simple construction (hint: also the first successful try) could beat previous results, although with some constraints.
*(Figure: prediction performance of the MLP compared to previous approaches.)*
In the high-recall region, this simple MLP-based prediction approach proved better than all previous ones, while for simulations of weak attackers (i.e., small recall, where perturbation overlaps are small), random forests are clearly the better choice. You can grab the new code here (you will also need the datasets from here).
This post originally appeared in the professional blog of Gábor Gulyás.
Measuring the level of anonymity is not an easy task. It can be easier in some exceptional cases, but that is not true in general. For example, in an anonymized database we could measure the level of anonymity with anonymity set sizes: how many user records share the same properties that could make them identifiable. (And here is the point where differential privacy fans raise their voices, but that story is worth another post.) However, this is much harder for high-dimensional datasets, where you have hundreds of attributes for a single user (think of movie ratings, for example).
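As a toy illustration of the concept (with made-up records; real datasets have far more attributes):

```javascript
// Group records by the released attributes; the group sizes are the
// anonymity set sizes, and sets of size 1 mark unique (identifiable) records.
function anonymitySetSizes(records, attributes) {
  var sets = {};
  records.forEach(function (r) {
    var key = attributes.map(function (a) { return r[a]; }).join('|');
    sets[key] = (sets[key] || 0) + 1;
  });
  return sets;
}

var records = [
  { age: 34, sex: 'F', zip: '75013' },
  { age: 34, sex: 'F', zip: '75013' },
  { age: 52, sex: 'M', zip: '38000' }
];
console.log(anonymitySetSizes(records, ['age', 'sex']));
// -> { '34|F': 2, '52|M': 1 }, i.e. the third record is unique
```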
In 2012, we demonstrated that the OS can be fingerprinted by checking for the presence of a wide variety of fonts (hey, we also have a paper on that). In addition, we showed this using JavaScript alone, running from a website. This project seems to have more detailed results on the issue, as the authors went further than checking the presence of a font: they checked how characters are rendered with a given font in different browsers. This surely gives more detail than a 0/1 answer, and according to their results, they could use this information alone to make 34% of their submissions uniquely identifiable:
We describe a web browser fingerprinting technique based on measuring the onscreen dimensions of font glyphs. Font rendering in web browsers is affected by many factors—browser version, what fonts are installed, and hinting and antialiasing settings, to name a few— that are sources of fingerprintable variation in end-user systems. We show that even the relatively crude tool of measuring glyph bounding boxes can yield a strong fingerprint, and is a threat to users' privacy. Through a user experiment involving over 1,000 web browsers and an exhaustive survey of the allocated space of Unicode, we find that font metrics are more diverse than User-Agent strings, uniquely identifying 34% of participants, and putting others into smaller anonymity sets. Fingerprinting is easy and takes only milliseconds. We show that of the over 125,000 code points examined, it suffices to test only 43 in order to account for all the variation seen in our experiment. Font metrics, being orthogonal to many other fingerprinting techniques, can augment and sharpen those other techniques.
We seek ways for privacy-oriented web browsers to reduce the effectiveness of font metric–based fingerprinting, without unduly harming usability. As part of the same user experiment of 1,000 web browsers, we find that whitelisting a set of standard font files has the potential to more than quadruple the size of anonymity sets on average, and reduce the fraction of users with a unique font fingerprint below 10%. We discuss other potential countermeasures.
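Without knowing their exact implementation, a crude sketch of the underlying measurement could look as follows; the code points are an arbitrary sample, whereas the paper reports that 43 well-chosen ones suffice.

```javascript
// Measure the on-screen bounding box of single glyphs rendered with the
// browser's default font settings; the dimensions vary with installed
// fonts, hinting and antialiasing, and thereby fingerprint the system.
function glyphMetrics(codePoints) {
  var span = document.createElement('span');
  span.style.cssText = 'position:absolute;visibility:hidden;font-size:20px';
  document.body.appendChild(span);
  var metrics = codePoints.map(function (cp) {
    span.textContent = String.fromCodePoint(cp);
    return { codePoint: cp, width: span.offsetWidth, height: span.offsetHeight };
  });
  document.body.removeChild(span);
  return metrics;
}

console.log(JSON.stringify(glyphMetrics([0x41, 0x623, 0x4E2D, 0x1F600])));
```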
The Tor Project, Inc. is the non-profit organization behind the popular anonymity and privacy tool Tor. The organization is currently conducting a worldwide search for an Executive Director.
Description:
http://data01.wentco.com/openreq/Requisition.aspx?ReqID=67528129
If you are, or you know of someone who is, interested in the position, please contact:
Judy Tabak
The Wentworth Company
479 West Sixth Street, San Pedro, CA 90731
(310) 732-2321
JudyTabak@wentco.com
Doctoral Studentships: Trusted Environments for Privacy-Preserving Analytics
Department of Computer Science, University of Oxford
Supervisors: Professors Andrew Martin & Andrew Simpson
Start Date: October 2015
We invite applications for two studentships funded by the Intel Corporation for a project called "Applying the Trusted Remote Environment (AppTRE)". One student will study how Trusted Computing Architectures based on Intel's new SGX technology can be used to implement "Trustworthy Remote Entities" with strong guarantees of privacy protection. The other student will study algorithms and approaches to data analysis which can run in such contexts, processing privacy-sensitive data without unwanted disclosures.
The studentships are tenable from October 2015, for three years in each case, subject to satisfactory progress. In special circumstances, it may be possible to delay the start date. The annual stipend payable is £17057. The studentship also covers the payment of College and University fees at the home/EU rate.
More details here: http://www.cs.ox.ac.uk/news/944-full.html
The Katholieke Universiteit Leuven (KU Leuven) offers a PhD position within a project focused on developing an anonymous communication infrastructure for mid-latency, message-based communications, ideally starting in September 2015. See the project description and contact details below.
Project Description
PANORAMIX is an EU H2020 project that aims to develop a multipurpose infrastructure for privacy-preserving communications based on "mix-networks" (mix-nets) and its integration into high-value applications. Mix-nets protect not only the content of communications from third parties, but also obscure the identity of the senders or receivers of messages, through the use of cryptographic relays. Mix-nets are necessary for implementing strong privacy-preserving systems and protocols. PANORAMIX aims to realize, integrate and demonstrate the use of an infrastructure for mix-nets in the context of three high-value applications. The objectives are: (1) Building a Mix-Net Infrastructure, creating a mix-network open-source codebase; (2) use this infrastructure to implement private electronic voting, where anonymity is necessary to guarantee ballot secrecy, and verifiability is needed for holding fair, transparent and trustworthy elections; (3) apply the infrastructure to privacy-aware cloud data-handling, in the context of privacy-friendly surveys, statistics and big data gathering protocols, where protecting the identity of the surveyed users is necessary to elicit truthful answers and incentivize participation; and (4) apply the infrastructure to privacy-preserving messaging, where two or more users may communicate privately without third parties being able to track what is said or who-is-talking-to-whom.
The project consortium brings together leading academic and industry partners, including KU Leuven, University College London, University of Athens, University of Tartu, and SAP.
Research Topic
Modelling, design, analysis, and implementation of anonymous communication systems. The first task of the student will be to identify the feature set, security and performance tradeoffs in mix-nets. Secondly, the student will investigate methods to efficiently and securely anonymize mid-latency, bidirectional message-based communications. This involves analysing the robustness of mix-net designs towards a variety of adversary models and attacks, and proposing secure designs and configurations. Further, the student will investigate how differential privacy definitions and mechanisms can be adapted to the context of mix-nets. The student will also contribute to the implementation of the developed methods in a mix-net infrastructure, as well as to the implementation of a privacy-enhanced messaging application that runs on this infrastructure.
Profile and Skills Required
Candidates should hold a master's degree in Mathematics, Engineering, or Computer Science, have good grades, and have a keen interest in computer security and privacy. Fluency in English is an absolute must. Preferably, candidates will have passed courses in Cryptography and/or Computer Security. The applicant should be a team player with the capability to work in an international research team. The candidate should be prepared to deliver high-quality research results, attend project meetings with industry partners abroad, work according to tight deadlines and write project deliverables.
To apply, please send the following documents (in PDF) to jobs-cosic@esat.kuleuven.be.
• Curriculum Vitae
• Motivation letter
• List of publications
• Relevant research experience
• Study curriculum with rankings
• English proficiency
• PDF of diploma and transcripts (translation if the original is not in Dutch, English, French or German)
• 1 page research proposal describing which research questions you would like to work on
• Names (and e-mail) of 2 reference persons and the nature of contact with them
Contact: Claudia.Diaz@esat.kuleuven.be
Webpage group: http://www.esat.kuleuven.be/cosic/
Application page: http://www.esat.kuleuven.be/cosic/?page_id=401
Gábor György Gulyás, founding Editor-in-Chief of the International PET Portal and Blog, successfully defended his PhD thesis titled "Protecting Privacy Against Structural De-Anonymization Attacks in Social Networks" earlier today. A peculiarity of the public defense, held at the Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, was that not only was the dissertation written in English, but the defense procedure was also conducted in English, and one of the two opponents, Julien Freudiger, participated in the event from the US via a Skype connection. The doctoral committee accepted the thesis and the oral presentation with the highest grade and awarded the PhD degree to the doctoral candidate.
A study publishing the results of the doctoral research can be accessed here.
UPDATE (2015-05-15): the dissertation is available online here.
2015 International Workshop on Privacy Engineering – IWPE'15
Co-located with 36th IEEE Symposium on Security and Privacy
MAY 21, 2015 AT THE FAIRMONT, SAN JOSE, CA
Website: http://ieee-security.org/TC/SPW2015/IWPE/index.html
CfP: http://ieee-security.org/TC/SPW2015/IWPE/topics.html
Deadline: 23 January, 2015
The 2015 International Workshop on Privacy Engineering is dedicated to privacy engineering research. Engineers are increasingly expected to build and maintain privacy-preserving and data-protection compliant systems in different ICT domains such as health, energy, transportation, social computing, law enforcement, public services; based on different infrastructures such as cloud, grid, or mobile computing and architectures. While there is a consensus on the benefits of an engineering approach to privacy, concrete proposals for processes, models, methodologies, techniques and tools that support engineers and organizations in this endeavor are few and in need of immediate attention.
The IRISS (Increasing Resilience in Surveillance Societies) international research project, supported by the EU, has published a handbook on resilience towards surveillance, with the contribution of the Eotvos Karoly Policy Institute (EKINT). Part One of the handbook provides some background on the characteristics and undesirable impacts of surveillance societies; Part Two lays out a set of questions addressed to decision-makers, consultancies, service providers, civil organizations and the general public in order to provoke consideration of proposed or existing surveillance systems, technologies, practices or other initiatives; Part Three offers a list of measures – including the use of Privacy Enhancing Technologies – that can be taken to improve the present situation and to minimize adverse impacts of surveillance on the individual, groups and society.
The online version of the Handbook is available on the IRISS project website, and downloadable as a PDF file.
A full-time position for doctoral students has been announced by the University of Regensburg for the period 2014-2018 in the research area "Security and Privacy in Smart Environments". The project is run by FORSEC, a research consortium of German universities and research institutions. Although no deadline has been announced, the job starts as early as the summer of 2014, so if you are interested, submit your application soon to heike.gorski@wiwi.uni-regensburg.de
We have discussed in previous posts that companies track their users and the traffic on their websites, and that there are companies offering solutions to track visitors. So the question is: how much would you charge for a list of all the sites you visited in the past two weeks? At first glance, this might seem a simple question. We have some surprising data for you!
In recent years, the majority of websites have adopted a business model in which you get a seemingly free service, but in exchange you give up your privacy. The model works simply: while you enjoy surfing freely, you are also being monitored and profiled in order to be served advertisements and prices tailored to your interests. For example, Orbitz steered Mac users to pricier hotels; the same can happen to you in other contexts, depending on how advertisers estimate what you can afford.
Auctions, where you are the product
When you open a website that has advertisement slots, there is a chance that your browsing history will be sold at an auction for advertisers, and your device will be involved in a real-time bidding (RTB) procedure. Do you remember our question in the previous section?
Have you considered your price?
Well, just to help you position yourself: it is estimated that most of us would trade our privacy for as little as 7 EUR on average. Sounds nice, right? Unfortunately, this is just an unreal dream: our browsing history is typically sold for less than 0.0005 USD, as French researchers revealed in their recent study.
Who is making money, and how?
When you open a website that gets its income from advertisements (for instance nytimes.com), a slot on the site can invoke an auction. An ad exchange (e.g., DoubleClick or Facebook) then invites bidders to propose a price for placing their advertisements. The ad exchange identifies you with a tracking cookie (on nytimes.com) and distributes your browsing history among the bidders, who then have a chance to merge it with what they already know about you (tracked with another cookie). Thereafter, the bidders have all the information needed to put a price tag on you, and the bidder offering the highest price gets to display the actual advertisement. A well-designed system, right? Also note that even the losing parties get a copy of your browsing history.
Price-tag sensitivity
Olejnik and his collaborators created tools to detect RTB and analyze winning prices. It may be impossible to get a global overview, as in many cases the winning prices are encrypted; their analysis is based on the rest. It turns out that different visitor properties steer prices significantly. Location is one of the strongest factors: a profile located in the US had a price of 0.00069 USD, much higher than profiles located in France (0.00036 USD) or Japan (0.00024 USD). They also discovered that profiles are worth more in the morning: in their investigation, a US profile was worth 0.00075 USD in the morning and 0.00062 USD in the evening. Not surprisingly, browsing history also altered prices significantly. New profiles with no records are worth the least, while profiles with an interesting history of visiting webshops (e.g., a jewelry site) are worth more.
What can I do about this?
Using ad blockers is only a partial solution; use a web bug killer instead. Web bugs are small programs advertisers use to detect user presence and monitor activities. If you are a Firefox or Chrome user, you could use, for instance, Ghostery.
This post originally appeared in the Tresorit Blog.
In its judgment of 8 April 2014, the Court of Justice of the European Union (CJEU) declared the Data Retention Directive (Directive 2006/24/EC) invalid. The Directive provided that communication service providers must retain traffic and location data, as well as identification data, of all subscribers and users for a period of six months to two years.
Two national courts, the High Court of Ireland and the Constitutional Court of Austria (the latter case initiated by more than eleven thousand (!) petitioners), turned to the CJEU, asking the Court to examine the validity of the directive with respect to its interference with the fundamental rights to respect for private life and to the protection of personal data. The CJEU stated that although the Directive satisfies an objective of general interest, namely the fight against serious crime, it exceeded the limits imposed by compliance with the principle of proportionality, and has other serious deficiencies with respect to the requirements of EU law.
This report (in Dutch) gives an overview of best practices and best technologies in the field of PETs. The report is the result of desk research performed by the PI.lab for the Dutch Ministry of Economic Affairs. To structure the overview, the material has been divided into a number of categories. A rather broad approach has led to a combination of technologies that are (relatively) widely used in practice, as well as more academic experiments that have not yet found their way to the market.
Download the full report here (in Dutch).
In our previous post on the importance of privacy, we highlighted why we believe it matters and how our view of the issue has changed in the past few decades. In this post we would like to share some more insights into who could be a potential threat to your privacy.
Intruders of online privacy – who are they and what do they do?
One of the main problems is that no one has a clue who is conducting surveillance (in more professional terms: there is a lack of a proper attacker model) and what their reasons for collecting information are. However, there are a few outstanding, widely known issues; government surveillance is surely one of them, especially since the PRISM case.
Many governments, similar to the one in question, sacrifice (a lot of) privacy in exchange for (some) security; for instance, the Data Retention Directive in the EU regulates what information telecommunication companies need to retain in order to help government forces combat terrorism. Although it has been put into practice by most member states, we know little about the exact implementations of the directive: beyond the fulfillment of the surveillance obligations, the exact technical details at the telecommunication parties involved seem to be white spots of the process.
While this type of mass surveillance has less effect on individuals (except for the ones under targeted observation), it is problematic because it can be executed secretly, leading to potential abuses (as happened in the US), and the secrecy around the implementation can loosen democratic control over these operations (as in the EU).
Meanwhile, surveillance committed for commercial purposes has a rather significant impact at a personal level. This kind of activity includes various actors, ranging from large service/platform providers selling out the data of their users (are you on Facebook?) to marketers using personal profiles to steer their business decisions. If you have ever surfed the net for the best-priced plane tickets and watched the prices go up and down, you may be familiar with behavioral advertising and dynamic pricing. Although there are clearly some legal applications for such uses of profiles (especially if they were collected and used with consent), most are not beneficial for the data subjects.
Thus, these companies get the chance to influence our choices undetectably, as in the case of Orbitz offering Mac users more expensive hotels, or when it turned out how 'bad' friendships (on social networks) can affect someone's credit score. Besides, it is also wise to think about others who can access our data and use it occasionally, e.g., as auxiliary data during a job interview.
A lost battle vs. reasons to act for your privacy
At the time of writing, owing to the continuously emerging revelations of the Snowden case, we know more and more details of the NSA surveillance affecting most people around the globe. However, there is probably a lot more to come, and it is also likely that the security industry will change significantly soon, so keep that in mind while reading on.
From the revelations up to the fall of 2013, we have learned that despite the number of experts the NSA employs and the extent of hardware it has, the agency rather seeks cooperation with companies and service providers all over the world to build its own backdoors into software and services. At the same time, the NSA possibly influenced the creation of standards and protocols, and the enactment of a law was also planned in order to gain access to arbitrary other companies (though that was pushed by the FBI).
Fortunately, according to the revealed documents, following a few simple guidelines can make mass surveillance harder and can help us be safer online. We still have strong cryptography to rely on, and using open source software is also crucial to success: regularly reviewed open source software is less likely to have embedded backdoors, and if we use standardized protocols, other parties have less chance to influence parameters and other details (or to make us use software that would do so).
Reinforcing ourselves against commercial parties should be done accordingly: while it is difficult to avoid all kinds of surveillance, we can make the watchers' job so hard and expensive that most of them will let us pass under the radar. As these companies face several limits regarding funding, technological expertise, etc., fighting against a small resisting group usually simply isn't worth it. In addition, wholesale surveillance is not always a valid business goal for many of them.
So this is not yet over – take the first steps!
Privacy is not just about revealing secrets; it is far more complex than a form of secrecy. Your privacy can be invaded even if no secrets are revealed, which implies that privacy is very sensitive to technological innovations and changes. For instance, someone keeping a public microblog on a specific topic (e.g., French cuisine or sports) may not reveal information about her personal life; meanwhile, the timing of the messages and the location information attached to tweets can be used to infer her daily routine and other habits. Thus we should stay alert to the privacy implications of new technology as it continues reshaping our everyday life.
This post originally appeared in the Tresorit Blog.
The recent decades have sped up, twisted and completely changed the world. Modern technology has not only reshaped the societies we live in; it has also undetectably pervaded our everyday life and changed our ways of thinking.
If you are an average smartphone user, you have probably downloaded around 10 to 100 apps in the past few weeks: to track your workout performance, record your spending, manage photos, follow the most important happenings in your network, kill time with the latest (and coolest) game, and so on. And this is just a single one of the devices you use. Significant information about where we are, what we do and what we probably think is accessible today to many parties, and in addition, we often voluntarily supplement the data being collected.
This process can be described from many angles. The overview of web tracking techniques, however, makes an outstanding example of how the profiling-based market has extended tremendously over the years (e.g., behavioral profiling), and of how conscious webizens and trackers are engaged in a seemingly never-ending cycle: finding ways to avoid tracking, and discovering new tracking mechanisms.
While general concern for online privacy has been growing continuously, the recently leaked NSA documentation revealing worldwide wholesale surveillance gave a boost to the rise of awareness. Despite the fact that we have arrived at a positive landmark, there are still several white spots on the map of privacy, and many false beliefs surrounding the topic.
I have nothing to hide – why should I care?
There are several typical phrases denying the need for privacy that often emerge in the media. Probably the most frequently used one states that “if you have nothing to hide, you have nothing to worry about” (with similar variants in different wordings). Eric Schmidt is famous for quoting this, although he now seems to be seeking privacy himself.
First of all: is privacy about hiding something? Definitely not. Bruce Schneier gives a few good counterexamples, such as the need to “seek out private places for reflection or conversation” or to “sing in the privacy of the shower”. We could think of sharing moments with the ones we love, or seeking solitude to find ourselves. There are several other private moments in everyday life to choose from.
Privacy is also important as a basis for the freedom of speech. The dictatorships of the twentieth century showed us that if privacy is removed (e.g., by allowing targeted surveillance of people disagreeing with the system), individual behavior and public speech change in response. Looking at it this way, we can see how privacy means freedom, and why it is a basic human need.
Besides some level of secrecy, privacy also includes control over disclosure, among several other things (e.g., “my house, my castle”). Daniel Solove quotes some pretty good replies from his readers to the misunderstanding in question:
Another problem is that such an attitude can justify uncontrolled surveillance. If information is collected without a defined purpose, it can easily be abused. The definition of what is right or wrong can change over time, and what was once collected can even be used to condemn data subjects, if that is in the interest of the currently governing forces. This raises several other questions. What data would be stored about you, and for how long? Who could access it and make copies of it?
In addition, mistakes can happen anytime. For example, your financial records can look misleadingly suspicious, sufficiently convincing for the tax office to investigate you. Or your data can be leaked accidentally or hacked. In such a case, how could you tell what is out there in public? Once something is out there, it stays there.
There are other public voices stating that we don't care about privacy anymore, or that we simply don't need it in the digitized age we live in. However, research shows that even the new generations do care about privacy, though for them privacy is more about control. This might be unexpected given the strong influence of new technology on their lives, and the propaganda of technology companies trying to get the young generation more engaged with their products (not surprisingly, as most of their business models rely on controlling vast amounts of user-related data).
We got a little enthusiastic about the topic, so by the end of the process we found a [tl;dr] alert flashing over the post. This is why we are breaking it down into two pieces; the second will follow in a few days. We will reward your patience by adding a super useful e-book on privacy, with many tips and tricks. Stay tuned, it is going to be great!
This post originally appeared on the Tresorit Blog.
In the last ten years, online advertising has grown tremendously, especially personalized advertising based on user behavior, called behavioral advertising. According to estimates of the Interactive Advertising Bureau, internet advertising revenues reached $36.6 billion in 2012 in the United States alone. In parallel, a myriad of techniques has emerged for detecting the identity of surfing webizens in order to profile their preferences and interests. The simplest and most widespread identification method uses web bugs and tracking cookies: a tracker service places unnoticeable small detectors on several websites, allowing it to store and read identifiers on the visitors' computers. Cookies allow servers to deliver a page tailored to a particular user, or the page itself can contain a script that is aware of the data in the cookie and is thus able to carry information from one visit to the website to the next.
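The mechanism itself is simple; here is a minimal sketch of what such a tracker script does (the cookie name and lifetime are arbitrary):

```javascript
// Mint a random identifier on the first visit and read it back later;
// embedded on many sites, the same third-party script sees the same ID.
function getTrackingId() {
  var match = document.cookie.match(/(?:^|; )uid=([^;]*)/);
  if (match) return match[1];                     // returning visitor
  var uid = Math.random().toString(36).slice(2);  // first visit: create an ID
  document.cookie = 'uid=' + uid + '; max-age=31536000; path=/';
  return uid;
}

console.log('visitor id: ' + getTrackingId());
```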
Owing to the large-scale deployment of web bugs, user awareness has also risen: many users set their browsers to reject cookies, or delete them quickly. Other trends, like the spread of smartphones, which account for an increasing share of Web usage, have also caused problems for marketers, since cookie-based tracking works poorly on smartphones.
These changes are forcing trackers to develop novel techniques such as fingerprinting, where characteristic attributes are used for identification rather than identifiers stored on the user's side. In academia, the Panopticlick project was the first, in 2010, to show that browsers can be precisely fingerprinted using the Flash or Java plugins. Later, in 2011, Hungarian researchers pointed out that plugins are not even necessary for tracking, as the font list can be detected directly from the browser, and the list is browser-independent on both Windows and Mac OS (you can test the underlying principles on your own computer).
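The plugin-free font detection mentioned above can be illustrated with a short sketch. It assumes only standard DOM and canvas APIs; the probe string and candidate font list are made up for the example:

```typescript
// A minimal sketch of plugin-free font detection; probe string and
// candidate list are illustrative.

const PROBE = "mmmmmmmmmmlli"; // wide glyphs make width differences visible
const BASELINE_FONTS = ["monospace", "sans-serif", "serif"];

function textWidth(font: string): number {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;
  ctx.font = `72px ${font}`;
  return ctx.measureText(PROBE).width;
}

function isFontInstalled(candidate: string): boolean {
  // If rendering with `candidate, fallback` differs from the fallback
  // alone, the browser substituted the candidate: it is installed.
  return BASELINE_FONTS.some(
    (base) => textWidth(`'${candidate}', ${base}`) !== textWidth(base)
  );
}

// The list of detected fonts is one component of a device fingerprint.
const detected = ["Arial", "Calibri", "Futura", "Ubuntu"].filter(isFontInstalled);
console.log("fonts found:", detected);
```

Because the set of installed fonts comes from the operating system, the same list shows up regardless of which browser is used, which is what makes it browser-independent.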
The tracking market moved in a similar direction quite rapidly. At the beginning of 2012, one of the leading fingerprint-based trackers advertised itself to the European market with device fingerprinting, emphasizing that its method is compatible with local laws that make the use of tracking cookies difficult (as it needs no cookies at all). Today, leading fingerprinting companies offer services that go even beyond device fingerprinting: they recognize and connect devices that are likely to belong to the same person, such as smartphones, tablets and laptops.
A recent paper that appeared at the IEEE Symposium on Security & Privacy reveals more details on the penetration and functionality of these companies. One of the most interesting findings is a rather low adoption rate on top sites, namely 0.4% of the Alexa top 10,000. However, the authors still found thousands of less prominent sites using fingerprinting techniques, most of which were categorized as malicious or spam (though one might have expected regular business sites to do so as well).
Regarding their functionality, they found that tracker services use Flash and JavaScript for font detection, and use Flash for additional tasks such as obtaining system information and multi-screen resolution, or even circumventing proxy protection in order to reveal the visitor's real IP address. Some trackers go even further, using custom DLLs to gather more information from the registry (behaving a bit like spyware), while others encrypt client identifiers to put themselves in a central, unavoidable position.
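For a sense of how such attribute collection turns into an identifier, here is a minimal sketch of attribute-based fingerprinting using only standard browser APIs. The particular attribute set is an illustrative choice, not any specific tracker's method:

```typescript
// A minimal sketch of attribute-based fingerprinting; the attribute set
// is an illustrative choice.

async function deviceFingerprint(): Promise<string> {
  // Collect stable characteristics that differ widely between devices.
  const attributes = [
    navigator.userAgent,
    navigator.language,
    String(new Date().getTimezoneOffset()),
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(navigator.hardwareConcurrency),
  ].join("|");

  // Hash the concatenation into a compact identifier.
  const bytes = new TextEncoder().encode(attributes);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Unlike a cookie, the identifier is recomputed on every visit, and
// nothing needs to be stored on the user's machine.
deviceFingerprint().then((fp) => console.log("fingerprint:", fp));
```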
While fingerprinting is not widely adopted yet, and protective technologies remain seriously underdeveloped, the cat-and-mouse game seems to have begun in this area: by the time protective technologies catch up with today's state-of-the-art fingerprinting, the tracking companies will likely have moved ahead again. Researchers predict a shift in the near future from technology-based fingerprinting to biometric fingerprinting, opening new challenges for the privacy-enhancing research community.
This post originally appeared on the Tresorit Blog.
This post tells the story of FireGloves. If you don't have time to read it, the short summary is: FireGloves will not protect you from being fingerprinted. For the details, please continue reading.
FireGloves is a demonstrational Firefox extension created by a small team of researchers at the Budapest University of Technology and Economics to show that it is possible to defeat system fingerprinting (if you are new to the topic, read about fingerprinting here and here). At the time it was developed (starting at the end of 2011), there were no tools, and not even proposals, for defeating fingerprinting. We only had a few ideas about how fingerprinting techniques might work, and there were a few companies offering fingerprint-based tracking services. So we decided to create a simple tool showing that fingerprinting can be avoided with little loss of user experience. That was FireGloves.
(For the sake of completeness, I must mention that the Tor Browser Bundle developer team proposed a solution in parallel, which was later compiled into their product. It was a simple but long-standing solution: they introduced options to limit the number of fonts a website can load. I also made a suggestion to enhance their proposal.)
In April 2012, we introduced a new fingerprinting test demonstrating the capabilities of these techniques at a press event. FireGloves was also shown, demonstrating that we were looking for a solution and had no interest in exploiting user privacy. (For the curious reader: recent research makes it clear that the fingerprint-based tracking industry went in the direction we suspected. We also have a recently published book chapter whose further predictions are becoming reality.) FireGloves was successful at the time: when tested against one of the leading fingerprinting companies, it was able to circumvent tracking.
However, times changed. Our development team dissolved in September 2012, and FireGloves was no longer developed. Although we made it clear that FG is a plugin for demonstration purposes, it constantly had almost 2,000 users, and we received a few bug reports and support requests by email every month. What really prompted this post is the wide publicity FG gained in August 2013: many users adopted the plugin in the hope of gaining some protection, creating a false sense of privacy. Still, I must mention that we are grateful to the sites writing about FireGloves, since this publicity also raised awareness of a very important and unsolved issue. So: thank you! :-) [Links to some of these articles can be found on the Hungarian press coverage page.]
One of the main reasons FireGloves gained visibility is that it is the only known extension of its kind. This is because fighting fingerprinting is not easy, and many aspects of protection need to be considered, which is perhaps too much for a single extension. Secondly, the achievements of FG on fingerprinting tests can be misleading (on both the Panopticlick and Fingerprinting 2.0 tests). For instance, this video demonstrates that FG greatly decreases traceability. In fact, what it shows is that it is possible to protect ourselves against the vulnerabilities that these tests (and the fingerprinting trackers of those times) exploited. However, fingerprinting techniques have evolved since these tests were created, so to provide up-to-date protection FG would have needed constant upgrading as well.
In my opinion, it is not pointless to fight fingerprinting. On the contrary: the more users support anti-fingerprinting, the better these solutions will get. But where should you look? The best tools currently available are the Tor Browser Bundle and the JonDoFox anonymous web browsers. These are made by professionals and come as customized portable Firefox browsers, modified even at the source level, bundling the most important extensions one would need. (Beware! If you use too many extensions, you lose privacy. Check out our book chapter for details, and read about the anonymity paradox.)
Thank you for reading this far; I hope you found this post useful. Meaningful comments are welcome.
Oh, and if you are motivated to continue developing FireGloves, you'll find the source code on GitHub! Please let us know if you make any modifications! I'm sure it is worth the effort.
LiSS [LiSS website] was the first international multidisciplinary academic programme (2009-2013) to consider issues relating to everyday life in surveillance societies. LiSS was administered by COST [European Cooperation in the field of Scientific and Technical Research] and supported by the EU [EU Framework Programme], and attracted over 150 experts from 26 countries. At its Budapest event series in October 2013, Gabor Gyorgy Gulyas presented a live demonstration of cutting-edge profiling technologies (and what we can do against them). This film documents the three-day event series organized at the Central European University, including the interactive installation “Warning! We are watching you!”
The film is available in the gallery.
On March 7, 2013, sixty leading European academics from disciplines such as computer science, law, economics and business administration made their position public, expressing their support for the ongoing data protection reform in the EU and refuting the counter-arguments of the data processing industry. Since the position was publicized, further members of the academic community have joined the professional arguments and signed it. The text of the joint position can be read at www.dataprotectioneu.eu
NICTA's Network Research Group is looking for researchers and post-docs with experience in privacy-enhancing technologies, in line with its current work in this area (http://www.nicta.com.au/research/projects/trusted_networking/). Successful candidates will contribute to the research activity at NICTA.
NICTA (National ICT Australia Ltd) is Australia's Information and Communications Technology Research Centre of Excellence. Its primary goal is to build and deliver excellence in ICT research and commercial outcomes for Australia. Since NICTA was founded in 2002, it has created five startups, developed a substantial technology and intellectual property portfolio, and continues to supply new talent to the ICT industry through a NICTA-supported PhD program. With 5 laboratories around Australia and over 700 staff, NICTA is the largest organisation in Australia dedicated to ICT research.
Accountability and Associated Responsibilities:
Essential Requirements:
Salary and conditions are commensurate with experience. The positions are based at the ATP Laboratory in Sydney, Australia. Applications will be considered as they arrive and the start date is negotiable.
For further information please contact Roksana Boreli, roksana.boreli(at)nicta.com.au, or Arik Friedman, arik.friedman(at)nicta.com.au.
Trilateral Research & Consulting, a London-based consultancy specialising in research and the provision of strategic, policy and regulatory advice on new technologies, is seeking to engage a Senior Research Analyst. The candidate will be expected to work on Trilateral projects in both the public and private sectors.
Key topics within these projects will include issues of privacy, trust, surveillance, risk and security as they pertain to cutting-edge innovative developments in ICTs and related technologies.
More about the position can be read here, or write to info@trilateralresearch.com.
The European Commission's Joint Research Centre (JRC) in Italy is looking for two "Category 30" grantholders to work on privacy in a small multinational research team. The posts will last between 1 and 3 years; the conditions of employment are available here.
The JRC needs people who have recently completed a doctorate, or who have 5 years of research experience. The salary seems to be around €50k gross, which comes to perhaps €2,600 per payment (paid 14 times per year).
See the link for all JRC applications.
The two individual posts on privacy-related studies are:
1) 24/02/2012 - IPSC (Ispra) - Post-doc researcher (Cat.30): Digital Identities, Authentications and Signatures CALL REFERENCE NO.: 2011-IPSC-30-1
2) 27/02/2012 - IPSC (Ispra) - Post-doc researcher (Cat.30): Privacy and Security in a smart digital world CALL REFERENCE NO.: 2012-IPSC-30-2
Applications may be submitted through this digital portal.
A group of students and their supervisors at the Computer Security and Industrial Cryptography research group of Katholieke Universiteit Leuven, Belgium, recently released For Human Eyes Only (FHEO), a privacy-related Firefox extension that converts online messages into images in a way that makes it hard for computers to extract the messages, while genuine humans can still read them.
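The idea can be sketched in a few lines: render the message onto a canvas with per-character jitter, so OCR struggles while a human reads it easily. This is only an illustration of the general technique, not FHEO's actual code:

```typescript
// A minimal sketch of turning a message into a machine-hostile image;
// an illustration of the general idea, not FHEO's implementation.

function messageToImage(message: string): string {
  const canvas = document.createElement("canvas");
  canvas.width = 14 * message.length + 20;
  canvas.height = 50;
  const ctx = canvas.getContext("2d")!;
  ctx.fillStyle = "white";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "black";

  // Draw each character with random rotation, size, and vertical offset.
  for (let i = 0; i < message.length; i++) {
    ctx.save();
    ctx.translate(10 + 14 * i, 30 + (Math.random() * 10 - 5));
    ctx.rotate((Math.random() - 0.5) * 0.5); // roughly ±15 degrees
    ctx.font = `${18 + Math.floor(Math.random() * 6)}px serif`;
    ctx.fillText(message[i], 0, 0);
    ctx.restore();
  }
  // The data URL can replace the plain text in the page.
  return canvas.toDataURL("image/png");
}
```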
In order to evaluate its usability, the research group has set up an online survey. Readers are kindly asked to participate and to disseminate the link widely.
Completing the survey takes about 10 minutes, and it is important that participants do not get distracted by other things while completing it: the answers are timed, and the researchers don't want you to appear slower than you are...
The SPION Project has finalized and made publicly available a deliverable that provides an overview of the state of the art in research on privacy in social networks from different perspectives (technical, legal, sociological, etc.), and points to challenges and research gaps.
Abstract:
The objective of this deliverable is to (1) provide an overview of the existing literature and case descriptions of social and community uses of online Social Network Services (SNS); (2) summarize the available educational solutions and the empirical evidence on their efficiency and efficacy and the satisfaction they generate; (3) analyse the legal frameworks applicable to SNS; and (4) review confidentiality, access control and information flow, as well as feedback and awareness solutions. The deliverable also includes an analysis of how the gaps and challenges in the different disciplines represented in the project are interrelated, mapping out research gaps and potential for future interdisciplinary research on privacy and security in online Social Network Services.
The document can be downloaded from here.
The Cryptography, Security, and Privacy (CrySP) research group at the University of Waterloo is seeking applications for a postdoctoral research position in the field of privacy-enhancing technologies, preferably on the topic of privacy-preserving communications systems. This position will be held in the Cheriton School of Computer Science.
Applicants must hold a PhD in a related field, and should have a proven research record, as demonstrated by publications in top security and privacy venues (such as Oakland, CCS, USENIX Security, and NDSS) and/or top venues specific to privacy-enhancing technologies (such as PETS).
The start date of the position is negotiable. The position may be for one or two years.
Applicants should submit a CV, a research plan, two or three selected papers, and the names and contact information of three references.
For further information about the position, or to apply, please send email to Ian Goldberg <iang@cs.uwaterloo.ca> with "Postdoctoral position" in the subject line. Applications may be considered as they arrive.
For more information about the CrySP group or the Cheriton School of Computer Science, see http://crysp.uwaterloo.ca/ and http://www.cs.uwaterloo.ca/ respectively.
Profiling – collecting information about somebody from various sources, possibly by multiple means – may be a major concern for internet users, even more so since the beginning of the Web 2.0 era, which has made publicly disclosing personal information something of a social norm. This has made it easier for a profiler to look up information about someone: it is no longer necessary to use sophisticated tracking techniques like Evercookies, since one can likely learn all about somebody just by consulting publicly available Web 2.0 services, such as blogs and social networking websites. In addition, several Web 2.0 service providers prohibit the obfuscation of disclosed information in their Terms of Use, which means that defence techniques relying solely on encryption are not applicable to all services.
The Center for Advanced Security Research Darmstadt (CASED) at the Technische Universität Darmstadt has more than 20 openings in its PhD scholarship program, in several areas of IT security.
Application deadline is August 15th, 2011.
One of the areas is PRIVACY AND TRUST, which comprises informational self-determination, privacy, data protection, privacy-enhancing and transparency-enhancing technologies, trust and trust management, and reputation and recommendation systems, from a technical and legal perspective. And more, if you can think of more...
For more information please check https://scholarship.cased.de.
Email: sek@sit.cased.de
Roger Clarke of Xamax Consultancy Pty Ltd, the distinguished expert on data surveillance and privacy, is asking for "constructively negative comments" on his draft paper (available at http://www.rogerclarke.com/II/BrowserID‑1107.html), in which he summarizes his reactions to Mozilla's BrowserID proposal. The BrowserID initiative has been greeted with enthusiasm by some commentators; according to Roger Clarke's analysis, however, the design of BrowserID seriously threatens individual freedoms, and the scheme should be avoided by consumers and by service providers interested in serving consumers' needs.
Social networks and other Web 2.0 sites are becoming more and more a part of our culture; however, we are inclined to forget about – or at least ignore – their dangers. Many of us have heard or read stories in which somebody was fired because he/she friended his/her boss on Facebook, and the latter found a malicious post about the company they worked for. However, it is good to know that we are in danger even if we do not make such obvious blunders. This article describes the threats we face and the means of defence against them. One particular defence mechanism, BlogCrypt, is described in detail: a simple Firefox plugin that makes encrypting and decrypting web content as easy as it sounds.
We will publish this article on the International PET Portal & Blog the following Friday.
Abstract
Voluntary disclosure of personal information is becoming more and more widespread with the advent of Web 2.0 services. Publishing such information introduces new kinds of threats, such as further reinforcing already existing profiling techniques through the correlation of observed user activities with those publicly disclosed; but the most obvious of all is the intrinsic threat that malicious third parties collect and combine the information we publish about ourselves. In this paper, we evaluate currently existing solutions destined to address this issue, then propose a model of our own that gives a user access control over the information she has published, and analyse our implementation thereof.
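As a rough illustration of the encryption-based defence described above, here is a sketch of protecting published content with a passphrase shared among trusted readers. It uses the standard Web Crypto API; the function names, the fixed salt, and the parameter choices are illustrative, not BlogCrypt's actual implementation:

```typescript
// A minimal sketch in the spirit of BlogCrypt: encrypt post content with
// a key derived from a shared passphrase. Names and parameters are
// illustrative, not BlogCrypt's real code.

async function deriveKey(passphrase: string): Promise<CryptoKey> {
  // Derive an AES key from a passphrase shared with trusted readers.
  const salt = new TextEncoder().encode("blog-post-salt"); // fixed, for the demo only
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
    material, { name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]
  );
}

async function encryptPost(text: string, passphrase: string): Promise<string> {
  const key = await deriveKey(passphrase);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode(text)
  );
  // Publish iv + ciphertext as base64; only key holders can decrypt.
  const blob = new Uint8Array(iv.length + ciphertext.byteLength);
  blob.set(iv);
  blob.set(new Uint8Array(ciphertext), iv.length);
  return btoa(String.fromCharCode(...Array.from(blob)));
}
```

A matching decrypt step in the readers' browsers would split off the IV, derive the same key from the passphrase, and call crypto.subtle.decrypt to restore the post.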
The French National Assembly has just voted for a proposition which could have a huge impact on the personal rights of French citizens. The bill, which mandates that identity cards include a chip storing – among other things – the photo and fingerprint of the holder, was passed with 7 yeses to 4 noes. The newly born law also states that an optional secondary chip may be included in the identity card for the purpose of facilitating business transactions. But the biggest impact on the lives of French citizens is the creation of a centralised database – referred to as the ‘database of honest people’ by the proposing delegate – that will contain the name, gender, date and place of birth, address, height, eye colour, fingerprint and photograph of 45 million people.
The EU FP7 privacy projects PrimeLife and PICOS will hold their public closing events at IFIP SEC 2011, June 7-9, 2011, in Luzern.
Moreover, there will be two Privacy/PET - related keynotes at IFIP SEC 2011 given by Ann Cavoukian (IFIP TC11 Kristian Beckman Awardee 2011) and by Michael Waidner.
Call for Participation: www.sec2011.org
The Fraunhofer Institute for Systems and Innovation Research (http://www.isi.fraunhofer.de) is looking for a social science or economics graduate (male or female) for its "New Technologies" competence centre, to commence as soon as possible. He/she will work, inter alia, on the EC-funded project “Supporting fundamental rights, prIvacy and ethics in surveillance technologies” (SAPIENT). In the context of this study, he/she will have the opportunity to write a PhD thesis on the privacy and data protection-related impacts of smart surveillance technologies.
He/she will also work on further projects analysing the scientific-technical, economic, social and political aspects of the development and use of Information and Communication Technologies, including at the interface with nanotechnologies, life sciences, and environmental and energy technologies. These analyses aim to provide strategic scientific advice to decision-makers in politics and industry.
Applicants should have successfully completed an appropriate degree in social science, economics or other surveillance- and privacy-relevant disciplines (e.g., technology and culture, sociology, media studies). Very good knowledge of English and good computer skills, as well as the willingness and ability to work in interdisciplinary teams and to acquire a good command of German, are preconditions.
You will find interesting projects in the areas of policy, industry and technology, a communicative and friendly team of colleagues, and an excellent technical and administrative support infrastructure. Employment terms, remuneration and social security benefits are in accordance with the TVöD (collective wage agreement for German public sector employees).
The employment contract is initially limited to one year but can be extended by three more years.
Closing date: March 4, 2011
Reference No. ISI – 2011-9
Contact for your application:
Fraunhofer Institut für System- und Innovationsforschung
Gudrun Krenický
Breslauer Straße 48
76139 Karlsruhe
g.krenicky@isi.fraunhofer.de
Opening for a Post-Doctoral Researcher: Privacy in Pervasive Communications
EPFL (Ecole Polytechnique Federale de Lausanne), Switzerland
School of Computer and Communication Sciences
Mission: Contribute to the research efforts of the group, involving many interactions with PhD students, senior researchers, faculty members and external partners (industry or academia); some participation in teaching is also expected.
The research activities will mainly revolve around the design and the validation of protocols and algorithms to protect privacy in upcoming wireless networks, with an emphasis on cooperation aspects.
Working language: English (knowledge of French is not required).
Starting date: to be agreed upon, possibly in summer or fall 2011.
Application date: preferably before March 1st, 2011
Complete text of the opening description:
http://lca.epfl.ch/information/jph/PostDocOpening.pdf
Contact person: Prof. Jean-Pierre Hubaux
http://people.epfl.ch/jean-pierre.hubaux
The School of Informatics and Computing at Indiana University seeks a highly qualified postdoctoral scholar/researcher to conduct research in Human Factors issues associated with preserving privacy in information technology.
The successful candidate will join an interdisciplinary team of researchers investigating human-centered aspects of privacy in computing. The purpose of this project is to investigate people’s perceptions of privacy, including individuals’ understanding of how personal information is collected and accessed, and which contextual variables influence self-disclosure online.
Applicants should possess a PhD in Engineering Psychology, Human Factors, Human Centered Computing, HCI, Computer Science, Cognitive Science, Media Studies, Communication or a related field, with a strong background in usable security and privacy, research methods (e.g., experimental design), statistics, and/or mobile application development. Applicants should have experience conducting independent research, a record of communicating research results via publications and presentations, and be willing to participate in collaborative, interdisciplinary research while in residence at Indiana University.
The position is full time, in residence at Indiana University in Bloomington, Indiana, with a salary of $40,000 to $50,000 plus generous benefits and funding for project travel. The preferred start date is June 1st, 2011. The position is for 1 year and will be jointly supervised by Dr. Kelly Caine and Dr. Apu Kapadia. Review of applications will begin in early March 2011 and will continue until the position is filled.
The IU Bloomington School of Informatics and Computing is the first of its kind and among the largest in the country, with a faculty of more than 60 full-time members and more than 400 graduate students. The School has received public recognition as a “top ten program to watch” (Computerworld) thanks to its excellence and leadership in academic programs, interdisciplinary research, placement, and outreach. Located in the wooded, rolling hills of southern Indiana, Bloomington is a culturally thriving college town with a moderate cost of living. It is renowned for its top-ranked music school, performing and fine arts, historic campus, cycling traditions, active lifestyle, and natural beauty.
When applying, please submit a cover letter, CV, and a relevant sample of published or submitted work. All application materials should be sent either via email or by mailing a hard copy:
Kelly Caine
Principal Research Scientist
School of Informatics and Computing
Indiana University
caine@indiana.edu
Mailing address:
Kelly Caine, Informatics West, 901 E. 10th Street, Bloomington, IN 47408
Indiana University is an Equal Opportunity/Affirmative Action employer. Applications from women and minorities are strongly encouraged.
DON’T LET THEM KNOW ALL ABOUT YOU!
Screening, exhibition and discussion with the creators
about the values of privacy
in the Toldi Cinema, Budapest
on 28 January 2011, Friday, at 17:30
Short films created in the framework of the BROAD project:
Ádám Horgas: Nail Polish
Gábor Rohonyi: Heavy Birthday
Zoltán Gergely: Flower Power
Péter Fazakas: Dreamguy
Micah Laaker/ACLU: Pizza ordering
Creative works submitted to the call of the BROAD project:
works by Márton Csatlós, Hunor Csörgits, Zsuzsanna Fehér, Andor Tamás Gellért, Melinda Kádár, Bernadette Kémenes, Csaba Kocsis, Ildikó Köllő, Tamás Németh, Zoltán Noska and Partikat.
*
Free entrance. The language of the discussion is Hungarian.
The exhibition is open from 21 January to 4 February 2011
Further information: www.pet-portal.eu/gallery
This is the updated version of the draft paper "What Do IT Professionals Think About Surveillance?" The draft paper can be downloaded directly from here.
A Ph.D. position is available within the Computer Security and Industrial Cryptography (COSIC) group at the Department of Electrical Engineering, Katholieke Universiteit Leuven, Belgium. COSIC is a top international research group in the areas of cryptography, computer security and privacy. With more than 60 researchers of 20 different nationalities, it is one of the largest security groups worldwide. The Ph.D. candidate will be working under the supervision of Prof. Claudia Diaz and Prof. Bart Preneel on the topic of location privacy.
Location privacy has become a major concern due to increasingly widespread practices that involve the collection and analysis of location data. Smart phones connecting to the Internet, smart vehicles communicating with each other, or electronic transport cards all reveal potentially sensitive location information. These data could be easily abused to infringe individuals’ privacy through surveillance and profiling. The goal of location privacy technologies is to enable location-based services and applications while preventing the disclosure of private location data.
The research will be focused on the formalization of location privacy properties, the design and analysis of location privacy technologies with a focus on preventing traffic analysis attacks, and the study of interdisciplinary aspects of location privacy (part of the work will be done in collaboration with researchers with a background in law and ethics, so the candidate should have a broad interest in the subject beyond the purely technical aspects).
The candidate will be expected to conduct high-quality research, publish in top conferences and journals, and obtain her/his Ph.D. degree in four years.
Profile
• Master's degree in Computer Science / Engineering
• Very good academic record
• Good to excellent knowledge of English, both oral and written; good writing skills
• Self-motivated, resourceful and creative problem solver
• Having a background in information security is desirable
• Having publications is desirable
Salary and benefits
• Starting date: as soon as possible (the vacancy will stay open until a suitable candidate is found)
• Duration scholarship: 4 years
• Salary: 1,750 euros/month net (health insurance is also provided)
Application
Your application must contain: a cover letter introducing yourself and providing a strong motivation for doing scholarly research in the field of location privacy; your CV; copies of your degrees; information about your education including a list of attended courses with grades and dates; reference letters or contact addresses of reference persons; a brief summary of your master’s thesis; and if applicable copies of publications or proofs of other achievements. In addition, the application must include an outline of ideas and perspectives that the applicant wishes to develop in his/her PhD project (no more than one page).
Contact: claudia.diaz [at] esat.kuleuven.be
Kancellar.hu and Eötvös Loránd University will jointly organize the 24-hour hacker contest for the fourth time. This year the contest will be international, so two-member student teams from all over the world are welcome to attend. There is a single condition: teams must attend the contest in person at 2 p.m. on December 3, 2010, in the Southern Building at the Lágymányos Campus of Eötvös Loránd University. Once again the prize is HUF 1,000,000, or approximately EUR 3,700!