No search results with Fulltext and query length

Issue: I am using the “Fulltext” search type and noticed that the standard search returns no results for any query of 3 characters or less (advanced search does return results). This is a problem for my store because I carry several brands and products with 2- or 3-letter names that people search for, and I don’t want to set up redirects or synonyms for every one of them.

Is there any way to fix this other than switching to like/combined (which do show results for queries <= 3 characters)? It is obvious the search is not even being performed; it just immediately comes back saying there are no results. But I can’t seem to find the condition in the code where it checks the query length and stops the search for the fulltext type.

 

Solution: MySQL’s default minimum word length for a fulltext search index is 4 characters. To lower it to 3, add the following to your my.cnf and restart MySQL.

 ft_min_word_len=3 

Then rebuild the fulltext index on the ‘catalogsearch_fulltext’ table. You can do this with the following command:

 repair table catalogsearch_fulltext quick; 
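If you want to confirm that the new setting took effect after the restart, you can check it from the MySQL client (a quick, optional sanity check):

 SHOW VARIABLES LIKE 'ft_min_word_len'; 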

Then rebuild your search index from System -> Cache Management in the Magento Admin.

…and you are good to go

Mixed Content Blocking Enabled for Firefox and Chrome

For the last few months, I’ve been working on the Mixed Content Blocker for Firefox and Chrome. Mixed Active Content is now blocked by default for Firefox and Chrome!

What is Mixed Content?

When a user visits a page served over HTTP, their connection is open for eavesdropping and man-in-the-middle (MITM) attacks. When a user visits a page served over HTTPS, their connection with the web server is authenticated and encrypted with SSL and hence safeguarded from eavesdroppers and MITM attacks.

However, if an HTTPS page includes HTTP content, the HTTP portion can be read or modified by attackers, even though the main page is served over HTTPS.  When an HTTPS page has HTTP content, we call that content “mixed”. The webpage that the user is visiting is only partially encrypted, since some of the content is retrieved unencrypted over HTTP.  The Mixed Content Blocker blocks certain HTTP requests on HTTPS pages.

What do I mean by “certain HTTP requests”?  Why wouldn’t the Mixed Content Blocker just block all HTTP requests?  To answer this question, I will first explain how the browser security community divides mixed content into two categories: Mixed Active Content and Mixed Passive Content.

Mixed Content Classifications

Mixed Passive Content (a.k.a. Mixed Display Content).

Mixed Passive Content is HTTP Content on an HTTPS website that cannot alter the Document Object Model (DOM) of the webpage.  More simply stated, the HTTP content has a limited effect on the HTTPS website.  For example, an attacker could replace an image served over HTTP with an inappropriate image or a misleading message to the user. However, the attacker would not have the ability to affect the rest of the webpage, only the section of the page where the image is loaded.

An attacker could infer information about the user’s browsing activities by watching which images are served to the user.  Since certain images may only appear on a specific webpage, a request for an image could tell the attacker what webpage the user is visiting. Moreover, the attacker can observe the HTTP headers sent with the image, including the user agent string and any cookies associated with the domain the image is served from.  If the image is served from the same domain as the main webpage, then the protection HTTPS provides to the user’s account becomes useless, since an attacker can read the user’s cookies from image request headers[1].
Examples of Passive Content are images, audio, and video loads.  Requests made by objects have also fallen into this category for now; the reasons for this are discussed further in the Appendix.

Mixed Active Content (a.k.a. Mixed Script Content)

Mixed Active Content is content that has access to and can affect all or parts of the Document Object Model (DOM) of an HTTPS page. This type of mixed content can alter the behavior of an HTTPS page and potentially steal sensitive data from the user. Hence, in addition to the risks already described for Mixed Passive Content above, Mixed Active Content exposes users to a number of additional attack vectors.

A MITM attacker can intercept requests for HTTP active content. The attacker can then re-write the response to include malicious JavaScript code. Malicious script can steal the user’s credentials, acquire sensitive data about the user, or attempt to install malware on the user’s system (by leveraging vulnerable plugins the user has installed, for example).
Examples of Active Content are JavaScript, CSS, objects, xhr requests, iframes, and fonts.
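As a rough illustration (the URLs here are hypothetical), an HTTPS page could include both kinds of mixed content like this; the image would be treated as Mixed Passive Content, while the script would be Mixed Active Content and blocked:

<!-- this page is served from https://example.com/page.html -->
<img src="http://example.com/banner.jpg" alt="banner">       <!-- Mixed Passive Content -->
<script src="http://example.com/widget.js"></script>         <!-- Mixed Active Content, blocked -->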

What will the Mixed Content Blocker block?

The Mixed Content Blocker will block Mixed Active Content requests for Firefox and Chrome.  This reduces the threat to the user, but does not eliminate it completely because Mixed Passive Content is still permitted.  Users can decide to block Mixed Passive Content as well by following a couple simple steps[2].

Why are we reducing the threat instead of eliminating the threat?  Unfortunately, the web is not ready for Firefox and Chrome to block Mixed Passive Content.  Mixed Passive Content is still common on the web.  For example, many HTTPS webpages include HTTP images.  Too many pages would break if we blocked Mixed Passive Content (ex: https://youtube.com).  Hence, Firefox and Chrome would alert users too often and contribute to security warning fatigue.
Moreover, blocking Mixed Passive Content could cause considerable user experience issues for users with low bandwidth connections.  To avoid generating a browser security warning, websites will begin removing Mixed Passive Content from their HTTPS sites by replacing HTTP images and videos with their HTTPS equivalent versions.  When low bandwidth users visit the HTTPS site, all image loads and video streams would be encrypted and there would be considerable lag in the page’s load time and the time it takes for videos to buffer.  With Mixed Active Content, bandwidth considerations are not as big of an issue since Mixed Active Content loads (ex: scripts, stylesheets) are usually a few KB, compared to Mixed Passive Content loads which often contain multiple MBs of data.
The risk involved with Mixed Content (active or passive) also depends on the type of website the user is visiting and how sensitive the data exposed to that site may be. The webpage may have public data visible to the world, or it may have private data that is only visible when authenticated. If an HTTPS webpage is public and doesn’t have any sensitive data, the use of Mixed Content on that site still provides the attacker with the opportunity to redirect requests to other HTTP URLs and steal HTTP cookies from those sites.

Mixed Content Blocker UI

Designing UI for security is always tricky.  How do you inform the user about a potential security threat without annoying them and interrupting their task?

Larissa Co (@lyco1) from Mozilla’s User Experience team aimed to solve this problem.  She created a Security UX Framework with a set of core principles that drove the UX design for the Mixed Content Blocker.  If you’re interested in learning more about this process, I encourage you to check out the Mixed Content Design Specification and Larissa’s presentation on Designing Meaningful and Usable Security Experiences.

So what does the UI look like?  If a user visits an HTTPS page with Mixed Active Content, they will see the following in the location bar:

Shield Icon Doorhanger shown on HTTPS page with Mixed Active Content

Clicking on the shield, they will see options to Learn More, Keep Blocking, or Disable Protection on This Page:

Shield Doorhanger Drop Down UI

If a user decides to “Keep Blocking”, the notification in the location bar will disappear:

If the user decides to Keep Blocking, the shield will disappear.

On the other hand, if a user decides to “Disable Protection on This Page”, all mixed content will load on the HTTPS page and the Lock icon will be replaced with a Yellow Warning Triangle:

Yellow Warning Triangle appears after the user Disables Protection

If the user is unsure what to do, they can opt to learn more by clicking on the “Learn More” link. The user can also select “Not Now” or the “x” at the top of the drop down box to defer their decision until later.

If a user visits an HTTPS page with Mixed Passive Content, Firefox and Chrome will not block the passive content by default (see What will the Mixed Content Blocker block?).  But, since Mixed Passive Content does exist on the page, it is not fully encrypted and the user will not see the lock icon in the location bar:

A page with Mixed Passive Content will show the Globe icon instead of the Lock icon.

Mixed Content Frames

Note that frames are classified as Mixed Active Content.  This has been a source of debate and browser vendors haven’t quite settled on whether mixed content frames should be considered active or passive.  Firefox and Internet Explorer consider frames Mixed Active Content, while Chrome considers frames Mixed Passive Content.

When trying to determine whether a load is passive or active, I ask myself “can the content affect the DOM of the page?”.  With frames, this gets a little tricky.  Technically, an HTTP frame cannot affect the DOM of its HTTPS page and hence could fall into the Mixed Passive Content category.
When we dig further, however, we find reasons to push frames into the Mixed Active Content category.  A frame has the ability to navigate the top level page and redirect a user to a malicious site.  Frames can also trick users into disclosing sensitive information to attackers.  For example, assume a user is on an HTTPS page that embeds an HTTP frame.  An attacker can MITM the frame and replace its content with a form.  The form may ask the user to login or create an account. Most users are oblivious to the concept of framing pages and have no idea that it is the HTTP frame that contains the form and not the HTTPS website. Assuming they are on the HTTPS encrypted page, the user enters their personal information.  This information is then sent to the attacker without the user’s knowledge.

Remaining Edge Cases

Many edge cases were found while developing the Mixed Content Blocker.  Some of these edge cases have been resolved, some are pending development, and some are open questions that require further discussion.

We did not want to wait until all possible issues were resolved before turning Mixed Active Content blocking on by default for our users.  But at the same time, if we turned the feature on with too many false positives, we would be unnecessarily alerting users and contributing to security warning fatigue.  (False positives are cases where the Mixed Content Blocker mistakenly blocks content that should have been permitted.)  Hence, I worked to eliminate all false positive issues that I was aware of before turning on the Mixed Content Blocker.
On the other hand, there are still a number of false negatives that remain open. This means that there are certain cases where the Mixed Content Blocker does not block content that should have been blocked.  We still decided to turn the feature on because we believe we should protect our users as soon as possible, even if our solution is not 100% perfect yet.  The false negatives are valid issues and affect the safety of our users.  Engineering solutions for these edge cases is important (and is next on my list), but should not prevent us from protecting users from mixed content we can identify and can block for users today.
For developers trying to secure their websites by removing mixed content, these false negative edge cases could prove problematic and cause extra work.  The last thing a developer wants to do is attempt to remove mixed content on their site for Firefox 23, and then have to do this again in Firefox 24 because of an edge case that was fixed and that the developer wasn’t aware of the first time around.  In an attempt to help with this problem, I have added an Appendix to this blog post that describes all the open edge cases and open questions, with reference links where developers can learn more about the progress in resolving these known issues.

Footnotes

[1] Unless the authentication cookies are flagged with the Secure attribute, which prevents the browser from sending them over non-HTTPS requests.
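For example, a cookie set with a header like the following (the name and value are just placeholders) carries the Secure attribute and will only be sent over HTTPS connections:

Set-Cookie: sessionid=abc123; Secure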

[2] To block Mixed Passive Content, open a window or tab in Firefox and enter about:config.  You will get to a page that asks you to promise to be careful.  Promise you will be, and then change the value of security.mixed_content.block_display_content to true by double clicking it.

[3]  For Firefox and Chrome, Mixed Active Content is blocked by default.  If you are using a Firefox version between 18 and 22, you can block Mixed Active Content by opening a window or tab in Firefox and entering about:config.  You will get to a page that asks you to promise to be careful.  Promise you will be, and then change the value of security.mixed_content.block_active_content to true by double clicking it.

Appendix – Edge Cases Described in Detail

Note that this section is highly technical and has a lot of gory details, so feel free to skip over it unless you are interested, want a sneak peek at forthcoming Mixed Content Blocker changes that may affect your site, and/or are a browser security junkie like me :)

  1. Redirects
    If an HTTPS content load responds with a 302 to an HTTP destination, the Mixed Content Blocker in Firefox will not detect or block the mixed content.  This is because of the way that Gecko’s Content Policies work (or don’t work) with redirects.  The work to fix this edge case can be found in Bug 418354 and Bug 456957.
  2. Session Restore & document.write
    Assume an HTTPS page loads an HTTP script that invokes a document.write that replaces the current page’s content.  If the browser is shut down and later the session is restored, the user will see the content from the document.write that replaced the original webpage.  This would be okay, except that instead of showing the yellow warning triangle, Firefox 23 shows a lock.  This is inaccurate, because the page’s new content was created by an HTTP script and hence cannot be considered fully encrypted.  The work to fix this issue can be found in Bug 815345.
  3. Object Subrequests
    Assume that an HTTPS page loads an HTTPS object in a plugin.  That object may then request further resources through the plugin.  The requests made by the plugin are considered the object’s subrequests.  Since the requests are made by a plugin and not by the browser, it is very difficult for the browser to determine whether the HTTP subrequests should be considered Mixed Active or Mixed Passive.  Without help from plugin vendors, browsers cannot accurately determine this classification.  To prevent false positives and security warning fatigue, Firefox (and Chrome) have classified HTTP object subrequests as Mixed Passive Content.  This means that we do have false negatives, where the content is actually active and should be blocked, but isn’t.
    The solution to these false negatives is still under discussion.  Take a look at Bug 836352 and chime in if you have some suggestions!
  4. Relying on HSTS to prevent Mixed Content
    Websites can specify an HSTS header that tells browsers to only connect to them over a secure connection.  Assume https://example.com sets this header (and for simplicity’s sake, assume example.com is not on the HSTS preload list).  A developer, relying on HSTS, includes HTTP content from example.com on https://foo.com.
    Firefox will convert the http://example.com link to an https://example.com link before making the network request.  Hence, technically, the user’s security is never affected.
    Currently, the Mixed Content Blocker will detect the http://example.com link before it is converted to HTTPS by HSTS and classify the content as mixed content.  I believe this is fine.  Relying on HSTS to protect websites from mixed content loads is bad practice, for the following reasons.

    • If this is the first time the user has loaded content from example.com, the content will be loaded over HTTP, since the browser has not yet received an HSTS header from example.com.
    • For browsers that do not have HSTS implemented (ex: Internet Explorer), https://foo.com will have mixed content, since the request for content from http://example.com is never converted to an HTTPS request.

    Perhaps you disagree?  Express your thoughts in Bug 838395

  5. Mixed Content in Framed Pages
    Assume https://unimportant-site.com includes an iframe to https://bank.com, and https://bank.com contains Mixed Active Content that Firefox blocks.  The user has a choice to “Disable Protection on This Page” and load the Mixed Active Content on https://bank.com.  As we mentioned earlier, most users don’t know what frames are.  The user sees that they are on https://unimportant-site.com and can decide to load the mixed content on https://unimportant-site.com by clicking “Disable Protection on This Page”.  To the user, “This Page” is https://unimportant-site.com, but in actuality, the result is that protection is disabled on https://bank.com.
    Bug 826599 discusses whether users should even have an option to disable protection on HTTPS frames.  The bug is to remove the UI to Disable Protection if the mixed content is coming from an HTTPS frame with a different domain than the top level domain.  What do you think about this?

In addition to the items listed above, there are also many other issues remaining to improve the Mixed Content Blocker.  You can look here for a list of items and corresponding bug numbers.

Analysis of the JavaScript Module pattern: In-Depth

The module pattern is a common JavaScript coding pattern. It’s generally well understood, but there are a number of advanced uses that have not gotten a lot of attention. In this article, I’ll review the basics and cover some truly remarkable advanced topics, including one which I think is original.

The Basics

We’ll start out with a simple overview of the module pattern, which has been well-known since Eric Miraglia (of YUI) first blogged about it three years ago. If you’re already familiar with the module pattern, feel free to skip ahead to “Advanced Patterns”.

Anonymous Closures

This is the fundamental construct that makes it all possible, and really is the single best feature of JavaScript. We’ll simply create an anonymous function, and execute it immediately. All of the code that runs inside the function lives in a closure, which provides privacy and state throughout the lifetime of our application.

(function () {
	// ... all vars and functions are in this scope only
	// still maintains access to all globals
}());

Notice the () around the anonymous function. This is required by the language, since statements that begin with the token function are always considered to be function declarations. Including () creates a function expression instead.

Global Import

JavaScript has a feature known as implied globals. Whenever a name is used, the interpreter walks the scope chain backwards looking for a var statement for that name. If none is found, that variable is assumed to be global. If it’s used in an assignment, the global is created if it doesn’t already exist. This means that using or creating global variables in an anonymous closure is easy. Unfortunately, this leads to hard-to-manage code, as it’s not obvious (to humans) which variables are global in a given file.

Luckily, our anonymous function provides an easy alternative. By passing globals as parameters to our anonymous function, we import them into our code, which is both clearer and faster than implied globals. Here’s an example:

(function ($, YAHOO) {
	// now have access to globals jQuery (as $) and YAHOO in this code
}(jQuery, YAHOO));

Module Export

Sometimes you don’t just want to use globals, but you want to declare them. We can easily do this by exporting them, using the anonymous function’s return value. Doing so will complete the basic module pattern, so here’s a complete example:

var MODULE = (function () {
	var my = {},
		privateVariable = 1;

	function privateMethod() {
		// ...
	}

	my.moduleProperty = 1;
	my.moduleMethod = function () {
		// ...
	};

	return my;
}());

Notice that we’ve declared a global module named MODULE, with two public properties: a method named MODULE.moduleMethod and a variable named MODULE.moduleProperty. In addition, it maintains private internal state using the closure of the anonymous function. Also, we can easily import needed globals, using the pattern we learned above.
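As a quick usage sketch (assuming the MODULE definition above), the public members are reachable from anywhere, while the private ones are not:

MODULE.moduleMethod();               // callable anywhere: public method
console.log(MODULE.moduleProperty);  // 1: public property
console.log(MODULE.privateVariable); // undefined: private state never leaves the closure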

Advanced Patterns

While the above is enough for many uses, we can take this pattern further and create some very powerful, extensible constructs. Let’s work through them one by one, continuing with our module named MODULE.

Augmentation

One limitation of the module pattern so far is that the entire module must be in one file. Anyone who has worked in a large code-base understands the value of splitting among multiple files. Luckily, we have a nice solution to augment modules. First, we import the module, then we add properties, then we export it. Here’s an example, augmenting our MODULE from above:

var MODULE = (function (my) {
	my.anotherMethod = function () {
		// added method...
	};

	return my;
}(MODULE));

We use the var keyword again for consistency, even though it’s not necessary. After this code has run, our module will have gained a new public method named MODULE.anotherMethod. This augmentation file will also maintain its own private internal state and imports.

Loose Augmentation

While our example above requires our initial module creation to be first, and the augmentation to happen second, that isn’t always necessary. One of the best things a JavaScript application can do for performance is to load scripts asynchronously. We can create flexible multi-part modules that can load themselves in any order with loose augmentation. Each file should have the following structure:

var MODULE = (function (my) {
	// add capabilities...

	return my;
}(MODULE || {}));

In this pattern, the var statement is always necessary. Note that the import will create the module if it does not already exist. This means you can use a tool like LABjs and load all of your module files in parallel, without needing to block.

Tight Augmentation

While loose augmentation is great, it does place some limitations on your module. Most importantly, you cannot override module properties safely. You also cannot use module properties from other files during initialization (but you can at run-time after initialization). Tight augmentation implies a set loading order, but allows overrides. Here is a simple example (augmenting our original MODULE):

var MODULE = (function (my) {
	var old_moduleMethod = my.moduleMethod;

	my.moduleMethod = function () {
		// method override, has access to old through old_moduleMethod...
	};

	return my;
}(MODULE));

Here we’ve overridden MODULE.moduleMethod, but kept a reference to the original method in case it’s needed.

Cloning and Inheritance

var MODULE_TWO = (function (old) {
	var my = {},
		key;

	for (key in old) {
		if (old.hasOwnProperty(key)) {
			my[key] = old[key];
		}
	}

	var super_moduleMethod = old.moduleMethod;
	my.moduleMethod = function () {
		// override method on the clone, access to super through super_moduleMethod
	};

	return my;
}(MODULE));

This pattern is perhaps the least flexible option. It does allow some neat compositions, but that comes at the expense of flexibility. As I’ve written it, properties which are objects or functions will not be duplicated; they will exist as one object with two references. Changing one will change the other. This could be fixed for objects with a recursive cloning process, but probably cannot be fixed for functions, except perhaps with eval. Nevertheless, I’ve included it for completeness.

Cross-File Private State

One severe limitation of splitting a module across multiple files is that each file maintains its own private state, and does not get access to the private state of the other files. This can be fixed. Here is an example of a loosely augmented module that will maintain private state across all augmentations:

var MODULE = (function (my) {
	var _private = my._private = my._private || {},
		_seal = my._seal = my._seal || function () {
			delete my._private;
			delete my._seal;
			delete my._unseal;
		},
		_unseal = my._unseal = my._unseal || function () {
			my._private = _private;
			my._seal = _seal;
			my._unseal = _unseal;
		};

	// permanent access to _private, _seal, and _unseal

	return my;
}(MODULE || {}));

Any file can set properties on its local variable _private, and it will be immediately available to the others. Once this module has loaded completely, the application should call MODULE._seal(), which will prevent external access to the internal _private. If this module were to be augmented again, further in the application’s lifetime, one of the internal methods, in any file, can call _unseal() before loading the new file, and call _seal() again after it has been executed. This pattern occurred to me today while I was at work; I have not seen it anywhere else. I think this is a very useful pattern, and it would have been worth writing about all on its own.
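For example, here is a minimal sketch (the property names are illustrative) of a second file that reads and writes the shared private state, followed by the seal call the application makes once every file has loaded:

var MODULE = (function (my) {
	var _private = my._private = my._private || {};

	// this file sees the same private object as every other augmentation
	_private.sharedCounter = (_private.sharedCounter || 0) + 1;

	my.reportCounter = function () {
		// still works after _seal(), because the closure keeps its own reference
		return _private.sharedCounter;
	};

	return my;
}(MODULE || {}));

// after all module files have loaded:
MODULE._seal();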

Sub-modules

Our final advanced pattern is actually the simplest. There are many good cases for creating sub-modules. It is just like creating regular modules:

MODULE.sub = (function () {
	var my = {};
	// ...

	return my;
}());

While this may have been obvious, I thought it worth including. Sub-modules have all the advanced capabilities of normal modules, including augmentation and private state.

Conclusions

Most of the advanced patterns can be combined with each other to create more useful patterns. If I had to advocate a route to take in designing a complex application, I’d combine loose augmentation, private state, and sub-modules.

I haven’t touched on performance here at all, but I’d like to put in one quick note: the module pattern is good for performance. It minifies really well, which makes downloading the code faster. Using loose augmentation allows easy non-blocking parallel downloads, which also speeds up download times. Initialization time is probably a bit slower than other methods, but worth the trade-off. Run-time performance should suffer no penalties so long as globals are imported correctly, and will probably gain speed in sub-modules by shortening the reference chain with local variables.

To close, here’s an example of a sub-module that loads itself dynamically to its parent (creating it if it does not exist). I’ve left out private state for brevity, but including it would be simple. This code pattern allows an entire complex hierarchical code-base to be loaded completely in parallel with itself, sub-modules and all.

var UTIL = (function (parent, $) {
	var my = parent.ajax = parent.ajax || {};

	my.get = function (url, params, callback) {
		// ok, so I'm cheating a bit🙂
		return $.getJSON(url, params, callback);
	};

	// etc...

	return parent;
}(UTIL || {}, jQuery));

I hope this has been useful, and please leave a comment to share your thoughts. Now, go forth and write better, more modular JavaScript!

This post was featured on Ajaxian.com, and there is a little bit more discussion going on there as well, which is worth reading in addition to the comments below.

Image Data URIs with PHP

If you troll page markup like me, you’ve no doubt seen the use of data URIs within image src attributes. Instead of providing a traditional address to the image, the image file data is base64-encoded and stuffed within the src attribute. Doing so saves a network request for each image, and if you’re the paranoid type, can prevent exposure of directory paths. Since creating data URIs is incredibly easy, let me show you how to do it with PHP.

Start by reading in the image using file_get_contents (or any other PHP method you’d like), then convert the image to base64 using base64_encode:

// A few settings
$image = 'cricci.jpg';

// Read image path, convert to base64 encoding
$imageData = base64_encode(file_get_contents($image));

// Format the image SRC: data:{mime};base64,{data}
$src = 'data:'.mime_content_type($image).';base64,'.$imageData;

// Output a sample image
echo '<img alt="" src="'. $src .'" />';

With the image data in base64 format, the last step is placing that data within the data URI format, including the image’s MIME type. This would make for a good function:

function getDataURI($image, $mime = '') {
  return 'data:'.(function_exists('mime_content_type') ? mime_content_type($image) : $mime).';base64,'.base64_encode(file_get_contents($image));
}
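Here’s a quick usage example of the function above (assuming the cricci.jpg file from the earlier snippet exists; the second argument is just a fallback MIME type for when the fileinfo extension is unavailable):

// Output an image tag whose src is a data URI instead of a file path
echo '<img alt="" src="'. getDataURI('cricci.jpg', 'image/jpeg') .'" />';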

The thing to remember is that IE7 and lower don’t support data URIs, so keep that in mind if you’re considering switching from image paths to data URIs!

Load data to a grid from two different server calls/stores

Step 1: Adding records from Store 1 to Store 2.

var store2 = new Ext.data.Store({
  ...
});
var store1 = new Ext.data.Store({
  ...
  listeners: {
    load: function(store) {
      store2.add(store.getRange()); // append all of store1's loaded records to store2
    }
  }
});

Step 2: Using records from Store 2 with Store 1.
For example, say the first data column comes from Store 1 and the data from Store 2 fills columns 2 and 3. If the ‘other’ columns are just ‘lookup’ data, you can use a renderer that finds the data in the second store, e.g.:

var store1 = new Ext.data.Store({
	...,
	fields: ['field1', 'field2']
});
 
var store2 = new Ext.data.Store({
	...
	id: 'field2',
	fields: ['field2', 'fieldA', 'fieldB']
});
 
var renderA = function(value) {
	var rec = store2.getById(value);
	return rec ? rec.get('fieldA') : '';
}
var renderB = function(value) {
	var rec = store2.getById(value);
	return rec ? rec.get('fieldB') : '';
}
 
var columns = [
	{header: 'Field 1', dataIndex: 'field1'},
	{header: 'Field A', dataIndex: 'field2', renderer: renderA},
	{header: 'Field B', dataIndex: 'field2', renderer: renderB}
];
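To tie it together, here is a rough sketch (ExtJS 3.x config names; the exact options depend on your setup) of a grid that uses store1 together with the lookup renderers defined above:

var grid = new Ext.grid.GridPanel({
	store: store1,
	columns: columns,
	width: 500,
	height: 300,
	renderTo: Ext.getBody()
});

// load the lookup store first so the renderers can resolve values, then the grid's store
store2.load();
store1.load();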

Write/Read data to .plist file.

I struggled with this feature for a very long time, and I was frustrated that nobody had posted complete code – just a few isolated lines. I hope this will be helpful.

Why use plists? Because they’re simple, effective, and, compared to NSUserDefaults, very fast.

The main reason many people fail at this is that they try to save data into the bundle directory. That can’t be done; the bundle is read-only and treated as a single sealed unit. So the best option is to save your data into the Documents directory.

First of all, add a plist to your project in Xcode, for example “data.plist”.

Next, look at this code, which creates the path to the plist in the Documents directory:

NSError *error;
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); //1
NSString *documentsDirectory = [paths objectAtIndex:0]; //2
NSString *path = [documentsDirectory stringByAppendingPathComponent:@"data.plist"]; //3

NSFileManager *fileManager = [NSFileManager defaultManager];

if (![fileManager fileExistsAtPath: path]) //4
{
	NSString *bundle = [[NSBundle mainBundle] pathForResource:@"data" ofType:@"plist"]; //5

	[fileManager copyItemAtPath:bundle toPath: path error:&error]; //6
}

1) Create the list of search paths.
2) Get the path to your Documents directory from that list.
3) Build the full file path.
4) Check whether the file already exists in the Documents directory.
5) Get the path to the plist you added to the bundle earlier (in Xcode).
6) Copy that plist into your Documents directory.

OK, next, read the data:

NSMutableDictionary *savedStock = [[NSMutableDictionary alloc] initWithContentsOfFile: path];

//load from savedStock example int value
int value;
value = [[savedStock objectForKey:@"value"] intValue];

[savedStock release];

And write data:

NSMutableDictionary *data = [[NSMutableDictionary alloc] initWithContentsOfFile: path];

//here add elements to data file and write data to file
int value = 5;

[data setObject:[NSNumber numberWithInt:value] forKey:@"value"];

[data writeToFile: path atomically:YES];
[data release];

Remember about two things:

1) You must create a plist file in your Xcode project.
2) To optimize your app, it’s better to save all the data when the application (or, for example, a view) is closing, for instance in applicationWillTerminate: (a minimal sketch follows below). But if you are storing really big data, it sometimes can’t be saved in this method, because the save takes too long while the app is closing and the system will terminate it.
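Here is a minimal sketch of that idea for the app delegate (it simply repeats the path and write code shown above; the dictionary contents are illustrative):

- (void)applicationWillTerminate:(UIApplication *)application
{
	NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
	NSString *documentsDirectory = [paths objectAtIndex:0];
	NSString *path = [documentsDirectory stringByAppendingPathComponent:@"data.plist"];

	// load the current contents, update them, and write them back
	NSMutableDictionary *data = [[NSMutableDictionary alloc] initWithContentsOfFile:path];
	[data setObject:[NSNumber numberWithInt:5] forKey:@"value"];
	[data writeToFile:path atomically:YES];
	[data release];
}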

I hope it will help you.