If you encounter the error "Cannot truncate a table referenced in a foreign key constraint", temporarily turn off foreign key checks like the following:
SET FOREIGN_KEY_CHECKS=0;
TRUNCATE table;
SET FOREIGN_KEY_CHECKS=1;
Thursday, March 28, 2013
Wednesday, March 6, 2013
MySQL - how to see the bit(1) value
Say you have a column active bit(1).
In the MySQL command-line client, type
select active+0 from table;
You should be able to see it.
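For context, the raw value looks blank because the client prints the stored byte 0x01 as a non-printable character, while adding 0 coerces it to a number. A small illustrative sketch of the same effect in plain Java (purely for illustration, no database involved):

```java
public class BitDemo {
    public static void main(String[] args) {
        char raw = (char) 1; // what a bit(1) value of 1 looks like when rendered as a character
        System.out.println("[" + raw + "]");       // the 0x01 byte is an invisible control character
        System.out.println("[" + (raw + 0) + "]"); // adding 0 forces a numeric cast, like active+0 -> [1]
    }
}
```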
Tuesday, March 5, 2013
SpringMVC - SEVERE: Error configuring application listener of class org.springframework.web.context.ContextLoaderListener
Make sure you have added the following dependency if you are using Maven:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>3.2.1.RELEASE</version>
</dependency>
If you are not using Maven, make sure the spring-web JAR is in your lib folder and also on the build path (right-click on your project -> Properties -> Java Build Path -> Add JARs).
Monday, March 4, 2013
Nginx client intended to send too large body
If you see the message "client intended to send too large body" in the nginx error log (/var/log/nginx/error.log), a request body exceeded the allowed maximum and you need to raise client_max_body_size.
You can set client_max_body_size in the context of http, server, location.
For example, in your .conf file:
server {
listen 80;
server_name domain.com;
access_log /vol/logs/nginx/web_portal.access.log;
location / {
if ($http_x_forwarded_proto != 'https') {
rewrite ^ https://$host$request_uri? permanent;
}
proxy_pass http://domain.com;
proxy_next_upstream error timeout invalid_header http_500;
proxy_connect_timeout 1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_intercept_errors on;
error_page 502 503 504 =200 http://domain.com;
client_max_body_size 650M;
}
}
Tuesday, February 26, 2013
Git Remove a file from being tracked but not from source
Sometimes, you don't want to commit files like .project, .classpath, .DS_store. You may have already added all these in .gitignore but those files were previously commited.
You can do
git rm --cached <file>
Your file will remain in your source directory.
Monday, February 25, 2013
AWS Java - Securing S3 content using query string authentication
Amazon S3 is a highly available and durable hosting environment that lets you serve websites, images, and large files. Sometimes you may want to secure your content so that only you or your authenticated users can access it. This becomes more important when it is paid content.
This post is about using query string authentication to make content available for a specified period of time only.
Specs:
- Java 1.7
- Eclipse Juno
Before you begin, make sure you have all the AWS Eclipse tools ready. Read Using Java AWS SDK to upload files to Amazon S3 for how to install the AWS SDK tool and a basic guide on how to upload, delete and retrieve files on S3.
Signing the request will require the following structure:
Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature;
Signature = Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) );
StringToSign = HTTP-Verb + "\n" +
    Content-MD5 + "\n" +
    Content-Type + "\n" +
    Date + "\n" +
    CanonicalizedAmzHeaders +
    CanonicalizedResource;
CanonicalizedResource = [ "/" + Bucket ] +
    [ sub-resource, if present. For example "?acl", "?location", "?logging", or "?torrent" ];
CanonicalizedAmzHeaders = the sorted, lowercased x-amz-* headers (empty if there are none)
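As a concrete sketch of the Signature line above, this is roughly how the HMAC-SHA1 step can be computed in plain Java. The secret key and StringToSign below are made-up placeholders (note: java.util.Base64 requires Java 8+; on Java 7 you could use javax.xml.bind.DatatypeConverter instead):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class S3Signer {
    // Computes Signature = Base64( HMAC-SHA1( secretKey, stringToSign ) )
    static String sign(String secretKey, String stringToSign) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        byte[] raw = mac.doFinal(stringToSign.getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder values -- substitute your real secret key and request fields
        String stringToSign = "GET\n\n\n1175139620\n/mybucket/myObjectKey";
        String signature = sign("mySecretAccessKey", stringToSign);
        // HMAC-SHA1 output is 20 bytes, so its Base64 form is always 28 characters
        System.out.println(signature.length()); // 28
        System.out.println(signature);
    }
}
```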
Assuming you have either read the post above or implemented your own upload, upload a file to your Amazon S3 account.
In the AWS Management Console, set the file's ACL permissions to your administrative account only (by default, they already are unless you have programmatically changed the ACL).
We will implement the following function called getS3Url().
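The body of getS3Url() did not survive in this post. Below is a reconstruction using only the JDK, following the query-string signing structure described earlier; the credentials, bucket, and key are placeholders, and the AWS SDK's own generatePresignedUrl() can be used to the same effect:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.util.Base64;

public class S3UrlSigner {
    // Builds a query-string-authenticated S3 URL that expires one hour from now.
    // accessKey/secretKey/bucket/key are placeholders -- use your own values.
    static String getS3Url(String accessKey, String secretKey,
                           String bucket, String key) throws Exception {
        long expires = System.currentTimeMillis() / 1000L + 3600; // one hour later
        // StringToSign for query string authentication uses Expires instead of Date
        String stringToSign = "GET\n\n\n" + expires + "\n/" + bucket + "/" + key;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes("UTF-8")));
        return "https://" + bucket + ".s3.amazonaws.com/" + key
                + "?AWSAccessKeyId=" + accessKey
                + "&Expires=" + expires
                + "&Signature=" + URLEncoder.encode(signature, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getS3Url("AKIDEXAMPLE", "mySecretAccessKey",
                "mybucket", "myObjectKey"));
    }
}
```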
We have set the expiration date to one hour from now. Once that hour has passed, you will see the following expiration message:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>8ECB67C2458CE483</RequestId>
  <HostId>
    vL6wXNOkvYlpHXbvvlG1SGhy3q/+Ocb3guXtyaDZjmEu24Z4XQpwjfmNAvM+SViz
  </HostId>
</Error>
Thursday, February 21, 2013
Using Java AWS SDK to upload files to Amazon S3
Amazon S3 is a highly available and durable storage service suitable for storing large files that do not change frequently. This post will focus on how to upload files programmatically via the Java Amazon SDK. For an introduction to S3, read What is Amazon Simple Storage Service (Amazon S3)?
My specs:
- Eclipse Juno
- SpringMVC 3.1.x
- Maven 3.0.x
Install AWS Toolkit
In Eclipse, click on Help in the menu bar and then "Install New Software".
In the "Work with:" input box, put "http://aws.amazon.com/eclipse" and click Add...
Check on the AWS Toolkit for Eclipse and click Yes to install all the tools.
In the Eclipse toolbar, you will see a red cube icon. Click on the down arrow next to this icon. Click Preference.
Fill in your Access Key ID and Secret Access Key. Give it an Account Name (Ex. use your email). You can find your keys in the Amazon Management Console (My Account/Console -> Security Credentials). Click on Apply and OK.
In the Eclipse menu bar, click on Window -> Preferences. Expand the AWS Toolkit. Right click on your key. Click Select Private Key File. Associate it with your private key. Click OK.
Click on the down arrow next to the Amazon cube icon. Select Show AWS Explorer View. You should be able to see the Amazon S3 service and all your related buckets (if you have any).
Download and Install the AWS SDK for Java
You can download it here. Click on the AWS SDK for Java button.
Extract the file. Code Samples are located in /samples.
If you are using Maven, you can add the AWS SDK as a dependency in the pom.xml file.
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.3.32</version>
</dependency>
Choose the version you want here.
Alternatively, you can just add it as a library (Right Click on the project -> Java Build Path -> Libraries -> Add External JARs).
Running the default AWS Sample Apps
We will begin by setting up a sample project with which you can check out how S3 works.
Click on the down arrow next to the Amazon icon.
Select New AWS Java Project.
Give a Project name.
Select your account.
Select Amazon S3 Sample, Amazon S3 Transfer Progress Sample, and AWS Console Application. Click Next.
Expand the newly created project. Left click on the AwsConsoleApp.java. In the Eclipse menu bar, click on Run -> Run.
You should see output like the following:
===========================================
Welcome to the AWS Java SDK!
===========================================
You have access to 3 Availability Zones.
You have 14 Amazon EC2 instance(s) running.
You have 0 Amazon SimpleDB domain(s) containing a total of 0 items.
You have 8 Amazon S3 bucket(s), containing 71841 objects with a total size of 224551364 bytes.
If you run the S3Sample.java, you will get the following:
===========================================
Getting Started with Amazon S3
===========================================
Creating bucket my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1
Listing buckets
- my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1
Uploading a new object to S3 from a file
Downloading an object
Content-Type: text/plain
abcdefghijklmnopqrstuvwxyz
01234567890112345678901234
!@#$%^&*()-=[]{};':',.<>/?
01234567890112345678901234
abcdefghijklmnopqrstuvwxyz
Listing objects
- MyObjectKey (size = 135)
Deleting an object
Deleting bucket my-first-s3-bucket-39065c55-2ee5-413a-9de1-6814dbb253c1
Integrate the S3 SDK
To begin, you need the file AwsCredentials.properties at the root of your class path. You can copy the one generated for the sample project into your project's class path, or create one with the following content:
secretKey=
accessKey=
Create an authenticated S3 object:
AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
Objects in S3 are stored in buckets. Each bucket name is globally unique; you cannot create a bucket with a name that another user has already taken. Each bucket contains key and value pairs that you can define in any way you want.
Create a bucket:
String bucketName = "my-s3-bucket-" + UUID.randomUUID();
s3.createBucket(bucketName);
For readability, I have skipped the exception handling; I will come back to it at the end. The name of the bucket must conform to all the DNS rules. I usually name mine using my domain name.
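Loosely, the DNS rules mean 3-63 characters of lowercase letters, digits, hyphens, and dots, beginning and ending with a letter or digit. A rough validation sketch (an approximation, not Amazon's exact rule set):

```java
public class BucketNames {
    // Approximate check for a DNS-compliant bucket name: 3-63 characters,
    // lowercase alphanumerics, dots and hyphens, alphanumeric at both ends.
    static boolean isDnsCompliant(String name) {
        return name.matches("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");
    }

    public static void main(String[] args) {
        System.out.println(isDnsCompliant("files.mydomain.com")); // true
        System.out.println(isDnsCompliant("My_Bucket"));          // false
    }
}
```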
Delete a bucket:
List all buckets:
for (Bucket bucket : s3.listBuckets()) {
System.out.println(" - " + bucket.getName());
}
Save an object in a bucket:
String key = "myObjectKey";
PutObjectRequest putObject = new PutObjectRequest(bucketName, key, myFile);
s3.putObject(putObject);
myFile above is of class java.io.File.
Delete an object:
s3.deleteObject(bucketName, key);
Get/Download an object:
String key = "myObjectKey";
GetObjectRequest getObject = new GetObjectRequest(bucketName, key);
S3Object object = s3.getObject(getObject);
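The returned S3Object wraps the content in an InputStream (object.getObjectContent()), which you should drain and close. A plain-JDK helper for reading it fully into memory (the AWS types are not needed to show the idea; for large objects, stream to a file instead):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Reads an InputStream (e.g. S3Object.getObjectContent()) fully into a
    // byte[] and closes it. Suitable only for objects that fit in memory.
    static byte[] readAll(InputStream in) throws IOException {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readAll(new java.io.ByteArrayInputStream("hello".getBytes()));
        System.out.println(new String(data)); // hello
    }
}
```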
List objects by prefix:
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
.withBucketName(bucketName)
.withPrefix("My"));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
System.out.println(" - " + objectSummary.getKey() + " " +
"(size = " + objectSummary.getSize() + ")");
}
Uploading large files
Use TransferManager whenever possible. It makes use of S3 multipart uploads to achieve enhanced throughput, performance, and reliability. It uses multiple threads to upload multiple parts of a single upload at once.
AWSCredentials myCredentials = new BasicAWSCredentials(...);
TransferManager tx = new TransferManager(myCredentials);
Upload myUpload = tx.upload(myBucket, myFile.getName(), myFile);
while (myUpload.isDone() == false) {
System.out.println("Transfer: " + myUpload.getDescription());
System.out.println(" - State: " + myUpload.getState());
System.out.println(" - Progress: " + myUpload.getProgress().getBytesTransfered());
// Do work while we wait for our upload to complete...
Thread.sleep(500);
}
Exceptions
Whenever you call any of the AWS APIs, you should surround the calls with try and catch clauses like the following:
try{
// AWS requests here
} catch (AmazonServiceException ase) {
System.out.println("Caught an AmazonServiceException, which means your request made it "
+ "to Amazon S3, but was rejected with an error response for some reason.");
System.out.println("Error Message: " + ase.getMessage());
System.out.println("HTTP Status Code: " + ase.getStatusCode());
System.out.println("AWS Error Code: " + ase.getErrorCode());
System.out.println("Error Type: " + ase.getErrorType());
System.out.println("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
System.out.println("Caught an AmazonClientException, which means the client encountered "
+ "a serious internal problem while trying to communicate with S3, "
+ "such as not being able to access the network.");
System.out.println("Error Message: " + ace.getMessage());
}
If you are interested in securing your S3 content for your authenticated users only, check out AWS Java - Securing S3 content using query string authentication.