Apache doesn’t start after Vagrant reload

I'm trying to set up a simple dev environment with Vagrant. The base box (which I created) has CentOS 6.5 64-bit with Apache and MySQL.

The issue is that the httpd service doesn't start on boot after I reload the VM (vagrant reload, or vagrant halt followed by vagrant up).

The problem only occurs when I run a provision script that alters the DocumentRoot and only after the first time I halt the machine.

More info:

httpd is enabled via chkconfig on runlevels 2, 3, 4 and 5 (see the check below)

There are no errors written to the error_log (in /etc/httpd/logs).

If I ssh into the machine and start the service manually, it starts with no problem.

I had the same issue with other CentOS boxes (like chef/centos-6.5, available on vagrantcloud.com), which is why I created one myself.

Other services, like mysql, start fine, so the problem is specific to Apache.
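
For completeness, this is roughly how I verified the chkconfig setup (output reproduced from memory):

chkconfig --list httpd
# httpd   0:off  1:off  2:on  3:on  4:on  5:on  6:off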

To summarize:

  • httpd always starts on first boot, even with the provision script (e.g. after vagrant destroy)
  • httpd always starts when I don't run a provision script (but I need one to set the DocumentRoot)
  • httpd doesn't start after the first halt when the provision script messes with the DocumentRoot (not sure if that's the actual cause).

This is my Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.box = "centos64_lamp"
  config.vm.box_url = "<url>/centos64_lamp.box"
  config.vm.hostname = "machine.dev"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/vagrant", owner: "root", group: "root"
  config.vm.provision :shell, :path => "vagrant_files/bootstrap.sh"

end

I tried creating the vagrant folder with owner/group root and with apache. The problem is the same with both (as it is with owner vagrant).

These are the provision scripts (bootstrap.sh) that I tried. The only thing I want them to do is change the DocumentRoot to the vagrant folder. Neither worked.

Try 1

#!/usr/bin/env bash

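# replace the default document root with a symlink into the synced /vagrant folder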
sudo rm -rf /var/www/html
sudo ln -fs /vagrant/app/webroot /var/www/html

Try 2

#!/usr/bin/env bash

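# overwrite the default config with one whose DocumentRoot points at the synced folder, then restart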
sudo cp /vagrant/vagrant_files/httpd.conf /etc/httpd/conf
sudo service httpd restart

The httpd.conf in the second try is identical to the default one, except for the DocumentRoot path. This second alternative lets me run vagrant up --provision to force a restart of the service, but that should be an unnecessary step.
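
One idea I'm considering to avoid that manual step (untested, and I'm not sure my Vagrant version supports the run option): a provisioner that runs on every vagrant up rather than only the first one:

config.vm.provision :shell, inline: "service httpd restart", run: "always"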

What else can I try to solve this? Thank you.

Setting Access-Control-Allow-Origin header on source server

I am using $.get to parse an RSS feed in jQuery with code similar to this:

$.get(rssurl, function(data) {
    var $xml = $(data);
    $xml.find("item").each(function() {
        var $this = $(this),
            item = {
                title: $this.find("title").text(),
                link: $this.find("link").text(),
                description: $this.find("description").text(),
                pubDate: $this.find("pubDate").text(),
                author: $this.find("author").text()
            };
        // Do something with item here...
    });
});

However, due to the Same-Origin Policy, I'm getting the following error:

No 'Access-Control-Allow-Origin' header is present on the requested resource.

Fortunately I have access to the source server, as this is my own dynamically created RSS feed.

My question is: how do I set the Access-Control-Allow-Origin header on my source server?

Edit

I'm using PHP and I think my webserver is Apache.
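
From what I've read, either of the following should set the header, though I haven't confirmed this myself yet. In the PHP script that generates the feed, before any output:

<?php
// Allow any origin to read this response; replace * with a specific
// origin to lock it down.
header('Access-Control-Allow-Origin: *');
?>

Or in the Apache configuration / .htaccess (this assumes mod_headers is enabled):

Header set Access-Control-Allow-Origin "*"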

Upgrading Apache on Windows Server 2008 R2

I am currently running Apache 2.2 on Windows Server 2008 R2. I would like to upgrade Apache to version 2.4.9 so that I can run mod_security2. I installed Apache 2.2 via an installer. I don't, however, see an installer for Apache v2.4.9. I can find the full Apache Windows binaries on ApacheLounge. I'm just not clear on how to seamlessly upgrade to version 2.4.9. Here's what I'm thinking of doing:

  • download version 2.4.9 Windows binaries from ApacheLounge
  • stop Apache 2.2 service
  • rename C:\Program Files (x86)\Apache Software Foundation\Apache2.2 to Apache2.2_OLD
  • create Apache2.4 folder at C:\Program Files (x86)\Apache Software Foundation\
  • copy contents of downloaded Windows binaries to Apache2.4 folder
  • set up httpd.exe within the Apache2.4\bin folder as a Windows service (see the commands sketched after this list)
  • copy httpd.conf file from Apache2.2\conf over to Apache2.4\conf
  • modify httpd.conf file to point to new Apache installation location
  • copy any modules from Apache2.2\modules that are missing (and needed) from Apache2.4\modules
  • start Apache2.4 Windows service
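
For the "set up as a Windows service" step, I'm assuming something like this from an elevated command prompt in the Apache2.4\bin folder (the service name is my own choice):

httpd.exe -k install -n "Apache2.4"
httpd.exe -k start -n "Apache2.4"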

Am I over-simplifying? What steps are missing? I'm relatively new to Apache, so it's possible I'm overlooking the obvious.

Installing a Spark Cluster, problems with Hive

I am trying to get a Spark/Shark cluster up, but I keep running into the same problem. I have followed the instructions at https://github.com/amplab/shark/wiki/Running-Shark-on-a-Cluster and set up Hive as described.

I think the Shark driver is picking up a different version of the Hadoop jars, but I am unsure why.

Here are the details, any help would be great.

Spark/Shark 0.9.0

Apache Hadoop 2.3.0

Amplabs Hive 0.11

Scala 2.10.3

Java 7

I have everything installed, but I get some deprecation warnings and then an exception:

14/03/14 11:24:47 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive

14/03/14 11:24:47 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize

Exception:

Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1072)
    at shark.memstore2.TableRecovery$.reloadRdds(TableRecovery.scala:49)
    at shark.SharkCliDriver.<init>(SharkCliDriver.scala:275)
    at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
    at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1139)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:51)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2288)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2299)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1070)
    ... 4 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1137)
    ... 9 more
Caused by: java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
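
Is there a reliable way to see which Hadoop jars the Shark driver actually loads? So far the only check I've thought of is listing the jars bundled under the Spark and Shark directories (a rough sketch; the paths are placeholders):

find /path/to/spark /path/to/shark -name "hadoop*.jar" | sort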

how reliable is $_SERVER["REMOTE_PORT"] in determining user device?

I am creating an anonymous online poll. I can eliminate some duplicate votes by using a browser fingerprint, but I still worry about a user changing browsers and voting again, so I am trying to find an effective device fingerprint to solve that problem. Obviously, the IP address is not an option, because my target users might be at school sharing the same IP with classmates, or living in an apartment sharing an IP with roommates.

I was experimenting with $_SERVER["REMOTE_PORT"] and discovered that it stays in a relatively consistent range on the same device, no matter which browser I use, and that it is always increasing. For example, on Mac 1 my port stayed in the range (58100, 58200) during a 10-minute interval regardless of browser; similarly, on Mac 2 the range stayed in (49200, 49300) for about 10 minutes; on an iPhone the range was (50100, 50200).

So I wonder: could $_SERVER["REMOTE_PORT"], combined with a browser fingerprint, prevent duplicate votes from the same person on the same device over a short period? I should mention that all of the above experimenting was done on a local network. Do you have any better solutions, or do you think this could work on a production server?
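
For reference, the logging I used while experimenting was essentially this (a sketch; the log path is arbitrary):

<?php
// Append one line per request: timestamp, client IP, and remote port.
$line = date('c') . ' ' . $_SERVER['REMOTE_ADDR'] . ' ' . $_SERVER['REMOTE_PORT'] . "\n";
file_put_contents('/tmp/ports.log', $line, FILE_APPEND);
?>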

Execute PHP script in cron job

On our CentOS 6 server, I would like to execute a PHP script in a cron job as the apache user, but unfortunately it does not work.

Here is the crontab entry (from crontab -u apache -e):

24 17 * * * php /opt/test.php

And here is the source code of the test.php file, which works fine when run manually as the apache user (who owns the file):

<?php exec('touch /opt/test/test.txt'); ?>

I tried replacing php with the full path to the PHP binary (/usr/local/php/bin/php), but that doesn't work either.
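
In case it helps with debugging, would redirecting the job's output be the right way to see what's failing? Something like this (log path is arbitrary):

24 17 * * * /usr/local/php/bin/php /opt/test.php >> /tmp/test_cron.log 2>&1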

Thanks in advance. Please help me.

Correct Apache var/www permissions

I'm new to managing permissions in Apache. This is a shared server on which I have one account.

I was having an issue with FileZilla not being able to write to my /var/www directory, and in an attempt to change its permissions I think I have made it worse.

Here are my current settings:

$ ls -l
total 40
drwxr-xr-x  2 root root  4096 Feb  5  2013 backups
drwxr-xr-x  7 root root  4096 Jul 30  2013 cache
drwxr-xr-x 26 root root  4096 Jul 30  2013 lib
drwxrwsr-x  2 root staff 4096 Apr 15  2008 local
lrwxrwxrwx  1 root root     9 Feb  5  2013 lock -> /run/lock
drwxr-xr-x  7 root root  4096 Jul 30  2013 log
drwxrwsr-x  2 root mail  4096 Feb  5  2013 mail
drwxr-xr-x  2 root root  4096 Feb  5  2013 opt
lrwxrwxrwx  1 root root     4 Feb  5  2013 run -> /run
drwxr-xr-x  4 root root  4096 Feb  5  2013 spool
drwxrwxrwt  2 root root  4096 Feb  5  2013 tmp
drwxrwx---  2 root root  4096 Jul 30  2013 www

Can anyone tell me what the correct permissions are for the www folder and, even better, how to set them?

I think it should be

drwxrwxr-x  2 root root  4096 Jul 30  2013 www
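
If that guess is right, would something like this get me there (assuming I have sudo on this shared server)?

sudo chown root:root /var/www
sudo chmod 775 /var/www    # 775 = rwxrwxr-x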

Thanks

HTTPClient Example – Exception in thread "main" java.lang.NoSuchFieldError: INSTANCE

I am using the HttpClient components from Apache in the following simple program, and I see the exception below:

Exception in thread "main" java.lang.NoSuchFieldError: INSTANCE
    at org.apache.http.impl.io.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:52)
    at org.apache.http.impl.io.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:56)
    at org.apache.http.impl.io.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:46)
    at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:72)
    at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:84)
    at org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.<init>(ManagedHttpClientConnectionFactory.java:59)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$InternalConnectionFactory.<init>(PoolingHttpClientConnectionManager.java:487)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:147)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:136)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:112)
    at org.apache.http.impl.client.HttpClientBuilder.build(HttpClientBuilder.java:726)
    at com.starwood.rms.controller.property.HttpExample.main(HttpExample.java:14)

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.HttpClientBuilder;

public class HttpExample {

    /**
     * @param args
     */
    public static void main(String[] args) {
        HttpClient client = HttpClientBuilder.create().build();
        HttpGet request = new HttpGet("https://www.google.com/?q=java");
        try {
            HttpResponse response = client.execute(request);
            System.out.println(response.getStatusLine());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

I am using

httpclient-4.3.3.jar

httpcore-4.3.2.jar
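
In case this is a classpath conflict, would a check like the following help narrow it down? (BasicLineFormatter is only my guess at the class that holds the missing INSTANCE field; the call just prints which jar the class was actually loaded from.)

// Print the location of the jar that supplied BasicLineFormatter.
System.out.println(org.apache.http.message.BasicLineFormatter.class
        .getProtectionDomain().getCodeSource().getLocation());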

Any ideas?