Imagefactory HVM support for EC2

John Hover jhover at bnl.gov
Thu Jan 15 17:46:11 UTC 2015


On 01/15/2015 09:26 AM, Ian McLeod wrote:
> On 01/14/2015 03:30 PM, John Hover wrote:
>> Hi,
>>
>> We're heavy users of Imagefactory for building VMs to run scientific
>> data processing on EC2 and on several private OpenStack clusters.
>>
>> Four questions:
>>
>> Do you know if/when Imagefactory will begin supporting builds of
>> HVM-compatible images on EC2? The newer instance types no longer support
>> the PV images currently created. This is important to us because we
>> often run 20000-50000 cores at a time, and getting that kind of capacity
>> requires using multiple instance types.
>
> I'd be happy to work this through.  My impression is that it should
> actually be a bit easier to do HVM, since we won't have to worry about
> creating pvgrub-compatible configuration files.
>
> Can you give me just a bit more detail about how you are using the
> existing EC2 support?  Are you generating S3- or EBS-backed images (or
> both)?  Are you using the REST API or the CLI?

Sure. We build batch execution nodes that run in EC2 and, once running, 
connect back to our home system to run scientific analysis jobs 
(analyzing data from the Large Hadron Collider at CERN in Switzerland). 
The nodes are basic CentOS with various middleware and applications 
added in. They are EBS-backed images, using the defaults for the EC2 
plugin in Imagefactory (version 1.1.6 currently, built from a github 
checkout).

All this is done in the context of the OSG (Open Science Grid), the US 
grid computing infrastructure (http://www.opensciencegrid.org/). In 
fact, OSG is considering using Imagefactory in *their* Koji build system 
as well--both for in-house test images and for public EC2 images that 
other scientists can use.

>
> If you can give me a concrete example of the input to an actual build,
> stripped of anything you don't wish to share externally, that would be
> ideal.

Sure. I have a custom build wrapper tool that calls imagefactory after 
creating the TDL from several templates and input files.

Here are the command lines it calls; the wrapper just parses the UUID 
from the output after each step. See the attached (slightly sanitized) 
TDL.

imagefactory --debug base_image work/centos65-x86_64-bare-cloudconfig-condor-stable-osg-atlas.tdl

imagefactory --debug target_image --id caf61ccc-53ba-4cf0-80dc-d85499277b9d ec2

imagefactory --debug provider_image --id 8bc57601-0be3-404a-8442-c977e3b51faa ec2 @us-east-1 /root/etc/ec2_credentials.xml
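
The UUID parsing in the wrapper is nothing fancy. Roughly like this (a 
sketch; the template path is shortened here, and the exact label in 
imagefactory's console output may vary by version):

  UUID=$(imagefactory --debug base_image work/mytemplate.tdl \
         | awk '/UUID/ {print $NF}' | tail -1)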

At the end we've got a PV AMI ready to run.
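
(For comparison, if/when HVM support lands, I'd expect the final 
registration step to reduce to something like the stock AWS call below; 
the name, snapshot ID, and device name here are hypothetical:

  aws ec2 register-image --name centos65-osg-atlas-hvm \
      --virtualization-type hvm --architecture x86_64 \
      --root-device-name /dev/sda1 \
      --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123abcd}"

Note that no pvgrub kernel image (AKI) is needed, which is the 
simplification you mention.)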

>
>> I see no traffic at all on imagefactory-devel. Is this because the real
>> discussions are happening elsewhere, or is it really not a very active
>> project?
>>
>> Is Red Hat intending to continue Imagefactory as its image authoring
>> framework? I built a lot of infrastructure around Boxgrinder, only to
>> have it deprecated. I hope we're not facing the same situation with
>> Imagefactory.
>>
>> We have several simple patches that have made things easier for us and
>> might be useful to others. Do you take pull requests via github, or some
>> other mechanism?
>
> PRs on github are most welcome.

OK. At this point my main change is to add more useful information to 
the AMI name field on EC2: things like build date and template name, so 
that when you sort that field in the AWS console you can easily spot 
the most recent image. AWS doesn't include a 'build date' field at all 
for AMIs!
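
A hypothetical example of the kind of name I mean (not necessarily my 
exact format):

  centos65-bare-cloud-condor-stable-osg-atlas-20150114-1530

Sorted on the name column, images then group by template and order by 
build time.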

> Imagefactory is now being actively used in a number of contexts and I
> have every intention of continuing to develop and maintain it.

That's great news.

>
> Two quick examples:
>
> It's the backing tool that creates most of the images in Fedora's koji
> instance:
>
> http://koji.fedoraproject.org/koji/tasks?state=all&view=tree&method=image&order=-id
>
> It's incorporated into the "toolbox" code used to generate disk images
> for project Atomic:
>
> https://github.com/projectatomic/rpm-ostree-toolbox
>
> I apologize for the lack of mailing list traffic or other announcements
> of progress.  I'm regularly reminded that communication and community
> development are important and will try to do more :-).

No worries. Same here.

--john

>
> -Ian
>
>
>>
>> Thanks,
>>
>> --john
>


-- 
John Hover
Group Leader | Grid Group/Experiment Services
RHIC/ATLAS Computing Facility | Brookhaven National Laboratory
jhover at bnl.gov | 631-344-5828 | http://www.racf.bnl.gov/Members/jhover
-------------- next part --------------
<template>
  <name>centos65-bare-cloud-condor-stable-osg-atlas</name>
  <os>
    <rootpw>XXXXXXXXX</rootpw>
    <name>RHEL-6</name>
    <version>5</version>
    <arch>x86_64</arch>
    <install type="iso">
      <iso>http://dev.racf.bnl.gov/yum/snapshots/rhel6/x86_64-iso/CentOS-6.5-x86_64-bin-DVD1.iso</iso>
    </install>
  </os>
  <description>Adds ATLAS worker node requirements to general VM</description>

  <packages>
    <package name="bash" />
    <package name="bind-utils" />
    <package name="curl" />
    <package name="dhclient" />
    <package name="git" />
    <package name="lsof" />
    <package name="ntp" />
    <package name="openssh-clients" />
    <package name="openssh-server" />
    <package name="passwd" />
    <package name="redhat-lsb" />
    <package name="rpm" />
    <package name="rpm-libs" />
    <package name="telnet" />
    <package name="unzip" />
    <package name="vim-enhanced" />
    <package name="wget" />
    <package name="yum" />
    <package name="zip" />
    <package name="puppet" />
    <package name="cloud-init" />
    <package name="provisioning-config" />
    <package name="condor" />
    <package name="osg-ca-certs" />
    <package name="osg-wn-client" />
    <package name="yum-priorities" />
    <package name="bc" />
    <package name="blas" />
    <package name="blas.i686" />
    <package name="blas-devel" />
    <package name="blas-devel.i686" />
    <package name="compat-db.i686" />
    <package name="compat-db" />
    <package name="compat-openldap.i686" />     
    <package name="compat-openldap" />  
    <package name="compat-readline5" />
    <package name="compat-readline5.i686" />
    <package name="lfc-python" />
    <package name="ncurses" />
    <package name="ncurses-libs.i686" />  
    <package name="compat-glibc" />   
    <package name="compat-glibc-headers" />     
    <package name="compat-libf2c-34.i686" />     
    <package name="compat-libf2c-34" />   
    <package name="libgcc" />
    <package name="libgcc.i686" />
    <package name="freetype" />
    <package name="freetype.i686" />
    <package name="ghostscript" />
    <package name="ghostscript.i686" />
    <package name="giflib" />
    <package name="giflib.i686" />
    <package name="glibc.i686" />
    <package name="HEP_OSlibs_SL6" />     
    <package name="libaio" />
    <package name="libaio.i686" />
    <package name="lapack.i686" />               
    <package name="lapack" />
    <package name="libevent-devel" />    
    <package name="compat-libgfortran-41.i686" />     
    <package name="compat-libgfortran-41" />
    <package name="libgfortran.i686" />     
    <package name="libgfortran" />
    <package name="libxml2-devel" />     
    <package name="libxml2-devel.i686" />    
    <package name="libXpm.i686" />   
    <package name="libXpm" />    
    <package name="procmail" />
    <package name="sharutils" />
    <package name="sqlite" />
    <package name="sqlite.i686" />    
  </packages>

  <repositories>
    <repository name="centos65-x86_64-base">
      <url>http://dev.racf.bnl.gov/yum/snapshots/rhel6/centos65-x86_64-base</url>
      <signed>no</signed>
    </repository>

    <repository name="centos65-x86_64-update">
      <url>http://dev.racf.bnl.gov/yum/snapshots/rhel6/centos65-x86_64-update</url>
      <signed>no</signed>
    </repository>

    <repository name="racf-grid-testing">
      <url>http://dev.racf.bnl.gov/yum/grid/testing/rhel/6Workstation/x86_64/</url>
      <signed>no</signed>
    </repository>

    <repository name="puppetlabs-products">
      <url>http://dev.racf.bnl.gov/yum/snapshots/6Workstation/puppetlabs-products-2014-06-05/</url>
      <signed>no</signed>
    </repository>

    <repository name="puppetlabs-deps">
      <url>http://dev.racf.bnl.gov/yum/snapshots/6Workstation/puppetlabs-deps-2014-06-05/</url>
      <signed>no</signed>
    </repository>

    <repository name="htcondor-stable">
      <url>http://dev.racf.bnl.gov/yum/snapshots/rhel6/htcondor-stable-2014-09-11</url>
      <signed>no</signed>
    </repository>

    <repository name="osg-release-x86_64">
      <url>http://dev.racf.bnl.gov/yum/snapshots/6Workstation/osg-release-3.2.3-2014-01-30</url>
      <signed>no</signed>
    </repository>

    <repository name="epel">
      <url>http://dev.racf.bnl.gov/yum/snapshots/rhel6/epel-x86_64</url>
      <signed>no</signed>
    </repository>

    <repository name="cern-extras">
      <url>http://dev.racf.bnl.gov/yum/snapshots/6Workstation/cern-extras-2014-01-30</url>
      <signed>no</signed>
    </repository>
  </repositories>
  <commands>
    <command name="basesetup">
      /usr/bin/yum-config-manager --disable base
      /usr/bin/yum-config-manager --disable updates
      /usr/bin/yum-config-manager --disable extras
      /usr/sbin/setenforce 0
    </command>

    <command name="cloudsetup">
      /sbin/chkconfig cloud-init on
      /sbin/chkconfig --add mounthome
      chmod ugo+x /etc/init.d/mounthome
    </command>

    <command name="addusers">
      /usr/sbin/useradd slot1
      /usr/sbin/useradd slot2
      /usr/sbin/useradd slot3
      /usr/sbin/useradd slot4
      /usr/sbin/useradd slot5
      /usr/sbin/useradd slot6
      /usr/sbin/useradd slot7
      /usr/sbin/useradd slot8
      /usr/sbin/useradd slot9
      /usr/sbin/useradd slot10
      /usr/sbin/useradd slot11
      /usr/sbin/useradd slot12
      /usr/sbin/useradd slot13
      /usr/sbin/useradd slot14
      /usr/sbin/useradd slot15
      /usr/sbin/useradd slot16
      /usr/sbin/useradd slot17
      /usr/sbin/useradd slot18
      /usr/sbin/useradd slot19
      /usr/sbin/useradd slot20
      /usr/sbin/useradd slot21
      /usr/sbin/useradd slot22
      /usr/sbin/useradd slot23
      /usr/sbin/useradd slot24
      /usr/sbin/useradd slot25
      /usr/sbin/useradd slot26
      /usr/sbin/useradd slot27
      /usr/sbin/useradd slot28
      /usr/sbin/useradd slot29
      /usr/sbin/useradd slot30
      /usr/sbin/useradd slot31
      /usr/sbin/useradd slot32
    </command>

    <command name="condorsetup">
      mkdir -p /home/condor/execute
      chown -R condor:condor /home/condor
      chmod ugo+rwx /home/condor/execute
      chmod +t /home/condor/execute
      chown root:root /usr/libexec/jobwrapper.sh
      chmod +x /usr/libexec/jobwrapper.sh
      chown root:root /etc/condor/password_file
      chmod o-rwx /etc/condor/password_file
      chmod +x /etc/init.d/condorconfig
      chmod -R ugo+r /etc/condor/config.d
      chmod ugo+rx /usr/libexec/jobwrapper.sh
      /sbin/chkconfig condor on
      /sbin/chkconfig condorconfig on
    </command>

    <command name="osgsetup">
      /sbin/chkconfig fetch-crl-boot on
      /sbin/chkconfig fetch-crl-cron on
      mkdir -p /home/osg/app
      mkdir -p /home/osg/data
      chmod ugo+r /etc/profile.d/osg.sh
      chmod ugo+rwx /home/osg/app
      chmod ugo+rwx /home/osg/data
    </command>

    <command name="atlassetup">
      chmod ugo+rx /home/osg/app/atlas_app/copysetup.sh
      chmod ugo+rx /home/osg/app/atlas_app/local/setup.sh
      mkdir -p /home/cvmfs
      mkdir -p /home/cvmfs/shared
      chown -R cvmfs:cvmfs /home/cvmfs
      chown -R cvmfs:cvmfs /etc/cvmfs
      chmod -R ugo+rx /etc/cvmfs
      chmod -R ugo+r /etc/profile.d/atlas.sh
      chmod -R ugo+rx /home/cvmfs
      /sbin/chkconfig cvmfs on
      /bin/ln -s /cvmfs/atlas.cern.ch/repo/sw /home/osg/app/atlas_app/atlas_rel
    </command>
  </commands>
 

<files>
         <file name="/etc/ntp/step-tickers">0.rhel.pool.ntp.org 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org</file>
         <file name="/etc/selinux/config">SELINUX=disabled
SELINUXTYPE=targeted
</file>
         <file name="/etc/security/limits.d/80-limits.conf">*        -    nofile         4096</file>
  <file name="/etc/sysconfig/condorconfig">CONDOR_PORT_LIST=29661,29662,29663,29664,29665,29666,29667,29668,29669,29670,29671,29672,29673,29674,29675,29676,29677,29678,29679,29680
CONDOR_PASSWORD=/etc/condor/password</file>
         <file name="/etc/condor/config.d/50cloud_condor.config"># Pool and worker setup
CONDOR_HOST = gridtest06.racf.bnl.gov
COLLECTOR_NAME = Cloud Condor at $(FULL_HOSTNAME)
START = TRUE
SUSPEND = FALSE
PREEMPT = FALSE
KILL = FALSE
RANK = 0
CLAIM_WORKLIFE = 3600
JOB_RENICE_INCREMENT=0
GSI_DELEGATION_KEYBITS = 1024

DAEMON_LIST = MASTER, STARTD
UID_DOMAIN = localhost.localdomain

# Network setup. Use shared port. 
COLLECTOR_HOST=$(CONDOR_HOST):29618
UPDATE_COLLECTOR_WITH_TCP = True
UPDATE_INTERVAL = 30
CCB_ADDRESS = $(COLLECTOR_HOST)
PRIVATE_NETWORK_NAME = localdomain
HIGHPORT = 30000 
LOWPORT = 20000
USE_SHARED_PORT = TRUE
DAEMON_LIST =  $(DAEMON_LIST) SHARED_PORT


# Security
# Use password security.

ALLOW_WRITE = condor_pool@*
SEC_DEFAULT_AUTHENTICATION = REQUIRED
SEC_DEFAULT_AUTHENTICATION_METHODS = PASSWORD, FS
SEC_PASSWORD_FILE = /etc/condor/password_file
SEC_DEFAULT_ENCRYPTION = REQUIRED
SEC_DEFAULT_INTEGRITY = REQUIRED

SEC_ENABLE_MATCH_PASSWORD_AUTHENTICATION  = True
ALLOW_WRITE = $(ALLOW_WRITE), submit-side at matchsession/*
ALLOW_ADMINISTRATOR = condor_pool@*/*


# Job environment config
NUM_SLOTS = 1
USER_JOB_WRAPPER = /usr/libexec/jobwrapper.sh
SLOT1_USER = slot1
SLOT2_USER = slot2
SLOT3_USER = slot3
SLOT4_USER = slot4
SLOT5_USER = slot5
SLOT6_USER = slot6
SLOT7_USER = slot7
SLOT8_USER = slot8
SLOT9_USER = slot9
SLOT10_USER = slot10
SLOT11_USER = slot11
SLOT12_USER = slot12
SLOT13_USER = slot13
SLOT14_USER = slot14
SLOT15_USER = slot15
SLOT16_USER = slot16
SLOT17_USER = slot17
SLOT18_USER = slot18
SLOT19_USER = slot19
SLOT20_USER = slot20
SLOT21_USER = slot21
SLOT22_USER = slot22
SLOT23_USER = slot23
SLOT24_USER = slot24
SLOT25_USER = slot25
SLOT26_USER = slot26
SLOT27_USER = slot27
SLOT28_USER = slot28
SLOT29_USER = slot29
SLOT30_USER = slot30
SLOT31_USER = slot31
SLOT32_USER = slot32

SLOT1_1_USER = slot1
SLOT1_2_USER = slot2
SLOT1_3_USER = slot3
SLOT1_4_USER = slot4
SLOT1_5_USER = slot5
SLOT1_6_USER = slot6
SLOT1_7_USER = slot7
SLOT1_8_USER = slot8
SLOT1_9_USER = slot9
SLOT1_10_USER = slot10
SLOT1_11_USER = slot11
SLOT1_12_USER = slot12
SLOT1_13_USER = slot13
SLOT1_14_USER = slot14
SLOT1_15_USER = slot15
SLOT1_16_USER = slot16
SLOT1_17_USER = slot17
SLOT1_18_USER = slot18
SLOT1_19_USER = slot19
SLOT1_20_USER = slot20
SLOT1_21_USER = slot21
SLOT1_22_USER = slot22
SLOT1_23_USER = slot23
SLOT1_24_USER = slot24
SLOT1_25_USER = slot25
SLOT1_26_USER = slot26
SLOT1_27_USER = slot27
SLOT1_28_USER = slot28
SLOT1_29_USER = slot29
SLOT1_30_USER = slot30
SLOT1_31_USER = slot31
SLOT1_32_USER = slot32

DEDICATED_EXECUTE_ACCOUNT_REGEXP = slot.+
STARTER_ALLOW_RUNAS_OWNER = False
EXECUTE = /home/condor/execute

# Partitionable slots
SLOT_TYPE_1 = 100%
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1_PARTITIONABLE = True
SlotWeight = Cpus


# Debug settings
#ALL_DEBUG = D_FULLDEBUG, D_COMMAND, D_SECURITY, D_NETWORK
#STARTD_DEBUG = D_PID D_COMMAND D_JOB D_MACHINE

</file>
         <file name="/etc/condor/password_file">XXXXXXXXX
</file>
         <file name="/etc/condor/password">XXXXXXX
</file>
         <file name="/etc/init.d/condorconfig">#! /bin/sh
#
# Simple script to create a config file so the
# startd will connect to a randomized collector port. 
#
#
# chkconfig: 2345 80 20
# description: Condor HTC computing platform
#

# Source function library.
. /etc/rc.d/init.d/functions

if [ -f /etc/sysconfig/condorconfig ]; then
	. /etc/sysconfig/condorconfig
fi


config_collector() {
    # Pick a random collector to use
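    # The awk pipeline below tags each port in $CONDOR_PORT_LIST with a
    # random key, sorts on it, and keeps the last entry, i.e. one port
    # chosen uniformly at random, written out as COLLECTOR_PORT.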
	collector_config=/etc/condor/config.d/90collector.config
	port_list=$CONDOR_PORT_LIST
	let seed=`date +%s`+$$
	echo $port_list | awk "BEGIN{srand($seed)}"'{split($0,g,","); for (i in g) print rand() "\tCOLLECTOR_PORT=" g[i]}' |sort -n|awk '{print $2}'|tail -1 &gt;$collector_config
    echo &gt;&gt; $collector_config
	echo 'COLLECTOR_HOST=$(CONDOR_HOST):$(COLLECTOR_PORT)'&gt;&gt;$collector_config

}

config_attrs() {
	# If available, publish public ip and instanceid via Startd
	# Seen to work on EC2 and Openstack v4
	
	    attrfile=/etc/condor/config.d/92cloudattrs.config
        PUBID=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/public-ipv4`
        PUBDNS=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/public-hostname`
        IID=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/instance-id`
        ITYPE=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/instance-type`
        echo "EC2PublicIP = \"$PUBID\"" &gt; $attrfile
        echo "EC2PublicDNS = \"$PUBDNS\"" &gt;&gt; $attrfile
        echo "EC2InstanceID = \"$IID\"" &gt;&gt; $attrfile
        echo "EC2InstanceType = \"$ITYPE\"" &gt;&gt; $attrfile
        echo 'STARTD_EXPRS = $(STARTD_EXPRS) EC2InstanceID EC2PublicIP EC2PublicDNS EC2InstanceType' &gt;&gt; $attrfile
        echo 'MASTER_EXPRS = $(MASTER_EXPRS) EC2InstanceID EC2PublicIP EC2PublicDNS EC2InstanceType' &gt;&gt; $attrfile
        chmod a+r $attrfile

}

calc_slots() {
  #
  # Since cloud nodes have virtual CPUs anyway, calculate NUM_CPUS based on 
  # desired memory. 2G by default, but provide for future minimums via 
  # Userdata....
  #
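  # Example: MemTotal of 16000000 kB at the default 2000000 kB per slot
  # yields 8 slots, then capped at the actual processor count below.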
        if [ "$1X" = "X" ]; then
                minper=2000000
        else
                minper=$1
        fi
        mem=`cat /proc/meminfo | grep MemTotal | awk '{print $2}'`
        numcpus=`cat /proc/cpuinfo | grep "^processor" | wc -l`
        numslots=$(($mem / $minper))
        if [ $numslots -lt 1 ]; then
           numslots=1
        fi
        if [ $numslots -gt $numcpus ] ; then
            numslots=$numcpus
        fi
        echo $numslots
}

config_slots() {
	#
	# Determine number of slots/cpus 
	#
	#
	slot_config=/etc/condor/config.d/91slots.config
	RETVAL=0
    numcpus=`cat /proc/cpuinfo | grep "^processor" | wc -l`
	numslots=$(calc_slots)
	echo "NUM_CPUS=$numcpus"&gt;$slot_config
	#echo "NUM_SLOTS=$numslots"&gt;&gt;$slot_config
 	echo &gt;&gt; $slot_config
	return $RETVAL
}

config_password() {
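	# Store the plaintext pool password into condor's password_file
	# (condor_store_cred writes it in scrambled form), then tighten
	# permissions on the file.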
	CPASSWD=`cat /etc/condor/password`
	/usr/sbin/condor_store_cred -p $CPASSWD -f /etc/condor/password_file
	chmod ugo-x /etc/condor/password_file
	chmod go-wx /etc/condor/password_file
	chmod o-r /etc/condor/password_file
}

start() {
	echo -n $"Setting random collector, slots, and auth: "
	# Temporarily disabled until condor 8.1.x resolved.  
	config_collector
	config_slots
	config_attrs
	config_password
	RETVAL=$?
	echo
	return $RETVAL

}

stop() {
	RETVAL=0
	return $RETVAL

}

restart() {
        stop
        start
}

case "$1" in
start)
        start
        ;;
stop)
        stop
        ;;
restart)
        restart
        ;;
*)
        echo $"Usage: $0 {start|stop|restart}"
        RETVAL=2
esac

exit $RETVAL

</file>
         <file name="/usr/libexec/jobwrapper.sh">#!/bin/bash -l
#
# Condor startd jobwrapper
# Executes using bash -l, so that all /etc/profile.d/*.sh scripts are sourced. 
#
THISUSER=`/usr/bin/whoami`
export HOME=`getent passwd $THISUSER | awk -F : '{print $6}'`
exec "$@"</file>
         <file name="/etc/condor/condor_config.local">#
# Empty file to make Condor happy. 
# Full config to come from config.d
#</file>
         <file name="/etc/condor/config.d/60wnbase.config">NodeType = "base"
STARTD_ATTRS = $(STARTD_ATTRS) NodeType

</file>
  <file name="/etc/profile.d/osg.sh">#
# Handle setup for OSG worker node.  
#
export OSG_GRID=/etc/osg/wn-client
export OSG_APP=/home/osg/app
export OSG_DATA=/home/osg/data
export OSG_WN_TMP=/home/osg/data
export LCG_GFAL_INFOSYS="is.grid.iu.edu:2170"

</file>
  <file name="/home/osg/app/atlas_app/copysetup.sh">#!/bin/bash
if [ -z "$DQ2_HOME" ]; then
source /cvmfs/atlas.cern.ch/repo/sw/ddm/latest/setup.sh
fi

</file>
         <file name="/etc/cvmfs/config.d/cms.cern.ch.conf">export CMS_LOCAL_SITE
</file>
         <file name="/etc/cvmfs/keys/opensciencegrid.org.pub">-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqQGYXTp9cRcMbGeDoijB
gKNTCEpIWB7XcqIHVXJjfxEkycQXMyZkB7O0CvV3UmmY2K7CQqTnd9ddcApn7BqQ
/7QGP0H1jfXLfqVdwnhyjIHxmV2x8GIHRHFA0wE+DadQwoi1G0k0SNxOVS5qbdeV
yiyKsoU4JSqy5l2tK3K/RJE4htSruPCrRCK3xcN5nBeZK5gZd+/ufPIG+hd78kjQ
Dy3YQXwmEPm7kAZwIsEbMa0PNkp85IDkdR1GpvRvDMCRmUaRHrQUPBwPIjs0akL+
qoTxJs9k6quV0g3Wd8z65s/k5mEZ+AnHHI0+0CL3y80wnuLSBYmw05YBtKyoa1Fb
FQIDAQAB
-----END PUBLIC KEY-----
</file>
         <file name="/etc/sysconfig/modules/fuse.modules">#!/bin/sh
MODULES="fuse"
for i in $MODULES ; do
        modprobe $i &gt;/dev/null 2&gt;&amp;1
done
</file>
         <file name="/etc/cvmfs/domain.d/cern.ch.local"># Local overrides for the default settings for the cern.ch domain:
CVMFS_SERVER_URL="http://cvmfs.racf.bnl.gov:8000/cvmfs/@org@;http://cvmfs-stratum-one.cern.ch:8000/opt/@org@;http://cernvmfs.gridpp.rl.ac.uk:8000/opt/@org@"
CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/cern.ch.pub:/etc/cvmfs/keys/cern-it1.cern.ch.pub:/etc/cvmfs/keys/cern-it3.cern.ch.pub
</file>
         <file name="/etc/cvmfs/config.d/atlas-nightlies.cern.ch.conf">CVMFS_SERVER_URL=http://cvmfs-atlas-nightlies.cern.ch/cvmfs/atlas-nightlies.cern.ch
</file>
         <file name="/home/osg/app/atlas_app/local/setup.sh">export ATLAS_POOLCOND_PATH="/cvmfs/atlas.cern.ch/repo/conditions"
export LFC_HOST=lfc.usatlas.bnl.gov
export FRONTIER_SERVER="(serverurl=http://frontier.racf.bnl.gov:8000/frontieratbnl)(serverurl=http://lcgft-atlas.gridpp.rl.ac.uk:3128/frontierATLAS)(proxyurl=http://frontier-cache.racf.bnl.gov:3128)"
#export FRONTIER_LOG_LEVEL=warning
export FRONTIER_LOG_LEVEL=debug
export FRONTIER_LOG_FILE=frontier_client.log
#export FRONTIER_READTIMEOUTSECS=60
# allow local override at end
[ -f $OSG_APP/atlas_app/local/setup.sh.local ] &amp;&amp; source $OSG_APP/atlas_app/local/setup.sh.local
export VOMS_PROXY_INFO_DONT_VERIFY_AC="true"
</file>
         <file name="/etc/auto.master">/misc  /etc/auto.misc
/net -hosts
+auto.master
/cvmfs /etc/auto.cvmfs</file>
         <file name="/etc/cvmfs/domain.d/opensciencegrid.org.local"># Local overrides for the default settings for the osg domain:
CVMFS_SERVER_URL="http://cvmfs.racf.bnl.gov:8000/cvmfs/@org@;http://oasis.opensciencegrid.org:8000/cvmfs/@org@"
CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/opensciencegrid.org.pub
</file>
         <file name="/etc/cvmfs/default.local">CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,atlas-nightlies.cern.ch,cms.cern.ch,geant4.cern.ch,oasis.opensciencegrid.org
# CVMFS_REPOSITORIES=alice.cern.ch,atlas.cern.ch,atlas-condb.cern.ch,atlas-nightlies.cern.ch,boss.cern.ch,cms.cern.ch,geant4.cern.ch,grid.cern.ch,lhcb.cern.ch,na61.cern.ch,oasis.opensciencegrid.org,sft.cern.ch
CVMFS_HTTP_PROXY="http://frontier-cache.racf.bnl.gov:3128|http://frontier-cache.racf.bnl.gov:3128;DIRECT"
CVMFS_CACHE_BASE=/home/cvmfs
CVMFS_QUOTA_LIMIT=30000
# r/w client fix for EL 6 kernel:
CVMFS_MOUNT_RW=yes
</file>
         <file name="/etc/condor/config.d/63wn_atlas.config">NodeType = "atlas"
STARTD_ATTRS = $(STARTD_ATTRS) NodeType

</file>
         <file name="/etc/profile.d/atlas.sh">#
# Setup ATLAS-specific things...
#
export RUCIO_HOME=/cvmfs/atlas.cern.ch/repo/sw/ddm/rucio-clients/0.1.12
export RUCIO_AUTH_TYPE=x509_proxy

if [ -z "$DQ2_HOME" ]; then
. /cvmfs/atlas.cern.ch/repo/sw/ddm/latest/setup.sh
fi
export VOMS_PROXY_INFO_DONT_VERIFY_AC="true"
export VO_ATLAS_SW_DIR=/cvmfs/atlas.cern.ch/repo/sw
export ATLAS_LOCAL_AREA=/home/osg/app/atlas_app/local

</file>
         <file name="/etc/fuse.conf">user_allow_other

</file>
  </files>
</template>