31 Dec 2018 | tags: Packer AWS
### Rationale
Let’s suppose you have multiple AWS credentials configured in your ~/.aws/credentials file. Packer gives you several ways to set the credentials to use (environment variables, or hardcoding the credentials in the Packer build file). But what if you don’t want to do either of those and instead let Packer use a specific profile from your credentials file?
### Using AWS_DEFAULT_PROFILE to let Packer know what profile to use
It’s as easy as setting the AWS_DEFAULT_PROFILE environment variable. That variable tells Packer which credentials to use, so you don’t need to export or hardcode the credentials anywhere.
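For reference, a ~/.aws/credentials file with two profiles might look like this (the profile name and the keys are placeholders):

[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

[myCredentials]
aws_access_key_id = AKIA...
aws_secret_access_key = ...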
### Usage
It’d be as simple as:
AWS_DEFAULT_PROFILE=myCredentials packer build ami.json
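If you run several builds in a row, you can also export the variable once for the whole shell session (the second build file name is just a made-up example):

export AWS_DEFAULT_PROFILE=myCredentials
packer build ami.json
packer build another-ami.json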
It seems AWS_DEFAULT_PROFILE comes from the Python AWS package [1].
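Since the same profiles also drive the AWS CLI, you can sanity-check that a profile resolves to the expected account before building (this assumes you have the AWS CLI installed):

aws sts get-caller-identity --profile myCredentials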
[1] https://github.com/hashicorp/packer/issues/2471
31 Jul 2018 | tags: Tomcat Spring
### Rationale
Let’s suppose you have a single Tomcat instance. So far so good: Tomcat keeps the sessions in memory and everybody is happy. But what if you want to keep those sessions across a Tomcat restart? Moreover, what if you want to have more than one Tomcat in a cluster?
### Persistence manager to the rescue
Tomcat ships with a module that allows you to persist the sessions either to disk or to a JDBC database. You can configure it under $CATALINA_HOME/conf/context.xml.
### Usage
Let’s suppose you want to save the sessions to a MySQL database. First, create a tomcat_sessions table with the following SQL script:
create table tomcat_sessions (
  session_id    varchar(100) not null primary key,
  valid_session char(1) not null,
  max_inactive  int not null,
  lastaccess    bigint not null,
  app_context   varchar(180),
  session_data  mediumblob,
  KEY kapp_context(app_context)
);
Let’s suppose that table lives in the test schema. Be careful with the lastaccess column: I guess there’s a bug somewhere, because the only way I managed to make it work was with that exact column name.
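If the test schema and the test user don’t exist yet, something like this should do on MySQL 5.7+ (the user and password here match the connection URL below; pick your own):

create database if not exists test;
create user if not exists 'test'@'localhost' identified by 'test';
grant all privileges on test.* to 'test'@'localhost';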
Once we have all that in place, we can set up the Tomcat instance. Edit the context.xml file and add the following (note that the ampersands in the connection URL have to be XML-escaped as &amp;amp;):
<Manager className="org.apache.catalina.session.PersistentManager"
         maxIdleBackup="2"
         maxIdleSwap="2"
         minIdleSwap="1"
         saveOnRestart="true">
  <Store className="org.apache.catalina.session.JDBCStore"
         connectionURL="jdbc:mysql://localhost/test?user=test&amp;password=test&amp;useJDBCCompliantTimezoneShift=true&amp;useLegacyDatetimeCode=false&amp;serverTimezone=UTC"
         driverName="com.mysql.jdbc.Driver"
         sessionIdCol="session_id"
         sessionValidCol="valid_session"
         sessionMaxInactiveCol="max_inactive"
         sessionLastAccessCol="lastaccess"
         sessionTable="tomcat_sessions"
         sessionAppCol="app_context"
         sessionDataCol="session_data" />
</Manager>
If there’s another Manager block, please remove it (or comment it out). Use your best judgment to pick the right values for your setup (please refer to the Tomcat documentation at [1]). Also make sure the MySQL JDBC driver jar is on Tomcat’s classpath (for instance in $CATALINA_HOME/lib), otherwise the JDBCStore won’t be able to load it. After a restart, you will start to see the Tomcat sessions in that table.
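To double-check that the persistence is working, open a session in any deployed application and query the table (using the test schema assumed above):

select session_id, app_context, valid_session, lastaccess from tomcat_sessions;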
[1] https://tomcat.apache.org/tomcat-9.0-doc/config/manager.html
21 Jul 2018 | tags: Jenkins Pipelines AWS Security
### Rationale
Let’s suppose you save some files to an S3 bucket. However, you don’t want to hardcode the credentials in your pipeline (that pipeline may be stored in a git repository).
To avoid that, you save those credentials in the Jenkins credentials repository and then use them from the Jenkins pipeline.
### Stuff to install in Jenkins
We only need the Pipeline: AWS Steps plugin. With that plugin, we can retrieve the credentials from the Jenkins credentials repository and use them in our pipelines.
### Usage
Once we have both the credentials and the plugin in place, we can code our pipeline.
I dunno if there is another way to do it, but I usually configure the AWS access like this:
pipeline {
    agent any
    options {
        timestamps()
        disableConcurrentBuilds()
        withAWS(region: 'eu-central-1', credentials: 'Jenkins-AWS-S3-Credentials')
    }
    // stages { ... } with the actual steps go here
}
The really relevant stuff is the withAWS option. Note the credentials argument: you need to use the same ID you gave the credentials when you saved them in the Jenkins credentials repository. You’d probably also need to set the region, otherwise withAWS will try to use us-west-1 (I believe).
Finally, you may call s3Upload:
steps {
    // use Groovy double quotes so the variables actually get interpolated
    s3Upload(bucket: "${bucketName}", includePathPattern: "${patternOfFiles}")
}
That will take the files in your workspace that match $patternOfFiles and upload them to the $bucketName bucket.
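As an alternative to configuring withAWS under options, the plugin also supports wrapping individual steps in a withAWS block; a minimal sketch using the same credentials ID ($bucketName and $patternOfFiles are still placeholders):

steps {
    withAWS(region: 'eu-central-1', credentials: 'Jenkins-AWS-S3-Credentials') {
        s3Upload(bucket: "${bucketName}", includePathPattern: "${patternOfFiles}")
    }
}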
You can find more information regarding the Pipeline: AWS Steps plugin on the project site (https://jenkins.io/doc/pipeline/steps/pipeline-aws/#pipeline-aws-steps).