Just a bit OT here... I'm trying to figure out whether I can do this with subversion or not: We're starting to revamp one of our websites, but are keeping the current one live and slowly pushing updated code out. Right now we have our live server and a dev server which is a mirror of the live one (or started out as such). Is there some way to incorporate subversion into this process so our developers can use it as follows:
a) They can check out files they're going to work on, then check them back in when they're done. But have it set up in such a way that those files are accessible through the normal http protocol so they can look at them through a browser (as if they're just browsing a website, NOT browsing code).
b) When we approve the updated code, a way to push it through to the live server.
I know one can have a web "view" of all the files, but that's looking at the actual code (and this will still need to be available); however, we also want the repository set up in such a way that they can view the actual site as they're working on code. And once we approve the updates, someone (probably me) will then push the new changes from this dev server onto the live one. And on the live one, we would need a way to roll back if something breaks, however unlikely.
So, has anyone ever done this? Is there documentation somewhere that anyone can point me to? Thanks!
-- A
Easy!!
Build the subversion repo on the dev web server, with a working copy checked out in the /var/website directory; thus, all updated code will be displayed live on the web server.
And to go from dev to live, a quick svn co on the live server, and Bob's your uncle.
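Something like this, for example - all the paths and hostnames here are just illustrations:

    # on the dev server: create the repo and check a working copy
    # out into the directory the web server serves
    svnadmin create /var/svn/website
    svn import /tmp/site file:///var/svn/website -m "initial import"
    svn checkout file:///var/svn/website /var/website

    # on the live server, assuming the repo is reachable over
    # svn:// or http:// (http needs mod_dav_svn on the dev box):
    svn checkout svn://dev.example.com/website /var/website
    # later, to pick up approved changes:
    svn update /var/website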
Allan Swanepoel wrote:
Easy!!
Build the subversion repo on the dev web server, with a working copy checked out in the /var/website directory; thus, all updated code will be displayed live on the web server.
And to go from dev to live, a quick svn co on the live server, and Bob's your uncle.
That will work until you forget to commit something that the site needs, so the test site works but the production one doesn't.

What you really want is to have everyone who makes changes use their own working copy (and perhaps their own test server to view it). When they commit, it should then be checked out to a QA/test location with a test server that you can trust to only have what it got through the repository. When the tests there pass, it can go to production, either by updating to the same revision or tag there, or by using "rsync -C" or some similar means to copy the tested state to the production location(s). You can use virtual hosts to combine the development and QA sites if you want, and the repo can be on the same machine or elsewhere - you'll only access working copies directly.
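For example, the promotion might be as simple as this (hostnames, paths and the revision number are only illustrations):

    # update the QA working copy to the revision being tested
    svn update -r 1234 /srv/qa/website

    # once it passes, copy the tested state to production;
    # -C makes rsync skip version-control administrative files
    rsync -aC --delete /srv/qa/website/ live.example.com:/var/website/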
Les Mikesell wrote:
What you really want is to have everyone who makes changes use their own working copy (and perhaps their own test server to view it).
At the moment, that's kinda what's set up. They have their own local copy that they work on. When they're ready, they check their new code in to our test server, which we then look at to see what works and what doesn't. Only when we approve it does it get pushed to the live server. They just need to be able to hit that test server and look at the changes they made as if they're just browsing the actual site (through a browser), so we can all get a feel for how the live site will look and behave.
Ashley M. Kirchner wrote:
Les Mikesell wrote:
What you really want is to have everyone who makes changes use their own working copy (and perhaps their own test server to view it).
[...] They just need to be able to hit that test server and look at the changes they made as if they're just browsing the actual site (through a browser), so we can all get a feel for how the live site will look and behave.
The best approach here is to set up virtual servers for views of the development working copies. Depending on the web server, you may need to run these on different ports so several can co-exist on the same machine. This lets development run at its own pace ahead of QA and different people can be working on different changes at the same time.
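For example, with apache you might give each developer's working copy its own port - everything here is invented:

    # in httpd.conf or a conf.d file; you may also need a
    # <Directory> block granting access to these paths
    Listen 8081
    <VirtualHost *:8081>
        ServerName alice.dev.example.com
        DocumentRoot /home/alice/wc/website
    </VirtualHost>

    Listen 8082
    <VirtualHost *:8082>
        ServerName bob.dev.example.com
        DocumentRoot /home/bob/wc/website
    </VirtualHost>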
On Wed, 2007-11-07 at 10:48 -0600, Les Mikesell wrote:
The best approach here is to set up virtual servers for views of the development working copies. [...]
If you use a hook that is triggered on commit, you can create a centralized testing area. The subversion book has a few examples that you can pull ideas from.
http://svnbook.red-bean.com/en/1.4/svn.reposadmin.create.html#svn.reposadmin...
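For instance, a post-commit hook (a script named post-commit in the repository's hooks/ directory; the paths here are invented) could refresh a shared test checkout on every commit:

    #!/bin/sh
    # subversion calls this as: post-commit REPOS-PATH REVISION
    REPOS="$1"
    REV="$2"
    # hooks run with an empty environment, so use full paths
    /usr/bin/svn update /srv/test/website >> /var/log/svn-deploy.log 2>&1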
--Timothy Selivanow
Les Mikesell wrote:
The best approach here is to set up virtual servers for views of the development working copies. [...]
I'm pretty much with Les, but ...
1. A development site that people play with.
2. A test site that is what development proposes QA should have.
3. A QA site, maintained by the QA folk taking releases from testing, fully as if it were the production site.
4. The production site.
Individuals may have their own sandpit. Always, the next stage updates from the previous stage's master repo.
Nobody should have the ability to update code owned by the next stage.
Sometimes, the production site will need to have unscheduled maintenance. Probably, folk at level 1 will generate emergency fixes against their version of what's in production, and someone in production, with the necessary authority, will take the fix and apply it.
It still needs to go through stages 2 & 3 ASAP. Emergency fixes are always a risk, and should only be used when the risk of using them is less than the risk of not.
This is substantially the process we followed in the 80s, in an organisation of national significance; it's not new (though there may be refinements). We used Panvalet for source code management, Panexec to manage executables, and ACF2 to control access.
The tools aren't that important, the procedures and facilities are.
<moan> If E*Trade followed this sort of procedure, its website would work</moan>
John Summerfield wrote:
I'm pretty much with Les, but ... [...]
Individuals may have their own sandpit. Always, the next stage updates from the previous stage's master repo.
I'd say individuals _must_ have their own working copy. There are different schools of thought on how development should be committed to the repository, though. Some like to make branches for every change and merge them to the trunk after they are complete. Personally I think that is too much work and like to have all new development on the trunk where everyone can easily update to pick up the changes others have made. The QA workspace is then periodically updated to pick up the new work; when everything is good, QA makes a tag or branch for the release, and production is updated to that state.
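As a concrete sketch (the URLs and revision number are made up):

    # QA periodically picks up the new trunk work
    svn update /srv/qa/website

    # everything looks good at r1500, so tag that state
    svn copy -r 1500 http://svn.example.com/repos/website/trunk \
        http://svn.example.com/repos/website/tags/rel-2007-11-07 \
        -m "tag the state QA approved"

    # production then updates to exactly that tag
    svn switch http://svn.example.com/repos/website/tags/rel-2007-11-07 /var/website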
Nobody should have the ability to update code owned by the next stage.
That's not possible with most version control systems. Everyone has their own workspace, including QA/testing, and they get the revision they ask for, regardless of what else is happening in the repository. Generally you just want to pass tags or revision numbers around to indicate what is ready for the next stage.
Sometimes, the production site will need to have unscheduled maintenance. Probably, folk at level 1 will generate emergency fixes against their version of what's in production, and someone in production, with the necessary authority, will take the fix and apply it.
You should always have release branches/tags so at any point you can either revert to an exact copy of a prior release or pull an exact copy of the version you need to fix into a workspace to fix it.
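For example (the tag and branch names here are invented):

    # roll production back to the previous release tag
    svn switch http://svn.example.com/repos/website/tags/rel-2007-10-15 /var/website

    # or make a branch from the released tag and pull it into a
    # workspace to fix it
    svn copy http://svn.example.com/repos/website/tags/rel-2007-11-07 \
        http://svn.example.com/repos/website/branches/fix-2007-11-07 \
        -m "branch for emergency fix"
    svn checkout http://svn.example.com/repos/website/branches/fix-2007-11-07 ~/wc/fix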
It still needs to go through stages 2 & 3 ASAP. Emergency fixes are always a risk, and should only be used when the risk of using them is less than the risk of not.
You still want this committed to a branch so you'll be able to repeat the procedure.
Les Mikesell wrote:
Nobody should have the ability to update code owned by the next stage.
That's not possible with most version control systems. [...]
It's essential. You don't want everyone to be able to mess with production code. Nobody can certify code they don't control. If I can apply a little vim or emacs to your repo, you're sunk. Just wait for the auditors to ask, "Who can change this source code?" and then say, "We will try."
Essentially, we cloned the libraries of source code, and each stage (to the best of my recollection) built their own executables.
If every source file's digitally signed, that's probably good enough, but old fogies (say, my generation) would probably say not.
John Summerfield wrote:
Nobody should have the ability to update code owned by the next stage.
That's not possible with most version control systems. [...]
It's essential. You don't want everyone to be able to mess with production code.
I meant that no one ever changes anything that has ever been committed. Everyone makes changes in their own workspace and a commit becomes a new revision. Anyone can check out any revision that has ever been committed. So, each stage checks out its own appropriate revision or tagged copy based on the workflow, regardless of what else is happening in the repository. It doesn't matter that someone can check in garbage; what matters is that the garbage revision is not the one that QA tests/approves/tags to go to production.
Nobody can certify code they don't control. If I can apply a little vim or emacs to your repo, you're sunk. Just wait for the auditors to ask, "Who can change this source code?" and then say, "We will try."
You've got unix filesystem permissions and SELinux at your disposal to control direct repository access. And the repository doesn't have to be on the same machine as any of the users.
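A minimal sketch, assuming the repo lives in /var/svn/website and only a dedicated svn account may touch it on disk (the account name and paths are invented):

    # lock the on-disk repository down to the service account
    chown -R svn:svn /var/svn/website
    chmod -R o-rwx /var/svn/website

    # developers then reach it only through svnserve (or https),
    # never through the filesystem
    sudo -u svn svnserve --daemon --root /var/svn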
Essentially, we cloned the libraries of source code, and each stage (to the best of my recollection) built their own executables.
If every source file's digitally signed, that's probably good enough, but old fogies (say, my generation) would probably say not.
If you don't trust your file access control, these don't matter much.
Les Mikesell wrote:
I meant that no one ever changes anything that has ever been committed. [...]
How do you propose minimising the possibility of someone of ill intent making unauthorised changes?
Think what DoD, any big bank, Qantas, Westfield or any other significant business would expect?
Lesser organisations may need to take lesser decisions - a business of four people could hardly do that - but the tradeoff is greater risk.
You've got unix filesystem permissions and SELinux at your disposal to control direct repository access. And the repository doesn't have to be on the same machine as any of the users.
Unix is weak. SELinux is cumbersome.
If you don't trust your file access control, these don't matter much.
Nobody should trust anything they're not forced to: that's what Microsoft means when it talks of "trusted computing."
Do I trust my eye surgeon to operate on my eye properly and skillfully? Yes, I do, because I must have that operation (I might make enquiries to ensure he's trustworthy, but that's another matter).
Do I trust Laurie with the keys to my house? Yes I do, because I'm hiring him to paint it, to do up the bathroom (not the lavatory; it's the room where one cleans oneself), etc.
Do I trust my bank with my money? Yes, I do, because I have to have someone to keep my surplus cash safe.
Do I trust any of the above with any of the other above? No, I don't. I don't think my banker could renovate the house or operate on my eye with the skill I expect.
In the context of the Linux kernel, Linus and his lieutenants constitute the development manager. They manage all those who would hack on the Linux code and make changes. Perhaps he also does the testing.
Think of RH, Novell, etc. as the QA folk. They take the code, clone it, do their own testing and packaging. If the RHEL kernel's broken, RH carries the responsibility of fixing it.
And then sensible enterprise users take their chosen vendor's offering, and since they don't _have_ to trust the vendor, they don't. They do their own QA before putting it into production in their own enterprise. They may well take a copy of the source code too, and some (e.g. CentOS) do exactly that.
Now, we know it's more complicated than that, but that's the basics of it.
John Summerfield wrote:
How do you propose minimising the possibility of someone of ill intent making unauthorised changes?
With revision control systems, you always have access to all versions and the ability to see the differences between them and who made the changes (most useful with text/source). If something is important, I'd expect someone to review the changes as well as performing functional tests on any generated programs.
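With subversion, for example, that history is a command away (the URLs and file name are invented):

    # who committed what, and when
    svn log -r 1490:1500 http://svn.example.com/repos/website

    # exactly what changed between two revisions
    svn diff -r 1499:1500 http://svn.example.com/repos/website/trunk

    # who last changed each line of a file
    svn blame http://svn.example.com/repos/website/trunk/index.html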
Think what DoD, any big bank, Qantas, Westfield or any other significant business would expect?
Don't they outsource everything these days?
Unix is weak. SELinux is cumbersome.
Compared to? How could you tell if something else is better?
Nobody should trust anything they're not forced to: that's what Microsoft means when it talks of "trusted computing."
Why trust the people supplying something they happen to call "trusted"?
Les Mikesell wrote:
How do you propose minimising the possibility of someone of ill intent making unauthorised changes?
With revision control systems, you always have access to all versions and the ability to see the differences between them and who made the changes. [...]
If I, a developer, can modify the repo outside the VCS (which is what I said earlier), how then do you, in Production Control, guarantee its content?
Think what DoD, any big bank, Qantas, Westfield or any other significant business would expect?
Don't they outsource everything these days?
Hardly relevant. I don't think we do.
Unix is weak. SELinux is cumbersome.
Compared to? How could you tell if something else is better?
Compared with tools I used in the 80s on another platform:
I used ACF/2 before it was CA-ACF2, and I can't find docs to refresh my mind.
Here's how to create two TSO users on z/OS:

    ADDUSER (PAJ5 ESH25)

A TSO user has access to z/OS equivalent to what a shell user has in Linux. There's much more that one can specify; see http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ICHZA441/5.3?DT=2...

To create a group:

    ADDGROUP PROJECTA

See http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ICHZA441/5.1?DT=2...

To connect a user to a group (note that case is not significant):

    connect ESH25 group(projecta)

See http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ICHZA441/5.7?SHEL...

To give projecta members update access to a file, WJE10.DEPT2.DATA:

    PERMIT 'WJE10.DEPT2.DATA' ID(RESEARCH) ACCESS(UPDATE)

See http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ICHZA441/5.17?SHE...
There are many things that _can_ be specified.
Any number of people could be given access to resources.
SQL has some fairly simple, effective primitives for controlling access to tables.
If I want to give an individual access to a file I am authorised to grant access to, then all I need to know is the name of my file, and the userid of the user. And the command.
If you don't trust your file access control, these don't matter much.
Nobody should trust anything they're not forced to: that's what Microsoft means when it talks of "trusted computing."
Why trust the people supplying something they happen to call "trusted"?
As I explained, at some point you must. Even then, you take every care. z/OS users trust IBM because IBM has a good reputation, and because they must. Even though the IBM software's imperfect.
John Summerfield wrote:
If I, a developer, can modify the repo outside the VCS (which is what I said earlier), how then do you, in Production Control, guarantee its content?
I don't see how this issue relates to the VCS at all. If you have access to change files, you can change files, whether they are the production copy or some preliminary version. But the first rule to avoid it would be to not give physical access to the machine holding the repo to anyone you don't trust, and that includes the backup tapes that might be used to reconstruct it. Next would be to avoid filesystem access by anyone you don't trust. Since we are talking about subversion, that means using https or svnserve to access the repo. Even disregarding the access control those network services provide, nothing you can do through the client API will ever change an existing repo revision. You can only commit new revisions, and it will always be possible to check the differences against the old.
Even if you do have filesystem access, making a change in the repo history would be non-trivial since the storage format is based on diffs against other revisions and there will most likely be many remote workspaces that expect specific things to be in their pegged revisions or tag copies, and someone would notice any change.
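And if you do want to check, the read-only svnlook tool and a fresh export make auditing straightforward (the paths and tag name are invented):

    # inspect a committed revision directly from the repository
    svnlook info /var/svn/website -r 1500
    svnlook changed /var/svn/website -r 1500

    # compare a clean export of the approved tag with what production runs
    svn export -q http://svn.example.com/repos/website/tags/rel-2007-11-07 /tmp/check
    diff -r --exclude=.svn /tmp/check /var/website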
Think what DoD, any big bank, Qantas, Westfield or any other significant business would expect?
Don't they outsource everything these days?
Hardly relevant. I don't think we do.
It's relevant in terms of what you know about what you should trust.
Unix is weak. SELinux is cumbersome.
Compared to? How could you tell if something else is better?
Compared with tools I used in the 80s on another platform:
How could you tell?
There are many things that _can_ be specified.
But how can you prove that they are better than the unix user/group concepts?
Any number of people could be given access to resources.
As is the case in unix which can have any number of groups.
Why trust the people supplying something they happen to call "trusted"?
As I explained, at some point you must. Even then, you take every care. z/OS users trust IBM because IBM has a good reputation, and because they must. Even though the IBM software's imperfect.
I start with the assumption that a certain number of lines of code will statistically have a certain number of accidental flaws, and that anything to do with authentication and access control will have some probability of having intentional backdoors embedded. With the unix uid/group concepts you can sort-of understand what login/su/setuid should be doing, and what has to happen during open(), which in unix is necessary to get access to anything. Even without wading through this code myself, I can have a certain amount of faith that the people who make changes and review this code can get it right most of the time and that any intentional backdoors would be noticed. I have no such faith in any of the more complex and less publicly scrutinized mechanisms, although I would be open to anyone trying to show why such faith would be justified.