Creating a grade-thickness long section in Leapfrog
Ron Reid and Tim Schurr

May 26, 2014

One of the most common diagrams you see in stock market releases, and on any wall in a mine office, is a long section of the ore body. Where suitable, this long section will display the ore as a grade * thickness plot; it may be called a gram metre plot, a gram centimetre plot, a metre ppm plot, or even a metre per cent plot. Whatever your preference, these images are simply a long section showing the thickness of the ore body multiplied by the grade. This sort of diagram is really only useful when the ore body is tabular, such as a vein or reef, whether horizontal or vertical, or where the ore body can be represented as such. It is not really useful to represent a large high-sulphidation gold deposit or a bulk porphyry this way. If you can estimate or model the ore body using a 2D metal accumulation grid, then you can create a useful grade * thickness plot.
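
As a minimal sketch of the arithmetic involved (plain Python, nothing Leapfrog-specific, with made-up numbers), the gram metre (metal accumulation) value is simply grade multiplied by true thickness, and a grade can be recovered from a metal accumulation estimate by dividing by the estimated thickness:

    # Hypothetical intersection: 3 m of vein at 5 g/t Au
    grade_gpt = 5.0      # length-weighted grade of the intersection, g/t
    thickness_m = 3.0    # true thickness of the vein, metres

    gram_metres = grade_gpt * thickness_m    # metal accumulation = 15 gram metres

    # In a 2D estimate we model thickness and metal accumulation separately,
    # then back-calculate the grade at any estimated point:
    est_accumulation = 15.0                  # estimated metal accumulation, g/t x m
    est_thickness = 3.0                      # estimated thickness, m
    est_grade = est_accumulation / est_thickness    # back-calculated grade = 5 g/t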

I was recently asked whether I could do this in Leapfrog. I had not done it before, but a grade * thickness plot is just a calculation of the grade times the width; in many estimates we commonly model a vein or reef in two dimensions by estimating the thickness and the metal accumulation (grade * thickness) variables and then back-calculating the grade as metal accumulation / thickness. It should be possible, so I had a play. I found it is possible in LF Mining; LF Geo had me stumped, so I flicked the problem to Tim Schurr from ARANZ Geo, and I thank him for coming up with the solution, which I have included below for Leapfrog Geo users. ARANZ Geo have mentioned that they are looking into making this workflow an integral part of the Leapfrog software.

LF Mining

Using LF Mining, the best way I could think of is to assume you are working with a vein, which is usually the case for a grade by thickness view; even if it is not a true vein, if the grade lens has a depth x thickness x width this should still work. First you will need to either composite the drillhole assay data to a regular support (see my basics of grade interpolation blog for some ideas on how), or ensure you have a sample thickness field in the assay table; the latter is the best option as it allows you a little more flexibility in modelling the vein. Then create a new interval selection on either the assay table or the geology table and select each of the mineralised intervals that form the lens or vein in question.

This process allows you to select those parts of the ore body that make up a single vein or lens. This selection is then used to create a vein model. To do this you extract the vein walls based on this selection;

Of course, if the interval is already flagged in the database you can simply create a composite Region, select the code from the correct column, and use Extract Single Vein from the Processing Actions item.

Figure 1. If your assay table has a code in it you can create a composite by region as a single vein and select the vein in question.

This will create a selection based on the ore or vein flag that allows you to extract the vein walls.

Figure 2. This figure shows the composite by region as a single vein - the red zones show the grade with the footwall and hanging wall points.

From these vein points you create new vein footwall and hangingwall surfaces.

Figure 3. Vein walls have been modelled using the create surface option.

This process creates two separate interpolants that you can then combine to form a medial plane (a plane down the middle) and model the vein.

Figure 4. The process of creating a vein using the combined interpolants; this allows you to create a medial plane (green surface above) and two vein walls with which to build a new vein model.

I created a structural trend from this plane to drive the thickness and grade interpolations inside the vein domain interpolation, but this is not really necessary. Use this plane to create the new vein, selecting the relevant footwall and hanging wall points.

Figure 5. Creating a new vein from the combined interpolant is a simple process, and doing it this way commonly creates a better outcome than creating the vein without the medial surface; sometimes, though, the non-combined interpolants are the only way you can get an acceptable outcome.

Figure 6. The final vein, with the medial plane at the top and coloured by the thickness variable at the bottom; the thickness is automatically calculated and an evaluation variable is present under the vein in the file structure.

There is currently no way to evaluate either the medial plane or a composite file against this thickness variable directly. To do this we have to export the thickness to a csv file and re-import it into the numeric folder. This exports the vein mesh vertices, each with X, Y and Z, and the thickness variable.

Figure 7. The points are the mesh vertices coloured by thickness. You can adjust the spacing of the mesh points by changing the resolution of the vein; these three images show the vein with a 10m mesh, a 5m mesh and a 1m mesh. The finer the resolution, the larger the file and the slower the processing.

Figure 8. This shows the variation in size for the files shown in the images above. The 10m mesh is fine for what we need to do in this instance; it is easy to change the vein mesh to a finer one for better visual definition once we have exported the thickness variable.

You need to create a new interpolation of this thickness constrained by the vein domain. The next step, then, is to create a new domain from the vein; we can use this domain to create the new thickness interpolant that will allow us to evaluate the medial plane and the composite and assay files. It is also used to constrain the grade and gram metre interpolations.

Once the domain has been created we can create a subset of the assay data by selecting data using interpolants (for the evaluation), and interpolate the thickness variable using the re-imported vein thickness data and the vein domain. To create the thickness interpolant we just run a basic interpolate values process, selecting the domain on the Surfaces tab.

This will create a nice thickness interpolant with some arbitrary shells for visual display (below).

To evaluate the domained subset of the assay data against this thickness interpolation we first have to create the subset. This is a simple process of right-clicking on the domain and selecting Selection -> Using Interpolants, then selecting the assay points (or the composite file). Of course, to accomplish this you will need to have extracted the assay or composite data to the numeric data folder.

Once you have the subset of data you can evaluate the data against the thickness variable.

You then need to export the assay data to a csv for processing. This part is a bit manual and repetitive, especially if you have a large dataset, but it works. Create a new file where you can filter each hole in the exported csv and, for each intersection, average the X, Y and Z coordinates and the thickness variable (from the vein), calculate a weighted average grade where you weight the grade against the sample interval width, and calculate a gram metre value (Au * thickness) for each hole. The AUOZ and AGOZ fields are just used for the weighting of the grade; a straight grams calculation will work here just as well: the Au * interval is summed and divided by the total length of the interval to obtain the weighted average. Eg:

You transfer the bold line above to a new file so that each drillhole has only 1 line;

Save the new file as a new csv, eg something called Assay_Vein1_gramMetre, and import this file into the numeric folder in Leapfrog. Then generate a new interpolant for grade, thickness (both to check the process worked) and the gram_metre field, all constrained to within the Vein1 domain created earlier.
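
The manual spreadsheet step described above can also be scripted. Below is a minimal sketch using pandas, assuming the exported csv has hypothetical columns HOLEID, X, Y, Z, FROM, TO, AU and THICKNESS (adjust the names to match your own export); it collapses the data to one line per hole with a length-weighted grade and a gram metre value:

    import pandas as pd

    # Domain-selected assay subset exported from Leapfrog (column names are assumptions).
    df = pd.read_csv("Assay_Vein1_subset.csv")
    df["LENGTH"] = df["TO"] - df["FROM"]          # sample interval length
    df["AU_X_LEN"] = df["AU"] * df["LENGTH"]      # grade x length, per sample

    rows = []
    for hole, g in df.groupby("HOLEID"):
        total_len = g["LENGTH"].sum()
        wtd_grade = g["AU_X_LEN"].sum() / total_len   # length-weighted average grade
        thickness = g["THICKNESS"].mean()             # vein thickness evaluated on the samples
        rows.append({
            "HOLEID": hole,
            "X": g["X"].mean(),                       # intersection midpoint coordinates
            "Y": g["Y"].mean(),
            "Z": g["Z"].mean(),
            "AU": wtd_grade,
            "THICKNESS": thickness,
            "GRAM_METRE": wtd_grade * thickness,      # grade x thickness
        })

    pd.DataFrame(rows).to_csv("Assay_Vein1_gramMetre.csv", index=False)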

You then evaluate the interpolants onto the medial plane for presentation purposes. The results are as follows:

Grade

Figure 9. Grade, interpolated shells at the top, medial plane evaluation below.

Vein thickness

Figure 10. Vein thickness, interpolated shells at the top, medial plane evaluation below.

Gram Metres

Figure 11. Gram metre calculation, interpolated shells at the top, medial plane evaluation below.

A finer mesh on the medial plane will of course give you a smoother result in this version of the display, at the cost of slower processing and a larger file size, eg:

Figure 12. Medial plane of the vein coloured by GM values, but on a 0.5m mesh rather than the 2m mesh shown previously.

LF Geo

This is possible in LF Geo; the trouble is that the vein thickness evaluation is not accessible like it is in LF Mining, ie you cannot export the native thickness evaluation from the vein. This is nothing intentional, just an oversight and an outcome of LF Geo being relatively new software; hopefully this will be rectified soon. To make this work with LF Geo 1.4, you have to create your own thickness evaluation using a distance interpolant. See below for the process as supplied by Tim Schurr (ARANZ Geo).

1. Model your vein as a Geological Model using the standard LF Geo vein modelling workflow; below, the vein is displayed coloured by the thickness variable.

2. Extract the surface for the vein Hangingwall into the meshes folder.

3. Create a distance interpolant from the HW surface by right-clicking on the interpolants folder and selecting New Distance Function.

4. Extract the Vein Footwall vertices and then evaluate the new HW distance interpolant against the FW points.

5. Export the FW vertices, including the evaluation, to csv, then re-import the data into the locations folder.

6. Interpolate the FW to HW distance values, using the boundary of the original vein in the geological model as the constraint. Again, be aware of the number of vertices you have; a fine mesh on your geological model will create a very large number of points. In LF Geo, however, you can downsample the points if needed using the query "id % 10 = 0".
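
If you need the same sort of downsampling outside LF Geo (for example on vein vertices exported from LF Mining), a minimal sketch with pandas, assuming the vertices have been exported to a csv (file name and column layout will depend on your export):

    import pandas as pd

    pts = pd.read_csv("vein_fw_vertices.csv")   # exported FW vertices with the distance evaluation
    keep_every = 10                             # same spirit as the "id % 10 = 0" query
    pts.iloc[::keep_every].to_csv("vein_fw_vertices_downsampled.csv", index=False)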

From here you just follow the procedure from LF Mining: run the grade interpolant, evaluate against the assay/composited points, then go to the spreadsheet program to perform the multiplication of grade and thickness. Bring it back into LF Geo and build your grade-thickness model. The only variation from the procedure is that you cannot create a medial plane in Geo; instead you simply create a new mesh, selecting the new mid points from the drillhole file you import into the location data file. These points should sit at the midpoint of the vein (being an average of the intersection), so they can be used to create the surface you evaluate against. You also need to export the assay/composite file to a csv and import it into the locations folder, as you cannot currently extract this data directly to the locations folder (another oversight). Also be aware that LF Geo does not show you what evaluations you have done; to see these you must load the wireframe into the view. I am not sure why ARANZ Geo decided to remove this functionality, perhaps an oversight also.

As an aside, for horizontal / flat volumes, LF Geo has a workflow for creating thickness grids. These are done at export, using fixed meshes in the Meshes folder. The result is a horizontal 2D grid of the thickness.

Happy modelling


Orefind Blog

Disclaimer: The views expressed in the Orefind blog posts do not represent the views of any hardware or software manufacturers, or their staff. The opinions and recommendations expressed are completely independent, and are based on the practical experience of Orefind staff. Orefind does not have any referral or financial arrangements with companies that are mentioned in our blog posts.

Basic grade interpolation in Leapfrog
Ron Reid

Jun 27, 2013

In my last post I mentioned that I composite for basic geostatistical reasons. Recently I observed a Leapfrog grade interpolation run on raw gold assays, using a linear variogram, and the result was awful to say the least, and in fact was completely wrong by any measure. From a geostatistical point of view a number of rules were broken; it is not the purpose of this article to go into these in detail, but rather to show how a simple application of some basic rules of thumb will result in a much more robust grade model. Here I will cover the basics of the database, compositing, applying a top cut, approximating a variogram and the basics of finding a "natural" cut for your first grade shell in order to define a grade domain to contain your model.

Note that in the forthcoming discussion I refer largely to processes in Leapfrog Mining, it being a more powerful and useful tool than Leapfrog Geo in its current form; however, if you are a Geo user the following still applies, though the workflow may be slightly different.

The database

As in all things, the GIGO principle, "garbage in, garbage out", applies in Leapfrog. If your database has not been properly cleaned and validated you will get erroneous results. I have noticed that many LF users will load a drill hole database and not fix the errors flagged by Leapfrog. The most common issue is retaining the below detection values as negative values, such as -0.01 for below detection gold. If this is left in the database then the interpolation will use this value as an assay and it will lead to errors in the interpolation model. It is better to flag it as a below detection sample and instruct Leapfrog in how to treat these. Where the database uses values such as -9999 for lost sample, or -5555 for insufficient sample, you will get a spectacular fail when you attempt to model this (yes, it does happen!). If you only have a few errors it is simple enough to add a few special values through the fix errors option to correct these issues (Figure 1). If you have a large number of errors the fastest way to fix them is to load the data once, export the errors in order to identify them, and then build a special values table that records each error. This is fairly simple to do and should be laid out as shown in Figure 2; you save this as a csv into the same folder as your drill hole data. This file can then be used for every LF Mining project you build as long as your field names do not change, or the particular assays do not vary, although it is not too big a job to adjust the table if need be. You then delete your database from your project and reload it, selecting the special values table at the same time (Figure 3); the database will then load with the assay issues fixed. If you are a Leapfrog Geo user you cannot do this, as the special values option has been removed; you have to manually correct and validate every error flagged (a process that can become quite tedious in a large project, and frustrating; Figure 4). Once your assay table has been validated you can move on. Technically your whole database should be validated, but I will take it as a given that the process has been completed; most people understand issues around drill holes with incorrect coordinates or drill holes that do a right hand bend due to poor quality survey data.
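
If there are many errors, the first pass of identifying them can be scripted rather than done by eye. A minimal sketch, assuming the flagged errors have been exported to a csv and using hypothetical assay column names; it simply lists the distinct offending values per column so you can decide how each should be treated before laying out the special values table as shown in Figure 2:

    import pandas as pd

    flagged = pd.read_csv("exported_assay_errors.csv")   # hypothetical export of the flagged errors
    for col in ["AU", "AG", "CU"]:                       # assumed assay columns; use your own
        as_num = pd.to_numeric(flagged[col], errors="coerce")
        special = flagged.loc[as_num.isna() | (as_num < 0), col]
        print(col, sorted(special.astype(str).unique()))  # eg ['-0.01', '-9999', 'X']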

Figure 1. Fixing a simple series of errors in Leapfrog Mining is a simple process, as the file can be adjusted to correct errors. In this case I have two error types: a series of Xs that represent insufficient sample, and -0.01, which is below detection. I can fix these using the Add Special Assay Value option and selecting Not Sampled or Below Detection.

Figure 2. With Leapfrog Mining you can create a Special Values Table that can be loaded at the database import stage; the Special Values Table should be structured as above.

Figure 3. The top image shows where you can load the Special Values table (blue arrow); this can only be loaded at the time of loading the database, and cannot be added after the database has been loaded.

Figure 4. Leapfrog Geo does not have a facility to import Special Assay Values; you must manually correct the errors every time you create a new project. Once the rules have been decided you must tick the "These rules have been reviewed" option to get rid of the red cross.

Composite your data!

I have not yet come across a drill hole database that consists of regular 1, 2 or 3 metre sampling; there is always a spread of sample lengths, from occasional sampling on geological boundaries through to bulk background composites and un-sampled lengths. This leads to a large variation in what is termed support length (Figure 5). It is also common for there to be a correlation between sample length and grade, ie smaller sample lengths where grade is higher (Figure 5). This can lead to problems with the estimation process that are well understood in geostatistics, perhaps less well understood outside of the resource geology world. Leapfrog's estimation is basically a method of kriging, and so is subject to all the foibles of any kriged estimate; these include issues of excessive smoothing and grade blow-outs in poorly controlled areas. Leapfrog has a basic blog article about how Leapfrog's modelling method works on their website. Having multiple small high grade intervals and fewer larger low grade intervals will cause the high grade to be spread around (share and share alike!).

A simple way of dealing with this is to composite your data. Compositing has a dual effect: it regularises your sample support and also acts as a first step in reducing your sample variance (Table 1). It effectively "top cuts" your data by diluting the very high grades. Now there are two sides to the top cut-composite order fence: those who swear you top cut first, and those who say nay, the top cut is always applied after the composite. I am going to declare a conflict of interest in that I sit in the top-cut-post-composite camp, for purely practical reasons, as I will explain below.

Figure 5. Graph showing sample interval length with average grade by bin. It is evident that the 1 and 2m intervals have significant grade and should not be split by compositing; bin 4 only has minor grade and bin 6 has no grade, so it is probably not a significant issue if these bins are split by compositing. I would probably composite to 4m in this case, as the 4m assay data may still be significant even if the number is not high (and I happen to know the dataset is for an open pit with benches on this order); 2m would also be a possibility that would not be incorrect.

With respect to the regularising of the sample length, this has a profound effect on the variability of the samples and will also give you a more robust and faster estimate. Selecting a composite length can be as involved as you want to make it, however there are a couple of rules of thumb. First, your composite length should relate to the type of deposit you have and the ultimate mining method; a high grade underground mine will require a different, more selective sample length than a bulk low grade open pit operation, for example. The other rule of thumb is that you should not "split" samples, ie if most of your samples are 2m, selecting a 1m or even a 5m composite will split a lot of samples, spreading the same grade above and below a composite boundary; this gives you a dataset with drastically lower variance than reality (which translates as a very low nugget in the variogram), and results in a poor estimate. If you have 2m samples you should composite at 2, 4 or 6m; if you have quite a few 4m samples then this should be pushed out to 8m if 4m is determined to be too small; the composite length should always be a multiple of those below it. This must be balanced against the original intent of the model and practicality: it is no good using 8m composites if your orebody is only 6m wide, and the longer the composite the smoother the estimate, so you are creating the same issue you are trying to avoid by not splitting samples. You will find that there is commonly very little change in the basic statistics once you get past 4-6m, which implies that there is no real reason to go larger from a purely statistical point of view; there may be, however, from a practical point of view.
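
A quick numeric version of the check shown in Figure 5 (average grade by sample-length bin) can help you pick a composite length before you build anything. A minimal sketch with pandas, assuming an exported assay table with hypothetical FROM, TO and AU columns:

    import pandas as pd

    # Hypothetical export of the raw assay table.
    assays = pd.read_csv("assays.csv").dropna(subset=["FROM", "TO", "AU"])
    assays["LENGTH"] = assays["TO"] - assays["FROM"]
    assays["LEN_BIN"] = assays["LENGTH"].round().astype(int)   # 1 m sample-length bins

    summary = assays.groupby("LEN_BIN").agg(
        n_samples=("AU", "size"),
        mean_au=("AU", "mean"),
        metres=("LENGTH", "sum"),
    )
    print(summary)   # length bins carrying significant grade should not be split by compositing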

For the sake of the argument here, let us assume a 5 metre composite will suit our requirements. Having assessed our raw data we find that we have a dataset with extreme grades that imply the requirement for a top cut of say 25gpt gold (I will stick to gold in this discussion, but the principle applies across the board), and the question becomes "should I composite pre or post applying the top cut?". Let's say the 1m samples that make up a particular composite are 2.5, 5.8, 1.6, 125.1 and 18gpt. The straight average of this composite would be 30.6gpt. If I apply the top cut first I would get 2.5, 5.8, 1.6, 25.0 and 18gpt, which will composite to 10.58gpt gold. If I apply the top cut after, my grade for the composite will be 25gpt (given the original composite grade is 30.6, cut to 25). As you can see, by applying the top cut first we are potentially wiping a significant amount of metal from the system. Also, when assessing the dataset post-composite it is sometimes the case that a dataset that required top cutting pre-compositing no longer requires it post, or that a very different top cut is required, sometimes a higher one than indicated in the raw dataset. If geological sampling has been done where sample lengths are all over the shop this becomes even more involved, as length weighting has to be applied. Besides, it is a simple process to top cut post-compositing in Leapfrog, which makes the decision easy (Figure 6). Why do we top cut in the first place, you might ask? Simply because if we were to use the data with the very high grade (say the 125.1gpt sample above) we will find that the very high grades will unduly influence the estimate and give you an overly optimistic grade interpolation.
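
The arithmetic above is easy to verify. A minimal sketch in plain Python using the same hypothetical five 1m samples:

    samples = [2.5, 5.8, 1.6, 125.1, 18.0]   # the 1 m samples, g/t Au
    top_cut = 25.0

    raw_composite = sum(samples) / len(samples)                                 # 30.6 g/t
    cut_then_composite = sum(min(s, top_cut) for s in samples) / len(samples)   # 10.58 g/t
    composite_then_cut = min(raw_composite, top_cut)                            # 25.0 g/t

    print(raw_composite, cut_then_composite, composite_then_cut)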

Applying a top cut in Leapfrog is a simple process of assessing the data in the histogram (Figure 6); Table 1 shows the statistics for the gold dataset shown in Figure 5 and Figure 6, composited to lengths of 2 and 4m. Statistical purists will say the CV values presented here are too high for a kriged estimate; true from a geostatistical point of view, but this is a real dataset and sometimes we have to play the hand we are dealt. We are not trying to generate a stock exchange reportable estimate, so do not get too caught up in this argument.

Note that you cannot currently create a composite file prior to creating an interpolant in LF Geo; you must create an interpolant and shells first, specifying a composite length, prior to analysing the composited datasets.

Table 1. Statistics of raw data for a WA lode Gold deposit, showing the effects of compositing and applying a top cut to the composite table.

Figure 6. You can pick a simple top cut that stands up to relatively rigorous scrutiny using the graph option when generating the interpolant. A widely used method is to select where the histogram breaks down; at its most basic this is where the histogram starts to get gaps. Here it is approximately 25g/t for the 2m dataset on the left but 40g/t for the 4m dataset on the right (arrowed in red; the lognormal graph is simply for better definition). You enter this value into the Upper Bound field to apply the top cut.

The Variogram

Never run a grade interpolation using a linear variogram. Doing so implies that two samples, no matter how far apart, have a direct linear relationship, which is never true in reality and can lead to some very weird results (Figure 7). A basic understanding of sample relationships is essential when running a grade interpolation: namely that there is always some form of nugget effect, ie two samples side by side will show some difference, and that as you move the samples further apart they lose any relationship to each other, until at some distance the samples bear no relationship at all. In cases where two samples side by side bear no relationship at all we have a phenomenon known as pure nugget; in this case you may as well take an average of the whole dataset, as it is nigh impossible to estimate a pure nugget deposit, as many companies have found to their cost.

Figure 7. This figure shows the effect of applying a linear isotropic variogram (blue) and a spheroidal 50% nugget variogram (yellow) to the same dataset; each surface represents a 0.3g/t shell. A significant blow-out is evident in the linear variogram.

Given that one benefit of Leapfrog is its ability to rapidly assess a deposit, it does not make sense to delve deeply into a geostatistical study of sample distribution and generate complex variograms, especially given Leapfrog's simplistic variogram tools. However, a basic understanding of how a variogram should behave for various deposit types will allow you to approximate the variogram for your dataset. For instance, the nugget value for most deposits (assuming few sample errors) will generally be the same across the world: a porphyry gold deposit will have a nugget somewhere between 10-20% of the total variance (call it 15%), epithermal gold deposits tend to sit in the 30-60% range (call it 40%) and lode gold deposits are commonly in the 50-70% range (call it 60%). Changing the nugget can have a significant effect on the outcome (Figure 8). Ranges are the inverse of this; generally the smaller the nugget, the longer the range. A porphyry deposit, for example, may have a 450m range, whereas a lode gold deposit may only have a 25m range. The Alpha variable controls the shoulder of the variogram; a higher number will give you a sharper shoulder (Figure 9). This is also related to the deposit: a porphyry deposit might have a shallower shoulder and thus an alpha value of say 3, whereas a lode gold deposit may have a very sharp shoulder and thus an alpha value of 9 may be more appropriate. The alpha values are also useful if you know your variogram has several structures; a lower alpha number helps approximate this. Beware, if you are a Leapfrog Geo user this relationship is reversed: changes to the way Leapfrog Geo works mean that an LF Geo alpha of 3 equals an LF Mining alpha of 9. Software engineers just like to keep us on our toes!

Let us say we have a lode gold deposit; we will assume a nugget of 60% of the sill, a range of say 25m, and an alpha value of 9.
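
Turning those rules of thumb into numbers for the interpolant is straightforward. A minimal sketch, assuming a composited (and top cut) grade column in an exported csv; the column name and the nugget fractions are taken from the rough ranges quoted above, not from any variography:

    import pandas as pd

    # Rule-of-thumb nugget fractions of total variance by deposit style (from the text above).
    NUGGET_FRACTION = {"porphyry": 0.15, "epithermal": 0.40, "lode_gold": 0.60}

    comps = pd.read_csv("composites_4m.csv")        # hypothetical composited assay export
    sill = comps["AU"].var()                        # total variance of the composites
    nugget = NUGGET_FRACTION["lode_gold"] * sill    # eg 60% of the sill for a lode gold deposit

    print(f"sill = {sill:.3f}, nugget = {nugget:.3f}, range ~ 25 m, alpha 9 (LF Mining)")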

Figure 8. Figure showing the effect of varying the nugget value. Top is a straight isotropic linear interpolation (linear is always a no-no); below that is a 0% nugget, then a 30% nugget and finally a 60% nugget.

Figure 9. Figure showing the effect of the Alpha variable; the graph on the top is for LF Mining, the graph on the bottom is for LF Geo. Note that the variable changes between Mining and Geo, so that a higher Alpha variable in Mining (eg 9) is equivalent to a low Alpha in Geo (eg 3).

The "natural cut"

The next step is to define the natural cut of the data. Sometimes when we run an interpolation we find that the lowest cut-off we use creates a solid box within our domain (Figure 10); this is because there are too many samples at that grade that are unconstrained, ie we are defining a background value. The first step in defining a set of shells from our interpolant is to start with one low grade shell, say 0.2gpt. As we are creating just one shell, after the interpolant has been created the one shell is quite quick to generate. We may find that 0.2 fills our domain, so generate a shell of 0.3 and re-run; continue doing this until you find the cut-off where you suddenly switch from filling the domain to defining a grade shell. This is the natural cut-off for your data (Figure 10). You can use this as the first shell in your dataset and simply add several more at relevant cut-offs for assessment and viewing, or you can generate a Grade Domain using this cut-off to constrain an additional interpolation that you can then use to select and evaluate a grid of points, effectively generating a Leapfrog block model.
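
The shell check above is visual, but a rough numeric complement (not a replacement for looking at the shells) is to see what proportion of the composited data sits above each candidate cut-off; when nearly everything is above the cut-off you are effectively contouring background, and the shell will tend to fill the domain. A minimal sketch, again assuming a composited grade column in a csv:

    import pandas as pd

    comps = pd.read_csv("composites_4m.csv")        # hypothetical composited assay export
    for cutoff in [0.1, 0.2, 0.3, 0.4, 0.5]:
        frac = (comps["AU"] >= cutoff).mean()       # fraction of composites at or above the cut-off
        print(f"{cutoff:.1f} g/t: {frac:.0%} of composites above this cut-off")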

Figure 10. Figure showing the effect of shells above and below a natural cut-off. Brown = 0.2g/t which is an unconstrained shell, blue=0.3g/t which is the constrained shell and defines the natural cut-off of the dataset.

Following the process outlined above will vastly improve your grade modelling and lead to better interpolations with better outcomes. Note I have not spoken about search ellipses, major, minor or semi-minor axes, orientations of grade etc; this is because it is all dependent upon the deposit. Your deposit may require an isotropic search, or some long cigar-shaped search, depending upon the structural, lithological and geochemical controls acting upon the deposit at the time of formation and effects post formation. The average nugget and the range of the variogram will generally conform to what is common to that deposit type around the world. A bit of study and research on the deposit is something that should already have been done as part of the exploration process; adding a quick assessment of common variogram parameters is not an arduous addition to this process. It is not a requirement to understand the intricacies of variogram modelling, nor the maths behind it, but knowing the average nugget percent and range for the deposit type should be an integral part of your investigations, and should inform your Leapfrog grade interpolations.

Happy modelling

Selecting a subset of drillholes in Leapfrog
Ron Reid

Jun 12, 2013

As Leapfrog Mining can handle many different types of files from a large number of General Mining Packages (GMP), it has almost become the de facto program for boardroom presentations for some companies. I am commonly asked to set up projects and images for Leapfrog-based presentations and slideshows, and this sometimes leads to a requirement for out of the box thinking. Two recent examples were:

1. Can you set up a scene that shows all the drilling since the last resource was published, and also shows all of the drilling completed to date on the project?

2. Can you assess the assay data from the recent round of metallurgical drilling and compare it against the adjacent resource drilling, as we think there is an issue with the metallurgical assays?

To answer both of these questions we can utilise the query building functionality of Leapfrog Mining, but to do this you must have set up your project correctly first. I have a project set up for each project that pulls data from an automatic CSV export that comes out of the database every night. Updating the Leapfrog database is as easy as using the Reload Drillholes option.

A Leapfrog Mining project must be set up correctly before the query building functionality can be used effectively.

Not only does this update the complete database to the latest data, it ensures that I always have a project that is ready to respond to any query our Executive fires my way. Well, almost always; they always manage to find something that requires more work! It also updates the geological model and grade interpolants utilising the latest data, so that we can evaluate changes to the project on an as-they-happen basis (although some of my bigger projects take up to 2-3 hours to process, so it is not something I can run at short notice). Setting up your project correctly is essential; you must ensure that you import all the attributes that you think will be required in the future. Whilst it is possible in Leapfrog to add columns to interval tables, it is not possible to add them to the collar file; it is therefore well worth adding as many columns as you can to the collar file the first time around. Columns such as DH Type, Project, Purpose, Phase, Status, Date Started, Date Complete and any other category you wish to add should be included, as this allows you to generate collar queries that select various drillhole groups.

This enables you to extract two sets of data from the one source in order to assess the different influences and contributions that the individual datasets have on the overall model, ie RC vs Diamond, Company1 vs Company2, or to select the area of influence for a certain distance around a metallurgical hole in order to test its validity and representativity. To do this in Leapfrog you follow the procedure outlined below. Create a query on the collar file by right-clicking on the collar table and selecting New Query Filter.

Select those holes you are interested in; here, a selection of holes representing metallurgical test work is selected, but you could select by Hole Name, company, hole_type etc, depending upon what fields you have imported with your dataset. You can type the query directly if you understand the syntax;
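
For example, using the field-name syntax that appears later in this post, a filter for metallurgical holes would look something like collar.purpose = Metallurgy; the exact field names depend on the columns you imported with your collar file.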

Or you can use the Query Builder to help;

The build button will search the column and present all the values that relate to that column. If you are feeling adventurous you can utilise the advanced option which allows you to build much more complex queries;

Once you have your query click OK, give the query a name and OK again;

Create a region on the assay table by right-clicking on the assay table and selecting Composite Assays; on the Volume tab you select Inside a Query Filter and select your query;

I always composite my data, if only to make it easier to read the assays on screen (also for basic statistical reasons), but you can ignore the composite option and extract the raw assays if you wish. Give the region a representative name and click OK. At this point you can answer the first question I was asked above: can I display the drilling completed since the last resource at the same time as all the drilling completed so far? Below I have loaded the composite file generated using the query filter with a drillhole trace line width of 5, and loaded the assay file with no filter, a drillhole trace line width of 1 and a flat colour (of course you could also colour it by assay).

To answer the second query you need to go further. First you extract the composited points to numeric data by right-clicking on the composite file and selecting Extract Points -> Assay Points. From this numeric data you can assess the grade distribution, model grade shells (if enough data exists) and generate distance buffers around the selected drilling in order to select areas of influence. The process of creating a distance buffer to select data is quite simple: first you create a distance buffer around the extracted composite or assay file;

Then generate a domain using that distance buffer at the distance you want to study, eg;

You could select a mesh surface here instead of the distance interpolant; I always use the interpolants rather than the meshes, as it makes the processing quicker and adjusting the domain is as simple as adjusting the threshold.

Below I have generated a distance buffer around some met holes in order to run the comparisons between the met holes and other samples within a certain distance from the Met drilling.

Once you have the distance buffer domain you can select out the data within this domain for analysis (you are presented with two options, to use the interpolant or to use the mesh; again, I always use the interpolant).

To answer the second query you need to extract two lots of data: in this case the metallurgical data only inside the domain, and then the non-metallurgical data that also sits inside the domain. The metallurgical data you already have through generating the domain to start with; to get the second, non-metallurgical set you make a copy of the metallurgy query and change the query to "is not". For example, if you had the query collar.purpose = Metallurgy, the query would become collar.purpose != Metallurgy. Then you create a composite file using this query and extract these assay points in addition to the original metallurgical set. Here I have demonstrated using a single drillhole, but the principle still holds.

The final step is to use your domain to select the relevant subset of the Not Metallurgy composite data, and you now have two datasets that are spatially related to each other. With the two sets of data it is now possible to do some basic analysis using the properties tab in Leapfrog, or you can export the data to csv files and do the statistical tests in whatever flavour of spreadsheet or stats package is your thing.
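
Once the two subsets are exported, the basic comparison itself is only a few lines in any stats package. A minimal sketch in Python, assuming each export has a hypothetical AU column (the file names are placeholders):

    import pandas as pd

    met = pd.read_csv("met_composites_in_domain.csv")         # metallurgical composites inside the buffer
    other = pd.read_csv("resource_composites_in_domain.csv")  # non-metallurgical composites inside the buffer

    for name, data in [("Metallurgy", met), ("Not Metallurgy", other)]:
        au = data["AU"].dropna()
        print(f"{name}: n={len(au)}, mean={au.mean():.2f}, "
              f"median={au.median():.2f}, CV={au.std() / au.mean():.2f}")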

