Search results for: volume fraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3318

2448 Multi-Index Performance Investigation of Rubberized Reclaimed Asphalt Mixture

Authors: Ling Xu, Giuseppe Loprencipe, Antonio D'Andrea

Abstract:

Asphalt pavement incorporating recycled and sustainable materials, including reclaimed asphalt pavement (RAP) and crumb rubber (CR) from waste tires, has become the most commonly adopted strategy for road construction. However, the adhesion and cohesion characteristics of rubberized reclaimed asphalt pavement remain ambiguous, resulting in deteriorated adhesion behavior and service life. This research investigated the effect of bonding characteristics on the rutting resistance and moisture susceptibility of rubberized reclaimed asphalt pavement, using two RAP sources with different oxidation levels and two tire rubbers with different particle sizes. First, the binder bond strength (BBS) test and bonding failure classification were conducted to analyze the surface behavior of the binder-aggregate interaction. Then, the compatibility and penetration grade of the rubberized RAP binder were evaluated by the rotational viscosity test and penetration test, respectively. The Hamburg wheel track (HWT) test with high-temperature viscoelastic deformation analysis was adopted to characterize rutting resistance. Additionally, a water boiling test was employed to evaluate the moisture susceptibility of the mixture, and the texture features were characterized with statistical parameters of image colors. Finally, a colloid structure model of the rubberized RAP binder with surface interaction was proposed, and statistical analysis was performed to reveal the correlations among the various indexes. This study concluded that the gel-phase colloid structure and the molecular diffusion of the free light fraction affect the surface interaction with the aggregate, determining the bonding characteristics of rubberized RAP asphalt.

Keywords: bonding characteristics, reclaimed asphalt pavement, rubberized asphalt, sustainable material

Procedia PDF Downloads 50
2447 Support for and Participation in 'Spontaneous' Mass Protest in Iceland: The Moderating Effects of Biographical Availability, Critical Mass, and Social Embeddedness

Authors: Jon Gunnar Bernburg

Abstract:

The present study addresses a topic that is fundamental to social movement theory, namely, the contingent link between movement support and movement participation. Usually, only a small fraction of those who agree with the cause of a social movement is mobilized into participating in it (a pattern sometimes referred to as 'the collective action problem'). However, historical moments sometimes emerge when many supporters become mobilized to participate in the movement, greatly enhancing the chance of movement success. By studying a case in point, this paper addresses the limited work on how support and participation are related at such critical moments. Specifically, the paper examines the association between supporting and participating in a huge 'pro-democracy' protest in Iceland in April 2016, in the wake of the global Panama Papers scandal. Organized via social media by only a handful of activists, but supported by a majority of Icelanders, the protest attracted about a fourth of the urban population, leading to a snap election and government change. Surveying Iceland’s urban population, this paper tests hypotheses about the processes mobilizing supporters to participate in the protest. The findings reveal how variables derived from the theories of biographical availability (males vs. females, working class vs. professionals), critical mass (expectations, prior protest success), and social embeddedness (close ties with protesters) moderate the association between protest support and participation. The study helps to account for one of the largest protests in Iceland’s history while contributing to the theory about how historical contexts shape the behavior of movement supporters.

Keywords: Iceland, crisis, protest support vs. participation, theories of mass mobilization

Procedia PDF Downloads 224
2446 Identification of Clay Mineral for Determining Reservoir Maturity Levels Based on Petrographic Analysis, X-Ray Diffraction and Porosity Test on Penosogan Formation Karangsambung Sub-District Kebumen Regency Central Java

Authors: Ayu Dwi Hardiyanti, Bernardus Anggit Winahyu, I. Gusti Agung Ayu Sugita Sari, Lestari Sutra Simamora, I. Wayan Warmada

Abstract:

The Penosogan Formation sandstone, of Middle Miocene age, has been identified as a potential reservoir based on samples from sandstone outcrops in Kebakalan and Kedawung villages, Karangsambung sub-district, Kebumen Regency, Central Java. This research employs the following analytical methods: petrography, X-ray diffraction (XRD), and porosity testing. Based on the presence of micritic sandstone, muddy micrite, and muddy sandstone, the Penosogan Formation sandstone has a fine-to-coarse grain size and medium-to-fine sorting. The sandstone is composed mostly of plagioclase, skeletal grains, and traces of micrite. Clay minerals amount to 10% based on petrographic analysis and appear to envelop the grains, which reduces the porosity of the rock. The porosity types are as follows: interparticle, vuggy, channel, and shelter, with an equant form of cement. The diagenetic processes involved are compaction, cementation, authigenic mineral growth, and dissolution due to feldspar alteration. The maturity of the reservoir is indicated by the X-ray diffraction results: the clay mineral fraction, treated with ethylene glycol solution, shows transformation from smectite to illite. The porosity test showed that the Penosogan Formation sandstone has a porosity value of 22% according to the Koesoemadinata (1980) classification. This indicates a high maturity, which strongly influences the reservoir quality of the Penosogan Formation sandstone.

Keywords: sandstone reservoir, Penosogan Formation, smectite, XRD

Procedia PDF Downloads 163
2445 Effect of Naphtha in Addition to a Cycle Steam Stimulation Process Reducing the Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents in cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, when this ratio increases significantly. The mobility of the upgraded crude oil increases due to structural changes in its components, which are reflected in decreased density and viscosity. In the present work, the effects of temperature, time, and weight percentage of naphtha were evaluated using a 2³ factorial design of experiments. From the analysis of variance (ANOVA) and a Pareto diagram, it was possible to identify the effect of each variable on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8° API and 3500 cP. The temperature, reaction time, and naphtha percentage ranged over 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density, with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity reduction was on the order of 55 to 70%. Simulated distillation, according to ASTM D7169, revealed significant conversion of the 315 °C+ fraction. From nuclear magnetic resonance (NMR), Fourier-transform infrared (FTIR), and ultraviolet-visible (UV-VIS) spectroscopy, it was determined that the increased yield of light fractions in the upgraded crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool for improved recovery processes.
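A two-level factorial analysis like the one described can be sketched numerically: the main effect of each factor is the mean response at its high level minus the mean at its low level. The factor ranges below match those reported (temperature 270-300 °C, time 48-66 h, naphtha 3-9 wt%), but the viscosity-reduction responses are invented for illustration, not measured values from this study.

```python
# Hypothetical sketch of a 2^3 factorial main-effect calculation.
# Responses are invented % viscosity reductions, one per corner run.
from itertools import product

levels = list(product([-1, 1], repeat=3))  # coded levels for (temperature, time, naphtha)
response = [55.0, 61.0, 57.0, 63.0, 60.0, 66.0, 62.0, 70.0]  # same order as `levels`

def main_effect(factor_index):
    """Mean response at the high level minus mean response at the low level."""
    high = [y for lv, y in zip(levels, response) if lv[factor_index] == 1]
    low = [y for lv, y in zip(levels, response) if lv[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effects = {name: main_effect(i)
           for i, name in enumerate(["temperature", "time", "naphtha_wt%"])}
print(effects)
```

Ranking the absolute effects reproduces, in miniature, what a Pareto diagram of the ANOVA results conveys.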

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 160
2444 Producing of Amorphous-Nanocrystalline Composite Powders

Authors: K. Tomolya, D. Janovszky, A. Sycheva, M. Sveda, A. Roosz

Abstract:

CuZrAl amorphous alloys have attracted high interest due to their unique physical and mechanical properties, which can be enhanced by the addition of Ni and Ti. It is known that these properties can also be enhanced by crystallization of the amorphous alloys, creating nanocrystallites in the matrix. The present work aims to produce nanosized-crystalline-particle-reinforced amorphous matrix composite powders by crystallization of amorphous powders. As the first step, the amorphous powders were synthesized by ball-milling of crystalline powders: (Cu49Zr45Al6)80Ni10Ti10 and (Cu49Zr44Al7)80Ni10Ti10 (at%) alloys were ball-milled for 12 hours in order to reach a fully amorphous structure. The impact energy of the balls during milling causes the change of structure in the powders. Scanning electron microscope (SEM) images show that the phases first mixed and then changed into a fully amorphous matrix. Furthermore, nanosized particles in the amorphous matrix were crystallized by heat treatment of the amorphous powders, which was confirmed by TEM measurements. It was important to determine the temperature at which the amorphous phase starts to crystallize. Amorphous alloys have a characteristic heating curve and characteristic temperatures, which can be measured by differential scanning calorimetry (DSC). A typical DSC curve of an amorphous alloy exhibits an endothermic event characteristic of the equilibrium glass transition (Tg) and a distinct undercooled liquid region, followed by one or two exothermic events corresponding to crystallization processes (Tp). After measuring the DSC traces of the amorphous powders, the annealing temperatures were determined between Tx and Tp. In our experiments, several temperatures from this annealing range were selected, and the dependence of hardness on the fraction of crystallized nanoparticles was investigated.
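Reading characteristic temperatures off a DSC trace can be illustrated with a minimal sketch: Tp as the exothermic peak maximum and the crystallization onset Tx as the point where the signal first rises a set fraction above baseline. The Gaussian "trace" and the 5% onset criterion below are assumptions for demonstration; real DSC data and the instrument's onset convention would be used instead.

```python
# Minimal sketch: locate onset (Tx) and peak (Tp) of a synthetic DSC exotherm.
import math

temps = [300 + i for i in range(201)]  # 300-500 C scan, 1 C steps
trace = [math.exp(-((t - 430) ** 2) / (2 * 15 ** 2)) for t in temps]  # synthetic exothermic event

def characteristic_temperatures(temps, trace, onset_frac=0.05):
    """Return (Tx, Tp): onset where signal exceeds onset_frac of peak, and peak maximum."""
    peak = max(trace)
    tp = temps[trace.index(peak)]
    tx = next(t for t, y in zip(temps, trace) if y >= onset_frac * peak)
    return tx, tp

tx, tp = characteristic_temperatures(temps, trace)
print(tx, tp)  # annealing temperatures would then be chosen between Tx and Tp
```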

Keywords: amorphous structure, composite, mechanical milling, powder, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), transmission electron microscopy (TEM)

Procedia PDF Downloads 435
2443 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study, the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality compared to the cork provided by other Quercus trees. This study aims to optimize the alkaline-catalyzed liquefaction conditions with respect to several parameters. To better understand the chemical characteristics of Quercus cerris bark, a complete chemical analysis was performed. The liquefaction was carried out in a double-jacketed reactor heated with oil, using glycerol and a glycerol/ethylene glycol mixture as solvents and potassium hydroxide as catalyst, while varying the temperature, liquefaction time, and granulometry. Due to the low liquefaction efficiency of the first experimental runs, different washing techniques after the filtration step were studied, using methanol and methanol/water. The chemical analysis showed that Quercus cerris bark is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hot-water-insoluble hemicelluloses (ca. 23%). In the liquefaction stage, the highest yields were obtained using the glycerol/ethylene glycol mixture as solvent, with a time and temperature of 120 minutes and 200 ºC, respectively. A granulometry of <80 mesh leads to slightly better results, although this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which shows that this procedure is effective at liquefying the suberin content and the lignocellulosic fraction.
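The "liquefaction percentage" used to compare conditions in studies of this kind is typically computed from the dry residue left after filtration relative to the starting bark mass. A minimal sketch, with illustrative masses that are not measured values from this work:

```python
# Hedged sketch of a liquefaction-yield calculation from filtration residue.
def liquefaction_yield(initial_bark_g, dry_residue_g):
    """Percent of bark (by mass) converted to liquefied product."""
    return 100.0 * (initial_bark_g - dry_residue_g) / initial_bark_g

print(liquefaction_yield(10.0, 3.5))  # e.g. 3.5 g residue from 10 g bark -> 65.0 %
```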

Keywords: liquefaction, Quercus cerris, polyalcohol liquefaction, temperature

Procedia PDF Downloads 325
2442 The Potential Fresh Water Resources of Georgia and Sustainable Water Management

Authors: Nana Bolashvili, Vakhtang Geladze, Tamazi Karalashvili, Nino Machavariani, George Geladze, Davit Kartvelishvili, Ana Karalashvili

Abstract:

Fresh water is the major natural resource of Georgia. The average perennial sum of river runoff in Georgia is 52.77 km³, of which 9.30 km³ inflows from abroad. The major volume of transit river runoff is ascribed to the Chorokhi river. The average perennial runoff is 41.52 km³ in Western Georgia and 11.25 km³ in Eastern Georgia. The indices for Eastern and Western Georgia were calculated with 50% and 90% river runoff, respectively, while the same index for other countries is based on a 50% river runoff. Out of the total volume of resources, 133.2 m³/sec (4.21 km³) has been geologically prospected by the State Commission on Reserves and acknowledged as reserves available for exploitation, of which 48% (2.02 km³) is in Western Georgia and 2.19 km³ in Eastern Georgia. Considering the acknowledged water reserves of all categories, per capita water resources amount to 2.2 m³/day, of which the high industrial category accounts for 0.88 m³/day of fresh drinking water. According to accepted norms, the usable underground water reserves are 2.5 times higher than the long-term requirements of the country. The volume of abundant fresh-water reserves in Georgia is about 150 m³/sec (4.74 km³). Water in Georgia is consumed mostly in agriculture for irrigation purposes: 66.4% across Georgia as a whole, 72.4% in Eastern Georgia, and 38% in Western Georgia. According to the long-term forecast, the provision of the population and territory of Eastern Georgia with water resources will be adequate. The situation is somewhat different in the lower reaches of the Khrami and Iori rivers, but could easily be remedied with appropriate financing. The present-day irrigation systems in Georgia do not meet modern technical requirements; the overall efficiency of most of them varies between 0.4 and 0.6. The situation is similar for fresh water and public service water consumption.
Reorganization of these systems, installation of water meters, and introduction of new irrigation methods without water loss will substantially increase the efficiency of water use. In addition, new irrigation norms developed from agro-climatic, geographical, and hydrological perspectives will significantly reduce water waste. Taking all this into account, we estimate that about 6.0 km³ of water is necessary to irrigate agricultural lands in Georgia, of which 5.5 km³ goes to irrigated arable areas in Eastern Georgia. Increasing the water supply of Eastern Georgia's territory and population is possible by means of new water reservoirs, as the runoff of every river considerably exceeds the consumption volume. In conclusion, the fresh water resources in which Georgia is so rich could be a significant source for barter exchange and investment attraction. A certain volume of fresh water can be exported from Western Georgia quite trouble-free, without any damage to the population or hydroecosystems. The precise volume of exported water per region/time and the method/place of water consumption should be defined after the evaluation of the different hydroecosystems and detailed analyses of the water balance of the corresponding territories.

Keywords: GIS, management, rivers, water resources

Procedia PDF Downloads 358
2441 The Impact of Ultrasonicator on the Vertical and Horizontal Mixing Profile of Petrol-Bioethanol

Authors: D. Nkazi, S. E. Iyuke, J. Mulopo

Abstract:

Increasing global energy demand as well as air quality concerns have in recent years led to the search for alternative clean fuels to replace fossil fuels. One such alternative is the blending of petrol with ethanol, which has numerous advantages, such as ethanol's ability to act as an oxygenate, thus reducing carbon monoxide emissions from the exhausts of internal combustion engines. However, the hygroscopic nature of ethanol is a major concern in obtaining a perfectly homogenized petrol-ethanol fuel. This problem has led to the study of ways of homogenizing petrol-ethanol mixtures. During the blending process, the volume fractions of ethanol and petrol were studied with respect to the depth within the storage container to confirm homogenization of the blend over the storage time. The results reveal that the density of the mixture was constant. The binodal curve of the ternary diagram shows an increase of the homogeneous region, indicating improved interaction between water and petrol. The concentration distribution in the reactor showed evidence of cavitation formation, since in both directions the variation of concentration with both time and distance was found to be oscillatory. On comparing the profiles in both directions, the concentration gradient, diffusion flux, and energy and diffusion rates were found to be higher in the vertical direction than in the horizontal direction. It was therefore concluded that ultrasonication creates cavitation in the mixture, which enhances mass transfer and the mixing of ethanol and petrol. The horizontal direction was found to be the rate-limiting direction for diffusion, which suggests that the blender should have a larger height-to-diameter ratio. It is, however, recommended that further studies be done on the rate-limiting step so as to establish the actual dimensions of the reactor.
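The directional comparison of diffusion fluxes above can be illustrated with Fick's first law, J = -D · dC/dx: for the same diffusivity, a steeper concentration gradient yields a larger flux. The diffusivity and gradient values below are assumed for illustration, not measurements from this study.

```python
# Illustrative Fick's-first-law comparison of fluxes in two directions.
def diffusion_flux(D, dc, dx):
    """Fick's first law: flux (mol m^-2 s^-1) down a concentration gradient dc/dx."""
    return -D * dc / dx

D = 1.2e-9  # assumed diffusivity of ethanol in petrol, m^2/s
vertical = diffusion_flux(D, dc=-50.0, dx=0.10)    # assumed steeper gradient
horizontal = diffusion_flux(D, dc=-20.0, dx=0.10)  # assumed shallower gradient
print(vertical > horizontal)  # larger vertical flux, consistent with the abstract
```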

Keywords: ultrasonication, petrol, ethanol, concentration

Procedia PDF Downloads 356
2440 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer's, Parkinson's, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via the injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640, 0.4 mm voxels), yielding detailed structures such as microvasculature and subcortical regions, as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements learns iron concentrations in areas of interest directly and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
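The ill-posed inversion with regularization mentioned for Process I can be illustrated with a toy Tikhonov example: x = argmin ‖Ax − b‖² + λ‖x‖². The 2×2 near-singular system below is synthetic; real QSM inverts the dipole-convolution operator, typically in the Fourier domain.

```python
# Toy sketch: Tikhonov regularization stabilizes a nearly singular inversion.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])              # nearly singular -> ill-posed
x_true = np.array([1.0, 1.0])
b = A @ x_true + np.array([1e-4, -1e-4])   # noisy "field" measurement

def tikhonov(A, b, lam):
    """Closed-form regularized solution (A^T A + lam I)^-1 A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_naive = np.linalg.solve(A, b)   # unregularized: amplified by the noise
x_reg = tikhonov(A, b, lam=1e-3)  # regularized: close to x_true
print(np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true))
```

The regularizer trades a small bias for a large reduction in noise amplification, which is the role the prior belief plays in QSM reconstruction.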

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 120
2439 Distribution of Traffic Volume at Fuel Station during Peak Hour Period on Arterial Road

Authors: Surachai Ampawasuvan, Supornchai Utainarumol

Abstract:

Most fuel station customers who drive on major arterial roads want to use the stations to refuel their vehicles during the journey to their destinations. According to surveys of the traffic volume of vehicles using fuel stations, conducted with video cameras, automatic counting tools, and questionnaires, most users prefer to use fuel stations on holidays rather than on working days, and in the morning rather than in the evening. When comparing the distribution patterns of the traffic volume of vehicles using fuel stations obtained by video cameras and by automatic counting tools, there is no significant difference. When the peak hour rate from the questionnaires, at 13 to 14 percent, is compared with the rate obtained by the methods of the Institute of Transportation Engineers (ITE), the values are similar. However, both differ from the surveys by video camera and automatic traffic counting, which gave 6 to 7 percent, about half. This study therefore suggests that, in order to forecast the trip generation of vehicles using fuel stations on major arterial roads, which are mostly characterized by through traffic, half of the peak hour rate should be used, which would make the trip generation forecast more precise, accurate, and compatible with the surrounding environment.

Keywords: peak rate, trips generation, fuel station, arterial road

Procedia PDF Downloads 388
2438 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems

Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme

Abstract:

Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically require each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, ceilings, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become more popular, there is a need for ever smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among the camera distance from the subject, the pixel density, and the field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel shrinks, increasing the image resolution. However, the cross section of the capture volume also decreases, reducing the visible area, so additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and set of subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (LxWxH) MoCap volume and designed a mounting structure for the cameras using SOLIDWORKS (Dassault Systèmes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing.
The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility, and it allows different camera setup options to be compared on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras.
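The distance/resolution tradeoff discussed above can be made concrete with a little geometry: for a fixed FOV, the width of the scene covered by one pixel grows linearly with camera distance. The 60° FOV and 2048-pixel sensor width below are assumed values for illustration, not the study's hardware; only the 2.7 m maximum distance comes from the abstract.

```python
# Quick numeric sketch of per-pixel scene coverage vs. camera distance.
import math

def pixel_footprint_mm(distance_m, fov_deg=60.0, pixels=2048):
    """Approximate width of the scene imaged by one pixel, in mm."""
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return 1000.0 * scene_width_m / pixels

near = pixel_footprint_mm(1.5)  # camera close to the subject
far = pixel_footprint_mm(2.7)   # the study's maximum camera distance
print(near < far)  # closer camera -> smaller footprint -> finer resolution
```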

Keywords: motion capture, cameras, biomechanics, gait analysis

Procedia PDF Downloads 302
2437 Hounsfield-Based Automatic Evaluation of Volumetric Breast Density on Radiotherapy CT-Scans

Authors: E. M. D. Akuoko, Eliana Vasquez Osorio, Marcel Van Herk, Marianne Aznar

Abstract:

Radiotherapy is an integral part of treatment for many patients with breast cancer. However, side effects such as fibrosis or erythema can occur. If patients at higher risk of radiation-induced side effects could be identified before treatment, they could be given more individualized information about the risks and benefits of radiotherapy. We hypothesize that breast density is correlated with the risk of side effects and present a novel method for its automatic evaluation based on radiotherapy planning CT scans. Methods: 799 supine CT scans of breast radiotherapy patients were available from the REQUITE dataset. The methodology was first established in a subset of 114 patients (cohort 1) before being applied to the whole dataset (cohort 2). All patients were scanned in the supine position, with arms up, and the treated (ipsilateral) breast was identified. Manual expert contours were available for both the ipsilateral and contralateral breast in 96 patients of cohort 1. Breast tissue was segmented using atlas-based automatic contouring software, ADMIRE® v3.4 (Elekta AB, Sweden). Once validated, the automatic segmentation method was applied to cohort 2. Breast density was then investigated by thresholding voxels within the contours, using an Otsu threshold and pixel intensity ranges based on Hounsfield units (-200 to -100 for fatty tissue, and -99 to +100 for fibro-glandular tissue). Volumetric breast density (VBD) was defined as the volume of fibro-glandular tissue / (volume of fibro-glandular tissue + volume of fatty tissue). A sensitivity analysis was performed to verify whether the calculated VBD was affected by the choice of breast contour. In addition, we investigated the correlation between VBD and patient age and breast size. VBD values were compared between ipsilateral and contralateral breast contours. Results: Estimated VBD values were 0.40 (range 0.17-0.91) in cohort 1, and 0.43 (0.096-0.99) in cohort 2.
We observed ipsilateral breasts to be denser than contralateral breasts. Breast density was negatively associated with breast volume (Spearman: R=-0.5, p-value < 2.2e-16) and age (Spearman: R=-0.24, p-value = 4.6e-10). Conclusion: VBD estimates could be obtained automatically on a large CT dataset. Patients’ age or breast volume may not be the only variables that explain breast density. Future work will focus on assessing the usefulness of VBD as a predictive variable for radiation-induced side effects.
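The VBD computation described above can be sketched directly from the stated Hounsfield-unit ranges (-200 to -100 HU for fatty tissue, -99 to +100 HU for fibro-glandular tissue). The voxel array below is synthetic; real input would be the CT voxels inside a breast contour.

```python
# Sketch of the VBD calculation using the abstract's HU thresholds.
import numpy as np

def volumetric_breast_density(hu_voxels):
    """VBD = fibro-glandular volume / (fibro-glandular + fatty volume)."""
    fatty = np.sum((hu_voxels >= -200) & (hu_voxels <= -100))
    fibro = np.sum((hu_voxels >= -99) & (hu_voxels <= 100))
    return fibro / (fibro + fatty)

voxels = np.array([-150, -120, -180, -50, 0, 30])  # 3 fatty, 3 fibro-glandular
print(volumetric_breast_density(voxels))  # -> 0.5
```

Because the ratio only counts voxels inside the two HU windows, voxels outside both ranges (e.g., air or bone included by a loose contour) drop out, which is one reason the sensitivity to the choice of breast contour deserves the check the authors describe.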

Keywords: breast cancer, automatic image segmentation, radiotherapy, big data, breast density, medical imaging

Procedia PDF Downloads 125
2436 Microstructure Analysis of TI-6AL-4V Friction Stir Welded Joints

Authors: P. Leo, E. Cerri, L. Fratini, G. Buffa

Abstract:

The friction stir welding process uses an inert rotating mandrel and a force on the mandrel normal to the plane of the sheets to generate frictional heat. The heat and the stirring action of the mandrel create a bond between the two sheets without melting the base metal. In fact, the use of a solid-state welding process limits the occurrence of defects due to gas in a melt pool and avoids the negative metallurgical effects strictly connected with the phase change. The industrial importance of the Ti-6Al-4V alloy is well known: it provides an exceptionally good balance of strength, ductility, fatigue, and fracture properties, together with good corrosion resistance and good metallurgical stability. In this paper, the authors analyze the microstructure of friction stir welded joints of Ti-6Al-4V processed at the same travel speed (35 mm/min) but at different rotation speeds (300-500 rpm). The microstructure of the base material (BM), as revealed by both optical microscopy and scanning electron microscopy, is not homogeneous. It is characterized by a distorted α/β lamellar microstructure, together with smashed zones of fragmented β layers and retained β grain boundary phase. The BM was welded in the as-received state, without any previous heat treatment. The microstructure of the transverse and longitudinal sections of the joints is also not homogeneous. Close to the top of the weld cross-sections, a much finer microstructure than the initial condition was observed, while in the center of the joints the microstructure is less refined. Along the longitudinal sections, the microstructure is characterized by equiaxed grains and lamellae. Both the length and the area fraction of the lamellae increase with distance from the longitudinal axis. The hardness of the joints is higher than that of the BM. As the process temperature increases, the average microhardness slightly decreases.

Keywords: friction stir welding, microhardness, microstructure, Ti-6Al-4V

Procedia PDF Downloads 371
2435 Defect Correlation of Computed Tomography and Serial Sectioning in Additively Manufactured Ti-6Al-4V

Authors: Bryce R. Jolley, Michael Uchic

Abstract:

This study presents initial results toward the correlative characterization of inherent defects in additively manufactured (AM) Ti-6Al-4V. X-ray computed tomography (CT) defect data are compared and correlated with microscopic photographs obtained via automated serial sectioning. The metal AM specimen was manufactured from virgin Ti-6Al-4V powder to specified dimensions. A post-contour was applied during the fabrication process with a speed of 1050 mm/s, a power of 260 W, and a width of 140 µm. The specimen was stress-relief heat-treated at 16°F for 3 hours. Microfocus CT imaging of a predetermined region of the build was conducted with parameters optimized for additively manufactured Ti-6Al-4V. After CT imaging, a modified RoboMet.3D (version 2) was employed for serial sectioning and optical microscopy characterization of the same region. Montages of bright-field reflection, 12-bit monochrome optical images with sub-micron resolution were captured in an automated fashion. These optical images were post-processed, including thresholding and segmentation to improve the visualization of defect features, to produce 2D and 3D data sets. The defects observed by optical imaging were compared and correlated with the defects observed by CT imaging over the same region of the specimen. Quantitative results of area fraction and equivalent pore diameter obtained via each method are presented for this correlation. It is shown that microfocus CT imaging does not capture all inherent defects within this Ti-6Al-4V AM sample. Best practices for this correlative effort are presented, as well as future research directions resulting from the current study.
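The "equivalent pore diameter" compared between the CT and serial-sectioning data is commonly taken as the diameter of a circle with the same area as the segmented pore, d_eq = 2·√(A/π). A minimal sketch, with invented pore areas rather than values from this study:

```python
# Hedged sketch: equivalent (area-based) pore diameter from segmented areas.
import math

def equivalent_diameter(area_um2):
    """Diameter (um) of a circle whose area equals the measured pore area."""
    return 2.0 * math.sqrt(area_um2 / math.pi)

pore_areas = [50.0, 200.0, 450.0]  # illustrative segmented pore areas, um^2
diams = [equivalent_diameter(a) for a in pore_areas]
print([round(d, 2) for d in diams])
```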

Keywords: additive manufacture, automated serial sectioning, computed tomography, nondestructive evaluation

Procedia PDF Downloads 132
2434 Characterization of Tailings From Traditional Panning of Alluvial Gold Ore (A Case Study of Ilesa - Southwestern Nigeria Goldfield Tailings Dumps)

Authors: Olaniyi Awe, Adelana R. Adetunji, Abraham Adeleke

Abstract:

Field observation revealed extensive artisanal gold mining activity in the Ilesa gold belt of southwestern Nigeria. The possibility of alluvial and lode gold deposits in commercial quantities around this location is very high, as many resident artisanal gold miners have been mining and trading alluvial gold ore in the area for decades, up to the present day. Their main process for recovering solid gold from its ore is gravity concentration using the conventional panning method. This method is simple to learn and recovers gold quickly from alluvial ore, but its effectiveness rests on rules of thumb and the artisanal miners' experience in handling the panning tool while processing the ore. Samples from five alluvial gold ore tailings dumps were collected and studied. The samples were subjected to particle size analysis and to mineralogical and elemental characterization using X-Ray Diffraction (XRD) and Particle-Induced X-ray Emission (PIXE) methods, respectively. The results showed that the tailings consisted mainly of quartz in association with albite, plagioclase, mica, gold, calcite, and sulphide minerals. The elemental composition analysis revealed a gold concentration of 15 ppm in the -90 micron particle size fraction of one of the tailings dumps investigated. These results are significant. It is recommended that heaps of panning tailings be reprocessed using other gold recovery methods, such as shaking tables, flotation, and controlled cyanidation, that can efficiently recover the fine gold particles previously lost to the panning tailings. The tailings sites should also be well controlled and monitored so that these heavy minerals do not find their way into surrounding streams and rivers, thereby causing health hazards.

Keywords: gold ore, panning, PIXE, tailings, XRD

Procedia PDF Downloads 75
2433 An Experimental Study to Control Single Droplet by Actuating Waveform with Preliminary and Suppressing Vibration

Authors: Oke Oktavianty, Tadayuki Kyoutani, Shigeyuki Haruyama, Ken Kaminishi

Abstract:

To advance the standard experimental system of the inkjet printer under development, the actual natural period, the firing limitation number in droplet weight measurement, and the observation distance in droplet velocity measurement were investigated. On the other hand, studies on controlling droplet volume in inkjet printers with the negative actuating waveform method are still limited. Therefore, the effects of a negative waveform with added preliminary and suppressing vibration on the droplet formation process, droplet shape, volume, and velocity were evaluated. Different voltages and print-head temperatures were applied to obtain the optimum preliminary and suppressing vibration. The mechanisms behind the different phenomena produced by each waveform are also discussed.

Keywords: inkjet printer, DoD, waveform, preliminary and suppressing vibration

Procedia PDF Downloads 232
2432 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model

Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu

Abstract:

The wide range of industrial applications involving boiling flows motivates the need to establish fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, were introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, the fractal model, the force balance approach, and the mechanistic frequency model were used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency, respectively. The presented wall heat flux partitioning closures were modified to consider the influence of bubbles sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed with the two-fluid model, and the standard k-ω SST model was selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and interfacial area concentration (IAC) are in good agreement with the experimental data. However, the predicted bubble velocity and Sauter mean diameter (SMD) are over-predicted. This over-prediction may be caused by considering only dispersed, spherical bubbles in the simulations. In future work, important physical mechanisms of bubbles, such as merging and shrinking while sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability over a wider range of flow predictions.
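Wall heat flux partitioning, as referenced above, decomposes the heated-wall flux into convective, quenching, and evaporative parts (the classic RPI-style formulation); the sketch below shows only the standard evaporative term and the summation, with placeholder inputs, not the modified closures with bubble sliding developed in the paper:

```python
import math

def evaporative_heat_flux(n_sites, f_dep, d_dep, rho_v, h_fg):
    """q_e = N * f * (pi/6) * d^3 * rho_v * h_fg:
    latent heat carried away by bubbles departing from nucleation sites.
    n_sites: active nucleation site density [1/m^2], f_dep: departure
    frequency [1/s], d_dep: departure diameter [m]."""
    return n_sites * f_dep * (math.pi / 6.0) * d_dep**3 * rho_v * h_fg

def partitioned_wall_heat_flux(q_conv, q_quench, q_evap):
    """Total wall heat flux as the sum of the three partitioned components."""
    return q_conv + q_quench + q_evap
```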

Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model

Procedia PDF Downloads 303
2431 3D Numerical Modelling of a Pulsed Pumping Process of a Large Dense Non-Aqueous Phase Liquid Pool: In situ Pilot-Scale Case Study of Hexachlorobutadiene in a Keyed Enclosure

Authors: Q. Giraud, J. Gonçalvès, B. Paris

Abstract:

Remediation of dense non-aqueous phase liquids (DNAPLs) represents a challenging issue because of their persistent behaviour in the environment. This pilot-scale study investigates, by means of in situ experiments and numerical modelling, the feasibility of a pulsed pumping process for a large amount of DNAPL in an alluvial aquifer. The main compound of the DNAPL is hexachlorobutadiene, an emerging organic pollutant. A low-permeability keyed enclosure was built at the location of the DNAPL source zone in order to isolate a finite undisturbed volume of soil, and a 3-month pulsed pumping process was applied inside the enclosure to exclusively extract the DNAPL. The water/DNAPL interface elevation at both the pumping and observation wells and the cumulative pumped volume of DNAPL were recorded. A total volume of about 20 m³ of pure DNAPL was recovered, since no water was extracted during the process. The three-dimensional multiphase flow simulator TMVOC was used, and a conceptual model was elaborated and generated with the pre/post-processing tool mView. The numerical model consisted of 10 layers of variable thickness and 5060 grid cells. The numerical simulations reproduce the pulsed pumping process and show an excellent match between simulated and field data for the cumulative pumped DNAPL volume, and a reasonable agreement between modelled and observed water/DNAPL interface elevations at the two wells. This study offers a new perspective in remediation, since DNAPL pumping systems may be optimised where a large amount of DNAPL is encountered.

Keywords: dense non-aqueous phase liquid (DNAPL), hexachlorobutadiene, in situ pulsed pumping, multiphase flow, numerical modelling, porous media

Procedia PDF Downloads 169
2430 Effects of Climate Change and Land Use, Land Cover Change on Atmospheric Mercury

Authors: Shiliang Wu, Huanxin Zhang

Abstract:

Mercury is well known for its negative effects on wildlife, public health, and ecosystems. Once emitted into the atmosphere, mercury can be transformed into different forms or enter the ecosystem through dry or wet deposition. Some fraction of the deposited mercury is re-emitted into the atmosphere and subjected to the same cycle. In addition, the relatively long lifetime of elemental mercury in the atmosphere enables it to be transported long distances from source regions to receptor regions. Global changes such as climate change and land use/land cover change pose significant challenges for mercury pollution control beyond the efforts to regulate anthropogenic mercury emissions. In this study, we use a global chemical transport model (GEOS-Chem) to examine the potential impacts of changes in climate and land use/land cover on the global budget of mercury as well as its atmospheric transport, chemical transformation, and deposition. We carry out a suite of sensitivity simulations to separate the impacts on atmospheric mercury associated with changes in climate and in land use/land cover. Both are found to have significant impacts on the global mercury budget, but through different pathways. Land use/land cover change primarily increases mercury dry deposition in the northern mid-latitudes over continental regions and central Africa. Climate change enhances the mobilization of mercury from the soil and ocean reservoirs to the atmosphere. Dry deposition is also enhanced over most continental areas, while the change in future precipitation dominates the change in mercury wet deposition. We find that 2000-2050 climate change could increase the global atmospheric burden of mercury by 5% and mercury deposition by up to 40% in some regions. Changes in land use and land cover also increase mercury deposition over some continental regions, by up to 40%. The change in the lifetime of atmospheric mercury has important implications for the long-range transport of mercury. Our case study shows that changes in climate and land use/land cover could significantly affect source-receptor relationships for mercury.

Keywords: mercury, toxic pollutant, atmospheric transport, deposition, climate change

Procedia PDF Downloads 477
2429 A Study of Impact of Changing Fuel Practices on Organic Carbon and Elemental Carbon Levels in Indoor Air in Two States of India

Authors: Kopal Verma, Umesh C. Kulshrestha

Abstract:

India is a predominantly rural country, and the majority of the rural population depends on burning biomass as fuel for domestic cooking on traditional stoves (chullahs) and for heating. This results in indoor air pollution and ultimately affects the health of residents. Still, only a very small fraction of the rural population has benefitted from the facilities of Liquefied Petroleum Gas (LPG) cylinders. Different regions of the country follow different methods and use different types of biomass for cooking. To study the differences in cooking practices and the resulting indoor air pollution, this study was carried out in two rural areas of India, viz. Budhwada, Madhya Pradesh and Baggi, Himachal Pradesh. The two regions differ significantly in topography, culture, and daily practices: Budhwada lies on plains while Baggi is in hilly terrain. The study of carbonaceous aerosols was carried out in four houses in each village. The residents were asked to change their practices slightly by cooking only with biomass (BB), then with a mix of biomass and LPG (BL), and finally only with LPG (LP). It was found that under BB, average values of organic carbon (OC) and elemental carbon (EC) were 28% and 44% lower in Budhwada than in Baggi, whereas a reverse trend was found in which OC and EC were higher by 56% and 26% under BL, and by 54% and 29% under LP, in Budhwada than in Baggi. A significant reduction was found both in Budhwada (OC by 49% and EC by 34%) and in Baggi (OC by 84% and EC by 73%) when cooking shifted from BB to LP. The OC/EC ratio was much higher in Budhwada (BB=9.9; BL=2.5; LP=6.1) than in Baggi (BB=1.7; BL=1.6; LP=1.3). The correlation between OC and EC was excellent in Baggi (r²=0.93) and relatively poor in Budhwada (r²=0.65). A questionnaire filled in by the residents suggested that they agree with the health benefits of using LPG over biomass burning, but the challenges of LPG supply and of changing the prevailing tradition of cooking on the chullah make this shift difficult for them.
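The OC/EC ratios and r² values quoted above follow directly from paired OC and EC measurements; a minimal sketch of both calculations (any sample numbers here are invented for illustration, not the study's data):

```python
import statistics

def oc_ec_ratio(oc, ec):
    """Mean OC/EC ratio over paired samples."""
    return statistics.mean([o / e for o, e in zip(oc, ec)])

def r_squared(x, y):
    """Coefficient of determination between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```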

Keywords: biomass burning, elemental carbon, liquefied petroluem gas, organic carbon

Procedia PDF Downloads 180
2428 Assessment of Growth Variation and Phytoextraction Potential of Four Salix Varieties Grown in Zn Contaminated Soil Amended with Lime and Wood Ash

Authors: Mir Md Abdus Salam, Muhammad Mohsin, Pertti Pulkkinen, Paavo Pelkonen, Ari Pappinen

Abstract:

Soils contaminated with metals, e.g., copper (Cu), zinc (Zn), and nickel (Ni), are among the main global environmental problems. Zn is an important element for plant growth, but excess levels may become a threat to plant survival. Soils polluted with metals may also pose risks and hazards to human health. Afforestation based on short-rotation Salix crops may be a good solution for reducing metal toxicity levels in the soil and for ecosystem restoration of severely polluted sites. In a greenhouse experiment, plant growth and zinc (Zn) uptake by four Salix cultivars grown in Zn-contaminated soils collected from a mining area in Finland were tested to assess their suitability for phytoextraction. The sequential extraction technique and inductively coupled plasma mass spectrometry (ICP-MS) were used to determine the extractable metals and evaluate the fraction of soil metals potentially available for plant uptake. The cultivars displayed resistance to the heavily polluted soils throughout the experiment. After uptake, the total mean Zn concentrations ranged from 776 to 1823 mg kg⁻¹. The average Zn uptake percentage across all cultivars and treatments ranged from 97 to 223%. Lime and wood ash addition showed a significant effect on plant dry biomass growth and on the Zn uptake percentage in most of the cultivars. The results revealed that Salix cultivars have the potential to accumulate and take up significant amounts of Zn. Ecological restoration of polluted soils could be environmentally favorable in conjunction with economically profitable practices, such as forestry and bioenergy production. As such, the utilization of Salix for phytoextraction and bioenergy purposes is of considerable interest.

Keywords: lime, phytoextraction, Salix, wood ash, zinc

Procedia PDF Downloads 146
2427 An Unbiased Profiling of Immune Repertoire via Sequencing and Analyzing T-Cell Receptor Genes

Authors: Yi-Lin Chen, Sheng-Jou Hung, Tsunglin Liu

Abstract:

The adaptive immune system recognizes a wide range of antigens by expressing a large number of structurally distinct T-cell and B-cell receptor genes. The distinct receptor genes arise from complex rearrangements called V(D)J recombination and constitute the immune repertoire. A common method of profiling the immune repertoire is to amplify recombined receptor genes using multiple primers followed by high-throughput sequencing. This multiplex-PCR approach is efficient; however, the resulting repertoire can be distorted by primer bias. To eliminate primer bias, 5' RACE is an alternative amplification approach. However, the application of the RACE approach is limited by its low efficiency (i.e., the majority of the data are non-regular receptor sequences, e.g., containing intronic segments) and the lack of convenient analysis tools. We propose a computational tool that can correctly identify non-regular receptor sequences in RACE data by aligning receptor sequences against the whole gene instead of only the exon regions, as done in all other tools. Using our tool, the remaining regular data allow for accurate profiling of the immune repertoire. In addition, the RACE approach is improved to yield a higher fraction of regular T-cell receptor sequences. Finally, we quantify the degree of primer bias of a multiplex-PCR approach by comparing it to the RACE approach. The results reveal significant differences in the frequencies of VJ combinations between the two approaches. Together, we provide a new experimental and computational pipeline for unbiased profiling of the immune repertoire. As immune repertoire profiling has many applications, e.g., tracing bacterial and viral infection, detection of T-cell lymphoma and minimal residual disease, and monitoring cancer immunotherapy, our work should benefit scientists interested in these applications.
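The tool's core classification step, flagging receptor reads whose aligned span overlaps intronic coordinates of the whole-gene reference, can be sketched as a simple interval-overlap check (the coordinates and interval layout are illustrative only, not the actual TCR locus annotation):

```python
def overlaps(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
    """True if half-open intervals [a_start, a_end) and [b_start, b_end) intersect."""
    return a_start < b_end and b_start < a_end

def is_regular(read_start: int, read_end: int, introns) -> bool:
    """A read is 'regular' if its aligned span overlaps no intron interval;
    reads touching an intron retain intronic segments and are non-regular."""
    return not any(overlaps(read_start, read_end, s, e) for s, e in introns)
```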

Keywords: immune repertoire, T-cell receptor, 5' RACE, high-throughput sequencing, sequence alignment

Procedia PDF Downloads 179
2426 Analysis of Extracellular Vesicle Interactomes of Two Isoforms of Tau Protein via SHSY-5Y Cell Lines

Authors: Mohammad Aladwan

Abstract:

Alzheimer’s disease (AD) is a widespread dementing illness with a complex and poorly understood etiology. Modeling disease-associated changes in the phosphorylation of tau protein, a protein known to mediate events essential to the onset and progression of AD, plays an important role in improving our understanding of the disease process. A main feature of AD is the abnormal phosphorylation of tau protein and the presence of neurofibrillary tangles. To evaluate the respective roles in AD of the microtubule-binding region (MTBR) and the alternatively spliced exons in the N-terminal projection domain, we constructed SHSY-5Y cell lines that stably overexpress four different species of tau protein (4R2N, 4R0N, N(E-2), N(E+2)). Since the toxicity and spreading of tau lesions in AD depend on the interactions of tau with other proteins, we performed a proteomic analysis of exosome-fraction interactomes for cell lysates and media samples isolated from the SHSY-5Y cell lines. Functional analysis of the tau interactomes based on gene ontology (GO) terms was performed using the STRING 10.5 database. The highest numbers of exosome proteome proteins and tau-associated proteins in the cell lysate were found with the 4R2N isoform (2771 and 159, respectively), with a high strength of connectivity (78%) between proteins, while the N(E-2) isoform had the highest numbers in the media proteomes (1829 proteins and 205 tau-associated proteins). Moreover, known AD markers were significantly enriched in secreted interactomes relative to lysate interactomes in the SHSY-5Y cells expressing tau isoforms lacking exons 2 and 3 in the N-terminal region. The lack of exon 2 (E-2) from tau protein may mediate tau secretion and spreading to different cells. Enriched functions in the secreted E-2 interactome include signaling and developmental pathways that have been linked to a) tau misprocessing and lesion development and b) tau secretion, and which, therefore, could play novel roles in AD pathogenesis.

Keywords: Alzheimer's disease, dementia, tau protein, neurodegenerative disease

Procedia PDF Downloads 89
2425 Comparative Study of Bread Prepared with and without Germinated Soyabean (Glycine Max) Flour

Authors: Muhammad Arsalan Mahmoo, Allah Rakha, Muhammad Sohail

Abstract:

The supplementation of wheat flour with high-lysine legume flours has positive effects on the nutritional value of bread. In the present study, germinated and ungerminated soya flour blends were prepared and used to supplement bread in variable proportions (10% and 20% of each) to check the impact on the quality and sensory attributes of the bread. The results showed a significant increase in protein, ash, and crude fat contents with increasing levels of germinated and ungerminated soya flour. However, the moisture and crude fiber contents of composite flours containing germinated and ungerminated soya flour decreased with increased supplementation. Mean values for the physical analysis (loaf volume, specific volume, weight loss, and texture force) were significantly higher in breads prepared with germinated soya bean flour. The scores assigned to sensory parameters of the breads, such as volume, crust color, symmetry, crumb color, texture, taste, and aroma, decreased significantly with increasing levels of germinated and ungerminated soya flour in the wheat flour, while crust color and taste improved slightly. The highest scores for overall acceptability were given to bread prepared from composite flour supplemented with 10% germinated soya flour.

Keywords: composite bread, protein energy malnutrition, supplementation, amino acid profile, grain legumes

Procedia PDF Downloads 418
2424 Using Complete Soil Particle Size Distributions for More Precise Predictions of Soil Physical and Hydraulic Properties

Authors: Habib Khodaverdiloo, Fatemeh Afrasiabi, Farrokh Asadzadeh, Martinus Th. Van Genuchten

Abstract:

The soil particle-size distribution (PSD) is known to affect a broad range of soil physical, mechanical, and hydraulic properties. A complete description of the PSD curve should provide more information about these properties than knowledge of only the soil textural class or the sand, silt, and clay (SSC) fractions. We compared the accuracy of 19 different models of the cumulative PSD in terms of fitting observed data from a large number of Iranian soils. Parameters of the six most promising models were correlated with measured values of the field saturated hydraulic conductivity (Kfs), the mean weight diameter of soil aggregates (MWD), bulk density (ρb), and porosity (∅). These same soil properties were also correlated with conventional PSD parameters (the SSC fractions), selected geometric PSD parameters (notably the mean diameter dg and its standard deviation σg), and several other PSD parameters (D50 and D60). The objective was to find the best predictors of several soil physical quality indices and the soil hydraulic properties. Neither SSC nor dg, σg, D50, or D60 showed a significant correlation with Kfs or logKfs. However, the parameters of several cumulative PSD models showed statistically significant correlations with Kfs and/or logKfs (|r| = 0.42 to 0.65; p ≤ 0.05). The correlation between MWD and the model parameters was also generally higher than that with the SSC fractions and dg, or with D50 and D60. Porosity (∅) and bulk density (ρb) also showed significant correlations with several PSD model parameters, with ρb additionally correlating significantly with various geometric (dg), mechanical (D50 and D60), and agronomic (clay and sand) representations of the PSD. The fitted parameters of selected PSD models furthermore showed statistically significant correlations with Kfs, MWD, and soil porosity, which may be viewed as soil quality indices.
Results of this study are promising for developing more accurate pedotransfer functions.
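The geometric parameters dg and σg mentioned above are commonly computed from the mass fractions and representative (midpoint) diameters of the particle size classes; a minimal sketch of one common log-based formulation (a simplification for illustration; the 19 cumulative PSD models compared in the study are not reproduced here):

```python
import math

def geometric_psd_params(fractions, diameters_mm):
    """dg = exp(sum f_i ln d_i);  sigma_g = exp(sqrt(sum f_i (ln d_i - ln dg)^2)).
    fractions are mass fractions summing to 1; diameters are class midpoints in mm."""
    ln_dg = sum(f * math.log(d) for f, d in zip(fractions, diameters_mm))
    var = sum(f * (math.log(d) - ln_dg) ** 2 for f, d in zip(fractions, diameters_mm))
    return math.exp(ln_dg), math.exp(math.sqrt(var))
```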

Keywords: particle size distribution, soil texture, hydraulic conductivity, pedotransfer functions

Procedia PDF Downloads 265
2423 Numerical Simulation of Two-Dimensional Flow over a Stationary Circular Cylinder Using Feedback Forcing Scheme Based Immersed Boundary Finite Volume Method

Authors: Ranjith Maniyeri, Ahamed C. Saleel

Abstract:

Two-dimensional fluid flow over a stationary circular cylinder is one of the benchmark problems in the field of fluid-structure interaction in computational fluid dynamics (CFD). Motivated by this, in the present work a two-dimensional computational model is developed using an improved version of the immersed boundary method which combines the feedback forcing scheme of the virtual boundary method with Peskin’s regularized delta function approach. Lagrangian coordinates are used to represent the cylinder, and Eulerian coordinates are used to describe the fluid flow. A two-dimensional Dirac delta function is used to transfer quantities between the solid and fluid domains. Further, the continuity and momentum equations governing the fluid flow are solved using a fractional-step-based finite volume method on a staggered Cartesian grid system. The developed code is validated by comparing the drag coefficients obtained for different Reynolds numbers with those reported by other researchers. The flow behavior is also well captured in numerical simulations over a range of Reynolds numbers. The stability of the improved immersed boundary method is analysed for different values of the feedback forcing coefficients.
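Peskin's regularized delta function referenced above spreads the Lagrangian feedback force onto nearby Eulerian grid nodes; a minimal 1D sketch of the widely used four-point cosine kernel (the 2D kernel in the method is the product of two 1D kernels; this is an illustration, not the authors' code):

```python
import math

def peskin_delta(r: float) -> float:
    """Four-point cosine-form regularized delta function (r in grid units,
    support |r| <= 2). Its samples at grid points sum to 1, so the total
    spread force is conserved."""
    if abs(r) <= 2.0:
        return 0.25 * (1.0 + math.cos(math.pi * r / 2.0))
    return 0.0
```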

Keywords: feedback forcing scheme, finite volume method, immersed boundary method, Navier-Stokes equations

Procedia PDF Downloads 297
2422 Potential Impacts of Warming Climate on Contributions of Runoff Components from Two Catchments of Upper Indus Basin, Karakoram, Pakistan

Authors: Syed Hammad Ali, Rijan Bhakta Kayastha, Ahuti Shrestha, Iram Bano

Abstract:

The hydrology of the Upper Indus basin is not well understood due to the intricacies of its climate and geography and the scarcity of data above 5000 meters above sea level, where most of the precipitation falls as snow. The main objective of this study is to quantify the contributions of the different components of runoff in the Upper Indus basin. To achieve this goal, the modified positive degree-day model (MPDDM) was used to simulate runoff and investigate its components in two catchments of the Upper Indus basin, the Hunza and Gilgit River basins. These two catchments were selected because of their different glacier coverage, contrasting area distribution at high altitudes, and significant impact on the Upper Indus River flow. The components of runoff, snow-ice melt and rainfall-base flow, were identified by the model. The simulation results show that the MPDDM achieves good agreement between observed and modeled runoff for these two catchments and that the effects of snow-ice melt depend mainly on the catchment characteristics and the glaciated area. For the Gilgit River basin, the largest contributor to runoff is rainfall-base flow, whereas a large contribution of snow-ice melt is observed in the Hunza River basin due to its large fraction of glaciated area. This research will not only contribute to a better understanding of the impacts of climate change on the hydrological response of the Upper Indus, but will also provide guidance for the development of hydropower potential and water resources management, and offer a possible evaluation of future water quantity and availability in these catchments.
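The positive degree-day concept behind the MPDDM relates melt to the accumulated sum of daily mean temperatures above a threshold through a degree-day factor; a minimal sketch of the classical form (the degree-day factor value is a placeholder, and the modified model includes refinements not shown here):

```python
def pdd_melt(daily_mean_temps_c, ddf_mm_per_degday, threshold_c=0.0):
    """Melt (mm w.e.) = DDF * sum of positive degree-days above the threshold.
    ddf_mm_per_degday differs for snow and ice and must be calibrated."""
    pdd = sum(max(t - threshold_c, 0.0) for t in daily_mean_temps_c)
    return ddf_mm_per_degday * pdd
```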

Keywords: future discharge projection, positive degree day, regional climate model, water resource management

Procedia PDF Downloads 339
2421 A Study on the Different Components of a Typical Back-Scattered Chipless RFID Tag Reflection

Authors: Fatemeh Babaeian, Nemai Chandra Karmakar

Abstract:

Chipless RFID is a wireless system for tracking and identification which uses passive tags for encoding data. The advantage of a chipless RFID tag is that it is planar and printable on different low-cost materials such as paper and plastic. The printed tag can be attached to different items at the labelling level. Since the price of a chipless RFID tag can be as low as a fraction of a cent, this technology has the potential to compete with conventional optical barcode labels. However, due to the passive structure of the tag, processing of the reflected signal is a crucial challenge. The captured signal reflected from a tag attached to an item consists of different components: the reflection from the reader antenna, the reflection from the item, the structural mode RCS component of the tag, and the antenna mode RCS of the tag. All these components are summed in both the time and frequency domains. The reflection from the item and the structural mode RCS component can distort or saturate the frequency domain signal and cause difficulties in extracting the desired component, the antenna mode RCS. Therefore, it is necessary to study the reflection of the tag in both the time and frequency domains to better understand the nature of the captured chipless RFID signal. Further benefits of this study are finding an optimised encoding technique at the tag design level and finding the best algorithm for processing the chipless RFID signal at the decoding level. In this paper, the reflection from a typical backscattered chipless RFID tag with six resonances is analysed, and the different components of the signal are separated in both the time and frequency domains. Moreover, the time domain signal corresponding to each resonator of the tag is studied. The data for this processing were captured from simulation in CST Microwave Studio 2017. The outcome of this study is an understanding of the different components of a measured signal in a chipless RFID system and the identification of a research gap: the need for an optimum detection algorithm for tag ID extraction.
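Separating the early-time structural mode reflection from the late-time resonant antenna mode component is often done by time gating the backscattered response before transforming to the frequency domain; a minimal numpy sketch of that idea (the gate time is an illustrative placeholder, not the processing used in the paper):

```python
import numpy as np

def time_gate(signal: np.ndarray, t: np.ndarray, gate_start_s: float) -> np.ndarray:
    """Zero out the early-time response (reader/item/structural-mode reflections),
    keeping the late-time antenna mode part that carries the resonant encoding."""
    gated = signal.copy()
    gated[t < gate_start_s] = 0.0
    return gated

def to_frequency_domain(gated: np.ndarray) -> np.ndarray:
    """One-sided spectrum magnitude of the gated response, where the
    resonance notches/peaks encoding the tag ID can be read off."""
    return np.abs(np.fft.rfft(gated))
```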

Keywords: antenna mode RCS, chipless RFID tag, resonance, structural mode RCS

Procedia PDF Downloads 180
2420 Clinical Evaluation of Neutrophil to Lymphocytes Ratio and Platelets to Lymphocytes Ratio in Immune Thrombocytopenic Purpura

Authors: Aisha Arshad, Samina Naz Mukry, Tahir Shamsi

Abstract:

Background: Immune thrombocytopenia (ITP) is an autoimmune disorder. Besides platelet counts, the immature platelet fraction (IPF) can be used as a tool to predict megakaryocytic activity in ITP patients. Clinical biomarkers such as the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) reflect inflammation and can be used as prognostic markers. The present study was planned to assess these ratios in ITP and their utility in predicting prognosis after treatment. Methods: A total of 111 ITP patients and the same number of healthy individuals were included in this case-control study during the period January 2015 to December 2017. All ITP patients were grouped according to the guidelines of the International Working Group on ITP. A 3 cc blood sample was collected in an EDTA tube, and blood parameters were evaluated using a Sysmex 1000 analyzer. The ratios were calculated using the absolute counts of neutrophils, lymphocytes, and platelets. Significant (p < 0.05) differences between ITP patients and the healthy control group were determined by the Kruskal-Wallis and Dunn's tests, and Spearman's correlation test was performed, using SPSS version 23. Results: Significantly raised total leucocyte counts (TLC) and IPF, along with low platelet counts, were observed in ITP patients compared to healthy controls. Among the ITP groups, a very low platelet count, with a median (IQR) of 2(3.8)×10⁹/L, and the highest mean (IQR) IPF of 25.4(19.8)% were observed in the newly diagnosed ITP group. The NLR rose with disease progression, with higher levels observed in P-ITP. The PLR was significantly lower in ND-ITP, P-ITP, C-ITP, and R-ITP compared to controls (p < 0.001), as platelets were fewer in number in all ITP patients. Conclusion: The IPF can be used in evaluating the bone marrow response in ITP. The simple, reliable, easily calculated NLR and PLR can be used to predict prognosis and response to treatment in ITP and, to some extent, the severity of disease.
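The two ratios are simple quotients of the absolute counts from the differential; a minimal sketch of the arithmetic described above (units are assumed consistent, e.g. ×10⁹/L; no clinical cutoffs are implied):

```python
def nlr(abs_neutrophils: float, abs_lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts."""
    return abs_neutrophils / abs_lymphocytes

def plr(platelets: float, abs_lymphocytes: float) -> float:
    """Platelet-to-lymphocyte ratio from absolute counts."""
    return platelets / abs_lymphocytes
```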

Keywords: neutrophils, platelets, lymphocytes, infection

Procedia PDF Downloads 83
2419 Uniqueness of Fingerprint Biometrics to Human Dynasty: A Review

Authors: Siddharatha Sharma

Abstract:

With the advent of technology and machines, biometrics is taking an important place in society for secure living. Security issues are a major concern in today’s world and continue to grow in intensity and complexity. Biometrics-based recognition, which involves precise measurement of the characteristics of living beings, is not a new method. Fingerprints have been used for many years by law enforcement and forensic agencies to identify and apprehend culprits. Biometrics is based on four basic principles: (i) uniqueness, (ii) accuracy, (iii) permanency, and (iv) peculiarity. In today’s world, fingerprints are the most popular and unique biometric method, claiming social benefit in government-sponsored programs; a remarkable example is UIDAI (Unique Identification Authority of India) in India. For fingerprint biometrics, the matching accuracy is very high. It has been observed empirically that even identical twins do not have similar prints. With the passage of time, there has been immense progress in sensing techniques, computational speed, operating environments, and storage capabilities, and the technology has become more convenient for users. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons, for example, workers whose cuts and bruises keep their fingerprints changing. Fingerprints are limited to human beings because of the presence of volar skin with corrugated ridges, which is unique to this species. Fingerprint biometrics has proved to be a high-level authentication system for identifying human beings. It has limitations, however: authentication becomes difficult, for example, if the ridges of the fingers or palm are moist. This paper focuses on the uniqueness of fingerprints to human beings, in comparison to other living beings, and reviews the advancement of emerging technologies and their limitations.

Keywords: fingerprinting, biometrics, human beings, authentication

Procedia PDF Downloads 312