Search results for: gauge repeatability and reproducibility
122 Developing Index of Democratic Institutions' Vulnerability
Authors: Kamil Jonski
Abstract:
Last year vividly demonstrated that populism and political instability can endanger democratic institutions in countries regarded as democratic transition champions (Poland) or cornerstones of the liberal order (UK, US). So-called ‘illiberal democracy’ is winning the hearts and minds of voters keen to believe that the rule of a strongman is a viable alternative to the perceived decay of western values and institutions. These developments pose a serious threat to democratic institutions (including the rule of law), which have proven critical for both personal freedom and economic development. Although scholars have proposed some structural explanations of the illiberal wave (notably focusing on inequality, stagnant incomes and drawbacks of globalization), these seem to have little predictive value. Indeed, events like Trump’s victory, Brexit or the Polish shift towards populist nationalism always came as a surprise. Intriguingly, in the case of the US election, simple rules like the ‘Bread and Peace’ model gauged the prospects of Trump’s victory better than pundits and pollsters. This paper attempts to compile a set of indicators in order to gauge various democracies’ vulnerability to populism, instability and the pursuit of ‘illiberal’ projects. Among them, it identifies the gap between the consensus assessment of institutional performance (as measured by the WGI indicators) and citizens’ subjective assessment (survey-based confidence in institutions). Plotting these variables against each other reveals three clusters of countries – ‘predictable’ (good institutions and high confidence, or poor institutions and low confidence), ‘blind’ (poor institutions, high confidence, e.g. Uzbekistan or Azerbaijan) and ‘disillusioned’ (good institutions, low confidence, e.g. Spain, Chile, Poland and the US).
It seems that this clustering – carried out separately for various institutions (legislature, executive and courts) and blended with economic indicators like inequality and living standards (using PCA) – offers a reasonably good watchlist of countries that should ‘expect the unexpected’.
Keywords: illiberal democracy, populism, political instability, political risk measurement
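The quadrant logic of the clustering described above can be sketched in a few lines. This is a hedged illustration only: the country labels and standardized scores below are invented, and the paper works with actual WGI scores and survey-based confidence rather than these placeholders.

```python
# Sketch of the 'predictable' / 'blind' / 'disillusioned' quadrant rule.
# Inputs are standardized (z-scored) values: a WGI-style consensus score
# of institutional quality and a survey-based confidence score.
def classify(wgi_z, conf_z):
    """Quadrant rule: agreement between the two measures -> 'predictable';
    poor institutions but high confidence -> 'blind'; good institutions
    but low confidence -> 'disillusioned' (the proposed watchlist)."""
    if wgi_z >= 0 and conf_z < 0:
        return "disillusioned"
    if wgi_z < 0 and conf_z >= 0:
        return "blind"
    return "predictable"

# Invented example countries: (WGI z-score, confidence z-score)
countries = {"A": (1.2, 0.8), "B": (-0.9, -0.7),
             "C": (-1.1, 0.9), "D": (1.0, -1.0)}
labels = {name: classify(w, c) for name, (w, c) in countries.items()}
```

In the paper’s terms, country D (good institutions, low confidence) would land on the ‘expect the unexpected’ watchlist.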
Procedia PDF Downloads 204
121 Experimental Investigation on Performance of Beam Column Frames with Column Kickers
Authors: Saiada Fuadi Fancy, Fahim Ahmed, Shofiq Ahmed, Raquib Ahsan
Abstract:
The worldwide use of reinforced concrete construction stems from the wide availability of reinforcing steel as well as concrete ingredients. However, concrete construction requires a certain level of technology, expertise, and workmanship, particularly in the field during construction. As a supporting technology for concrete column or wall construction, a kicker is cast as part of the slab or foundation to provide a convenient starting point for a wall or column, ensuring integrity at this important junction. For that reason, a comprehensive study was carried out here to investigate the behavior of reinforced concrete frames with different kicker parameters. To achieve this objective, six half-scale specimens of portal reinforced concrete frames with kickers and one portal frame without a kicker were constructed according to common practice in the industry and subjected to cyclic incremental horizontal loading with sustained gravity load. In this study, the experimental data, obtained in four deflection-controlled cycles, were used to evaluate the behavior of the kickers. Load-displacement characteristics were obtained; maximum loads and deflections were measured and assessed. Finally, the test results of frames constructed with three different kicker thicknesses were compared with the kickerless frame. Similar crack patterns were observed for all the specimens. From this investigation, specimens with a kicker thickness of 3″ showed better results than specimens with a kicker thickness of 1.5″, as indicated by maximum load, stiffness, initiation of the first crack and residual displacement. Despite its better performance, it could not be firmly concluded that a 4.5″ kicker thickness is the most appropriate one, because the dial gauge had to be detached during the test of that specimen.
Finally, compared with the kicker specimens, it was observed that the kickerless specimen performed relatively better.
Keywords: crack, cyclic, kicker, load-displacement
Procedia PDF Downloads 320
120 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-Speed Multiple-Valued Logic System
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement of the binary logic system to the multiple-valued logic system. The inherent negative differential resistance characteristic, ultra-high-speed switching time, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multiple-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of the input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. The switching behavior measurement was successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved.
In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new way for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
Procedia PDF Downloads 100
119 Green-Synthesized β-Cyclodextrin Membranes for Humidity Sensors
Authors: Zeineb Baatout, Safa Teka, Nejmeddine Jaballah, Nawfel Sakly, Xiaonan Sun, Mustapha Majdoub
Abstract:
Currently, the economic interests linked to the development of bio-based materials make biomass one of the most interesting areas for science development. We are interested in β-cyclodextrin (β-CD), one of the popular bio-sourced macromolecules, produced from starch via enzymatic conversion. It is a cyclic oligosaccharide formed by the association of seven glucose units. It presents a rigid, conical and amphiphilic structure with a hydrophilic exterior, allowing it to be water-soluble, and a hydrophobic interior enabling the formation of inclusion complexes, which supports its application in the elaboration of electrochemical and optical sensors. Nevertheless, the solubility of β-CD in water makes its use as a sensitive layer limited and difficult due to its instability in aqueous media. To overcome this limitation, we chose to proceed by modifying the hydroxyl groups to obtain hydrophobic derivatives, which lead to water-stable sensing layers. Hence, a series of benzylated β-CDs was synthesized in basic aqueous media in one pot. This work reports the synthesis of a new family of substituted amphiphilic β-CDs using a green methodology. The obtained β-CDs showed different degrees of substitution (DS) between 0.85 and 2.03. These organic macromolecular materials were soluble in common volatile organic solvents, and their structures were investigated by NMR, FT-IR and MALDI-TOF spectroscopies. Thermal analysis showed a correlation between the thermal properties of these derivatives and the benzylation degree. The surface properties of the thin films based on the benzylated β-CDs were characterized by contact angle measurements and atomic force microscopy (AFM). These organic materials were investigated as sensitive layers, deposited on a quartz crystal microbalance (QCM) gravimetric transducer, for humidity sensing at room temperature. The results showed that the performance of the prepared sensors is greatly influenced by the benzylation degree of the β-CD.
The partially modified β-CD (DS=1) shows a linear response with the best sensitivity, good reproducibility, low hysteresis, a fast response time (15 s) and recovery time (17 s) at relative humidity (RH) levels between 11% and 98% at room temperature.
Keywords: β-cyclodextrin, green synthesis, humidity sensor, quartz crystal microbalance
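A QCM transducer like the one above reports humidity uptake as a resonance frequency shift. As a hedged illustration of the standard Sauerbrey relation behind such gravimetric sensing (the abstract does not state the crystal’s fundamental frequency, so the 5 MHz value below is an assumption, not the paper’s setup):

```python
import math

# Standard AT-cut quartz constants for the Sauerbrey equation
RHO_Q = 2648.0      # quartz density, kg/m^3
MU_Q = 2.947e10     # quartz shear modulus, Pa

def sauerbrey_shift(f0, dm_per_area):
    """Frequency shift (Hz, negative for mass loading) for an areal mass
    loading dm_per_area (kg/m^2) on a crystal with fundamental frequency f0.
    Sauerbrey: df = -2 f0^2 dm / sqrt(rho_q * mu_q)."""
    return -2.0 * f0 ** 2 * dm_per_area / math.sqrt(RHO_Q * MU_Q)

# Assumed example: a 5 MHz crystal loaded with 1 ug/cm^2 (= 1e-5 kg/m^2)
# of adsorbed water; the classic sensitivity is about 56.6 Hz per ug/cm^2.
df = sauerbrey_shift(5e6, 1e-5)
```

The relation holds for thin, rigid films; soft or viscoelastic sensing layers generally need viscoelastic corrections beyond this sketch.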
Procedia PDF Downloads 271
118 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors
Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro
Abstract:
Zinc oxide (ZnO), pure or doped, is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. ZnO has been shown to be an excellent gas-sensing material for different gases such as CO, O2, NO2 and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option given the limitations of current commercial sensors. Different studies have shown that metal doping of ZnO (e.g. with Co, Fe, Mn) enhances its gas sensing properties. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time and long-term stability were investigated when the samples were exposed to a range of concentrations of ozone (O3) at different working temperatures. The gas sensing behavior was probed by electrical resistance measurements. The long- and short-range order structure around the Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy measurements were performed in order to identify the elements present on the film surface as well as to determine the sample composition. Microstructural characteristics of the films were analyzed by a field-emission scanning electron microscope (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at the highest cobalt content. Co K-edge XANES spectra revealed the predominance of Co2+ ions.
XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensor measurements pointed out that the ZnO and Co-doped ZnO samples exhibit good gas sensing performance in terms of reproducibility and a fast response time (around 10 s). Furthermore, the Co addition contributed to reducing the working temperature for ozone detection and improving the selective sensing properties.
Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method
Procedia PDF Downloads 247
117 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method
Authors: Murat Demir Aydin, Elanur Celebi
Abstract:
Adhesively bonded joints are used as an alternative to traditional joining methods due to the important advantages they provide. The most important consideration in the use of adhesively bonded joints is that they satisfy the safety requirements of their application. To ensure control of this condition, damage analysis of adhesively bonded joints should be performed by determining the mechanical properties of the adhesives. A review of the literature shows that the mechanical properties of adhesives are generally determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, which can be an alternative to traditional measurement methods, was used to determine the mechanical properties of adhesives. The DIC method is an optical measurement method used to determine displacement and strain fields appropriately and accurately. In this study, tensile tests were performed on Thick Adherend Shear Test (TAST) samples formed using DP410 liquid structural adhesive and steel adherends, and on bulk tensile specimens formed using DP410 liquid structural adhesive. The displacement and strain values of the samples were determined by the DIC method, and the shear stress-strain curves of the adhesive for the TAST specimens and the tensile stress-strain curves of the bulk adhesive specimens were obtained. Conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for determining the strain and displacement values of a very thin adhesive layer, such as in TAST samples, so additional approaches such as numerical methods are otherwise required. The DIC method removes these requirements and easily achieves displacement measurements with sufficient accuracy.
Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)
Procedia PDF Downloads 322
116 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception
Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaуa, Anastasija E. Runnova
Abstract:
The perception of visual signals in the brain was among the first issues discussed in terms of multistability, a concept introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrasts were shown repeatedly to different people, and the probability of choosing one of the cube’s projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered on the left middle corner was used as one of the control parameters. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with an electroencephalograph recorder (Encephalan-EEGR-19/26, Medicom MTD). To treat the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the results of the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied.
The range of cognitive noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility: the same person, studied again on another day, produces very similar data with very close levels of cognitive noise. We found an excellent agreement between the analytically deduced probability and the results obtained in the experiment. This good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as brain noise.
Keywords: bistability, brain, noise, perception, stochastic processes
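The double-well model mentioned above can be written down explicitly. The abstract does not give the paper’s exact potential or noise convention, so the following is a standard assumed form of such a model, not the authors’ equations:

```latex
% Assumed overdamped double-well dynamics for the perceptual variable x,
% with noise intensity D and a contrast-dependent bias a:
\dot{x} = -U'(x) + \sqrt{2D}\,\xi(t), \qquad
U(x) = \frac{x^{4}}{4} - \frac{x^{2}}{2} - a\,x .
% The associated Fokker--Planck equation for the probability density P(x,t):
\frac{\partial P}{\partial t}
  = \frac{\partial}{\partial x}\!\left[U'(x)\,P\right]
  + D\,\frac{\partial^{2} P}{\partial x^{2}} ,
% whose stationary solution is
P_{\mathrm{st}}(x) = \frac{1}{Z}\, e^{-U(x)/D} .
% The probability of reporting, say, the left-oriented projection is then
% the stationary weight of the corresponding well:
P_{\mathrm{left}} = \int_{-\infty}^{0} P_{\mathrm{st}}(x)\, dx .
```

Fitting a measured choice probability to such an expression for several contrast values is the kind of least-squares procedure from which a subject-specific noise parameter D can be extracted.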
Procedia PDF Downloads 445
115 Analysis in Mexico on Workers Performing Highly Repetitive Movements with Sensory Thermography on the Surface of the Wrists and Elbows
Authors: Sandra K. Enriquez, Claudia Camargo, Jesús E. Olguín, Juan A. López, German Galindo
Abstract:
Companies currently face a growing number of cumulative trauma disorders (CTDs), which are increasing significantly due to the Highly Repetitive Movements (HRM) performed at workstations and which cause economic losses to businesses through the temporary and permanent disabilities of workers. This analysis focuses on the prevention of disorders caused by repetitiveness, duration and effort, and on reducing cumulative trauma disorders as occupational diseases by using sensory thermography as a non-invasive method to evaluate the injuries workers could sustain from performing repetitive motions. Objectives: The aim is to define rest periods or job rotations before a CTD develops, using sensory thermography to analyze changes in the temperature patterns on the wrists and elbows while the worker performs HRM over a period of 2 hours and 30 minutes. Information on non-work variables, such as wrist and elbow injuries, weight, gender and age, and on work variables, such as workspace temperature, repetitiveness and duration, was also collected. Methodology: The analysis was conducted on 4 industrial designers, specifically 2 men and 2 women in normal health, at a company over a period of 12 days, using the following time ranges: on the first day, for every 90 minutes of continuous work the subjects were asked to rest 5 minutes; on the second day, for every 90 minutes of continuous work they were asked to rest 10 minutes; and likewise for 60 and 30 minutes of continuous work. Each worker was tested with 6 different ranges at least twice. This analysis was performed in a room with a controlled temperature between 20 and 25 °C, allowing more than 20 minutes at the beginning and end of the analysis for the temperature of the wrists and elbows to stabilize.
Results: The range of 90 minutes of continuous work with 5 minutes of rest is where the maximum temperature (Tmax) was registered on the wrists and elbows in the office: Tmax was 35.79 °C, with a difference of 2.79 °C between the initial and final temperatures of the left elbow, observed in subject 4 at minute 86 of the 90-minute continuous work and 5-minute rest range. Conclusions: With sensory thermography as an alternative technology, it is possible to predict rotation or rest ranges for the prevention of CTDs in HRM work activities, thereby reducing occupational disease and the charges levied by health agencies and increasing the quality of life of workers, giving this technology an acceptable cost-benefit ratio in the future.
Keywords: sensory thermography, temperature, cumulative trauma disorder (CTD), highly repetitive movement (HRM)
Procedia PDF Downloads 429
114 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today’s HTS materials are mature and commercially promising but require manufacturing attention. In particular, given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to address the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets are restricted to simple pancake configurations. HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which allows the minimized reactance required by power utilities. A fully transposed SC cable, in theory, has no transmission length limits for AC and variable transient operation due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving manufacturing problems by developing automated manufacturing to produce the first-ever reliable and utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 140
113 Application of Ground Penetrating Radar and Light Falling Weight Deflectometer in Ballast Quality Assessment
Authors: S. Cafiso, B. Capace, A. Di Graziano, C. D’Agostino
Abstract:
Systematic monitoring of the trackbed is necessary to assure safety and quality of service in the railway system. Moreover, to produce effective management of maintenance treatments, the assessment of the bearing capacity of the railway trackbed must include the ballast, sub-ballast and subgrade layers at different depths. Consequently, there is increasing interest in obtaining a consistent measure of ballast bearing capacity with non-destructive tests (NDTs) able to work within the physical and time restrictions of railway tracks in operation. Moreover, in the case of local railways with reduced gauge, the use of traditional high-speed track monitoring systems is not feasible. In that framework, this paper presents results from an on-site investigation carried out on ballast and sleepers with Ground Penetrating Radar (GPR) and a Light Falling Weight Deflectometer (LWD). This equipment is currently used in road pavement maintenance, where it has shown its reliability and effectiveness. The application of such non-destructive tests in railway maintenance is promising but still at an early stage of investigation. More specifically, the LWD was used to estimate the stiffness of the ballast and of the sleeper support as well. The LWD, despite the limited load (6 kN in the trial test) applied directly on the sleeper, was able to detect defects in the bearing capacity at the sleeper/ballast interface. A dual-frequency GPR was applied to detect the presence of layer discontinuities at different depths due to fouling phenomena, which are the main causes of changes in the layer dielectric properties within the ballast thickness. The 2000 MHz frequency provided high-resolution data to approximately 0.4 m depth, while the 600 MHz frequency showed greater depth penetration, up to 1.5 m.
In the paper, a literature review and the on-site trial experience are used to identify the Strengths, Weaknesses, Opportunities, and Threats (SWOT analysis) of applying GPR and LWD to the assessment of the bearing capacity of railway trackbeds.
Keywords: bearing capacity, GPR, LWD, non-destructive test, railway track
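The depth figures quoted for the two GPR frequencies come from the standard travel-time conversion: a reflection arriving after a two-way travel time t from a medium with relative permittivity eps_r lies at depth v·t/2, with v = c/√eps_r. A hedged sketch (the eps_r value below is a typical assumption for clean ballast, not the paper’s measurement):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def reflector_depth(two_way_time_ns, eps_r):
    """Depth (m) of a GPR reflector from its two-way travel time (ns)
    in a medium of relative permittivity eps_r."""
    v = C / math.sqrt(eps_r)                    # wave speed in the medium, m/s
    return v * (two_way_time_ns * 1e-9) / 2.0   # divide by 2: down-and-back path

# Assumed example: clean ballast with eps_r ~ 4 gives v ~ 0.15 m/ns,
# so a 5 ns echo corresponds to roughly 0.37 m depth.
d = reflector_depth(5.0, 4.0)
```

Fouling raises eps_r (more fines and moisture), slowing the wave; this is why the dielectric contrast is detectable at layer interfaces.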
Procedia PDF Downloads 128
112 Needle Track Technique in Strabismus Surgery
Authors: Seema Dutt Bandhu, Yashi Bansal, Tania Moudgil, Barinder Kaur
Abstract:
Introduction: Scleral perforation during the passage of the suture needle is a known complication of strabismus surgery. The present study was conducted to develop a safe and easy technique for passing the suture needle through the sclera: a scleral tunnel was created with a 26-gauge needle, through which the suture needle was passed. The rest of the steps of strabismus surgery were carried out as usual. Material and Methods: After taking clearance from the Institutional Ethics Committee, an interventional study was carried out on twenty patients. The scleral tunnel technique was performed on strabismus patients after taking written informed consent. Before passing the suture needle through the sclera during strabismus surgery, a tunnel through approximately half the thickness of the sclera was created with the help of a bent 26-gauge needle. The suture needle was then passed through this tunnel. The rest of the steps of the surgery were carried out in the conventional manner. In a control group of the same number of patients, the surgery was performed by the conventional method. Both groups were followed up for any complications. Ease of passing the suture and the surgeons’ satisfaction with the technique were noted on a Likert scale. Results: None of the patients in either group suffered any complications. Four surgeons participated in the study. The average Likert scale score of the surgeons for satisfaction with the technique was 4.5 on a scale of 5. The score for ease of passage of the suture needle was 5 on a scale of 5. Discussion: Scleral perforation while passing sutures through the sclera is a known complication of strabismus surgery, with a reported incidence of 7.8%. It occurs due to inappropriate engagement of the scleral tissue or passage of the suture needle along a wrong axis during the process of passing the suture needle.
The needle track technique eases the passage of the suture needle through the sclera, as the scleral tissue can be engaged with greater control using the 26-gauge needle. The surgeons reported that they were highly satisfied with the technique and that it eased the passage of the suture needle through the sclera.
Keywords: suture, scleral tunnel, strabismus, scleral perforation
Procedia PDF Downloads 79
111 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection testing. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and the disadvantages of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, a hollow insulating sleeve was designed, which helped to greatly decrease lateral heat transfer, and the slug with the hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. Meanwhile, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature rise rate as soon as possible results in a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model; the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on the heat flux calibration alone.
A numerical calculation correction method was also proposed, based on the heat flux calibration and the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been determined. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability of the improved slug calorimeter is better than 3%, the deviation between measurements from different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gardon gage is less than 4% in the same flow field.
Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
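The energy-conservation principle behind the slug calorimeter reduces, in its textbook form, to estimating the heat flux from the slug’s temperature rise rate: q = ρ·c·L·dT/dt for a slug of density ρ, specific heat c, and thickness L. The sketch below illustrates this standard relation only; the material values and the fitting window are assumptions, not the paper’s corrected method:

```python
import numpy as np

def slug_heat_flux(t, T, rho=8940.0, c=385.0, L=0.005):
    """Estimate heat flux (W/m^2) from a slug temperature-rise record.

    rho, c, L are assumed copper-slug values (kg/m^3, J/(kg K), m).
    The slope dT/dt is taken from a least-squares line over the supplied
    (early, nearly linear) part of the rise, consistent with the point
    above that extracting the rise rate as soon as possible limits the
    error from lateral heat losses.
    """
    slope = np.polyfit(t, T, 1)[0]  # dT/dt in K/s
    return rho * c * L * slope      # energy balance: q = rho*c*L*dT/dt

# Synthetic record: a 1 MW/m^2 flux on a 5 mm copper slug gives
# dT/dt = q / (rho*c*L) ~ 58.1 K/s.
t = np.linspace(0.0, 0.5, 50)                       # s
T = 300.0 + (1e6 / (8940.0 * 385.0 * 0.005)) * t    # K
q = slug_heat_flux(t, T)
```

The paper’s contribution is precisely in correcting this idealized estimate for contact resistance and residual lateral losses.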
Procedia PDF Downloads 118
110 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations rely on intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and the analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The impulse excitation test estimates the Young’s modulus, shear modulus and Poisson’s ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (better roughness using cutting forces that do not compromise the material structure and the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
Keywords: CNC machining, crystal glass, cutting forces, hardness
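As an illustration of the smaller-is-better Taguchi analysis this abstract describes, the sketch below assigns the three cutting parameters to a standard L9 orthogonal array and picks the level combination that maximises the signal-to-noise ratio of surface roughness. All roughness values and the resulting optimum are invented for illustration; the study's actual measurements and ANOVA step are not reproduced.

```python
# Illustrative Taguchi-style analysis: choose cutting-parameter levels that
# minimise surface roughness via a smaller-is-better signal-to-noise ratio.
# The L9 array assignment and roughness values are hypothetical, not the
# paper's data.
import math

# L9 orthogonal array: each row = (feed-rate level, speed level, depth level)
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

# Hypothetical measured surface roughness Ra (um) for each of the 9 runs
ra = [1.8, 1.5, 1.2, 1.6, 1.1, 1.9, 1.0, 2.0, 1.4]

# Smaller-is-better S/N ratio: -10*log10(mean(y^2)); one replicate per run here
sn = [-10 * math.log10(y ** 2) for y in ra]

def best_level(factor):
    """Average S/N per level of one factor; the highest mean S/N wins."""
    means = []
    for level in range(3):
        vals = [sn[i] for i, row in enumerate(L9) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means.index(max(means))

optimum = tuple(best_level(f) for f in range(3))
print("optimum levels (feed, speed, depth):", optimum)
```

In a full analysis, ANOVA on the same table would then apportion the variance in roughness among the three factors to judge which differences are significant.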
Procedia PDF Downloads 154
109 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics
Authors: Julia Zimmerman, Gaurav Savant
Abstract:
This study aims to evaluate the efficacy of the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) for modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications. However, it has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of a combined 1D/2D unsteady flow hydraulic model using HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA's Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave equations (DW), the Shallow Water equations with the Eulerian-Lagrangian method (SWE-ELM), and the Shallow Water equations with the Eulerian method (SWE-EM), and compared for both accuracy and computational resources required. In general, the Diffusion Wave results were found to be comparable to those of the two Shallow Water equation sets while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the greater length of the modeled time frame and the greater complexity of the estuarine system.
The models did not accurately capture tidal effects within the 1D flow area.
Keywords: Delaware Bay, estuarine hydraulics, Galveston Bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling
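A minimal sketch of the validation metric reported above: the coefficient of determination between simulated and observed water surface elevations at a gauge. The eight tidal-looking values below are invented for illustration, not CO-OPS records or model output.

```python
# Coefficient of determination R^2 between observed and simulated water
# surface elevations, as used to validate the HEC-RAS runs against gauge
# data. Series below are made-up illustration numbers.

def r_squared(observed, simulated):
    """R^2 = 1 - SS_res / SS_tot, computed against the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    return 1.0 - ss_res / ss_tot

obs = [0.10, 0.45, 0.80, 0.55, 0.15, -0.20, -0.50, -0.25]  # m, tidal signal
sim = [0.12, 0.40, 0.75, 0.58, 0.20, -0.15, -0.45, -0.30]  # m, model output

print(f"R^2 = {r_squared(obs, sim):.3f}")
```

An R² near 1 means the simulated tide tracks the gauge closely, which is how the 0.94 (Delaware Bay) and 0.83 (Galveston Bay) figures should be read.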
Procedia PDF Downloads 199
108 Interactions and Integration: Implications of Victim-Agent Portrayals for Refugees and Asylum Seekers in Germany
Authors: Denise Muro
Abstract:
Conflict in Syria, which has produced over 11 million displaced persons, has drawn global attention to displacement. Although neighboring countries have borne the largest part of the displacement burden, due to the influx of refugees into Europe, the so-called 'refugee crisis' is taking place on two fronts: Syria's neighboring countries, hosting millions of refugees, and Europe, a destination goal for so many that European states face unprecedented challenges. With increasing attention to displacement, forcibly displaced persons are consistently portrayed either as un-agentic victims or as dangerous free agents. Recognizing that these dominant portrayals involve discourses of power and inequality, this research investigates the extent to which this victim-agent dichotomy affects refugees and the organizations that work closely with them during initial integration processes in Berlin, Germany. The research measures initial integration based on German integration policy measures juxtaposed with the way refugees and those who work with them understand integration. Additionally, the study examines the day-to-day interactions of refugees in Germany as a way to gauge social integration in a bottom-up approach. This study involved a discourse analysis of portrayals of refugees as well as participant observation and interviews with refugees and those who work closely with them, conducted during fieldwork in Berlin in the summer of 2016. Germany is unique regarding its migration history and lack of successful integration, in part due to the persistent refrain, 'Wir sind kein Einwanderungsland' ('We are not an immigration country'). Still, its accepted asylum seeker population has grown exponentially in the past few years. Findings suggest that the victim-agent dichotomy is present and impactful in the process of refugees entering and integrating into Germany.
Integration is hindered due to refugees either being patronized or criminalized to such an extent that, despite being constantly told that they must integrate, they cannot become part of German society.
Keywords: discourse analysis, Germany, integration, refugee crisis
Procedia PDF Downloads 273
107 Comparative Study of Various Treatment Positioning Technique: A Site Specific Study-CA. Breast
Authors: Kamal Kaushik, Dandpani Epili, Ajay G. V., Ashutosh, S. Pradhaan
Abstract:
Introduction: Radiation therapy has come a long way over a period of decades, from 2-dimensional radiotherapy to intensity-modulated radiation therapy (IMRT) and VMAT. For advanced radiation therapy, better patient position reproducibility is needed to deliver precise, high-quality treatment, which raises the need for better image guidance technologies for precise patient positioning. This study presents a two-tattoo simulation with roll correction technique that is comparable to other advanced patient positioning techniques. Objective: This site-specific study aims to compare various treatment positioning techniques used for patients with carcinoma of the breast undergoing radiotherapy. In this study, we compare 5 different positioning methods used for the treatment of breast cancer, namely i) Vacloc with 3 tattoos, ii) breast board with three tattoos, iii) thermoplastic cast with three fiducials, iv) breast board with a thermoplastic mask with 3 tattoos, v) breast board with 2 tattoos (a roll correction method). Methods and material: An all-in-one (AIO) immobilization solution was used in all patient positioning techniques. The two-tattoo simulation involves positioning the patient with the help of a thoracic-abdomen wedge, armrest and knee rest. After proper patient positioning, two tattoos are marked on the treatment side of the patient. Fiducials are then placed as per the clinical border markers: (1) sternal notch (lower border of the clavicle head), (2) 2 cm below the contralateral breast, (3) midline between markers 1 and 2, (4) mid-axillary on the same axis as marker 3 (markers 3 and 4 should be on the same axis). During plan implementation, a roll depth correction is applied as per the anterior and lateral positioning tattoos, followed by the shifts required for the isocentre position.
The shifts are then verified by SSD on the patient surface, followed by radiographic verification using cone beam computed tomography (CBCT). Results: When all five positioning techniques were compared in terms of the produced shifts in the vertical, longitudinal and lateral directions, the observations clearly suggest that the average longitudinal shifts in the two-tattoo roll correction technique are smaller than in every other patient positioning technique. Vertical and lateral shifts are also comparable to other modern positioning techniques. Conclusion: The two-tattoo simulation with roll correction technique provides a better patient setup with a technique that can be implemented easily in most radiotherapy centers across developing nations where 3D verification techniques are not available alongside delivery units, as the shifts observed are quite minimal and are comparable to those achieved with Vacloc and modern amenities.
Keywords: Ca. breast, breast board, roll correction technique, CBCT
Procedia PDF Downloads 135
106 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada
Authors: Bilel Chalghaf, Mathieu Varin
Abstract:
Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (> 17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different modeling techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR
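To make the model comparison concrete, here is a hedged sketch of the simplest classifier in the list, k-NN, applied to the first level of the hybrid approach (broadleaf vs. conifer). The two object-level features (a spectral index and a LiDAR height metric) and all training values are synthetic stand-ins, not variables from the WorldView-3/LiDAR dataset; features are used unscaled purely for brevity.

```python
# Toy k-nearest-neighbour classifier separating broadleaf from conifer crowns
# using two made-up object-level variables. Values are synthetic, not the
# Kenauk Nature data.
import math

train = [((0.82, 21.0), "broadleaf"), ((0.78, 19.5), "broadleaf"),
         ((0.75, 24.0), "broadleaf"), ((0.55, 18.0), "conifer"),
         ((0.50, 22.5), "conifer"),   ((0.58, 20.0), "conifer")]

def knn_predict(x, k=3):
    """Majority vote among the k training crowns closest in feature space."""
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(knn_predict((0.80, 20.0)))
```

In the study itself, each of the five model families was tuned separately per level, and accuracy was summarised with OA and Kappa rather than a single prediction.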
Procedia PDF Downloads 134
105 Studies on Organic and Inorganic Micro/Nano Particle Reinforced Epoxy Composites
Authors: Daniel Karthik, Vijay Baheti, Jiri Militky, Sundaramurthy Palanisamy
Abstract:
Fibre-based nanoparticles are presently considered one of the potential filler materials for the improvement of the mechanical and physical properties of polymer composites. Due to the high matrix-filler interfacial area, there will be uniform and homogeneous dispersion of nanoparticles. In micro/nano filler reinforced composites, the resin material is usually tailored with organic or inorganic nanoparticles to obtain improved matrix properties. The objective of this study was to compare the reinforcement potential of different organic and inorganic micro/nano fillers in epoxy composites. Industrial and agricultural waste of fibres like Agave americana, cornhusk, jute, basalt, carbon, glass and fly ash was utilized to prepare micro/nano particles. Micro/nano particles were obtained using a high energy planetary ball milling process in dry condition. Milling time and ball size were kept constant throughout the ball milling process. Composites were fabricated by the hand lay-up method. Filler loading was kept constant at 3 wt.% for all composites. Dynamic mechanical properties of the nanocomposite films were measured in three-point bending mode with a gauge length and sample width of 50 mm and 10 mm, respectively. The samples were subjected to oscillating frequencies of 1 Hz, 5 Hz and 10 Hz at 100% oscillating amplitude in the temperature range of 30°C to 150°C at a heating rate of 3°C/min. Damping was found to be higher with the jute composites. Amongst the organic fillers, the lowest damping factor was observed with Agave americana particles, which means that Agave americana fibre particles have better interface adhesion with epoxy resin. Basalt, fly ash and glass particles have almost similar damping factors, confirming better interface adhesion with epoxy.
Keywords: ball milling, damping factor, matrix-filler interface, particle reinforcements
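The damping factor compared across the fillers is the ratio of the loss modulus to the storage modulus from the DMA sweep, tan(δ) = E''/E'; a lower value at a given temperature and frequency is read here as better matrix-filler adhesion. A minimal numeric illustration with invented moduli:

```python
# Damping factor tan(delta) = E''/E' from a dynamic mechanical sweep.
# Moduli below are invented illustration values, not measured data.
storage_modulus = 2.8e9   # E' (Pa), elastic response, hypothetical
loss_modulus = 1.4e8      # E'' (Pa), viscous response, hypothetical

tan_delta = loss_modulus / storage_modulus
print(f"tan(delta) = {tan_delta:.3f}")  # lower -> less damping, stiffer interface
```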
Procedia PDF Downloads 264
104 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)
Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier
Abstract:
Influenza viruses cause flu epidemics every year and serious pandemics at larger time intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual revaccination. For a correct vaccination recommendation, the circulating influenza strains have to be detected promptly and exactly and characterized according to their antigenic properties. During the 2016/17 flu season, a wrong vaccination recommendation was given because of the long time interval between identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic during the following winter. Due to such recurring incidents of vaccine mismatches, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes as part of this process chain is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat smears) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for the determination of the antigen profiles of circulating virus strains. These tests also present difficulties in standardization and reproducibility, which restricts the significance of the results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The differentiation of the viruses takes place via a set of specifically designed peptidic recognition molecules which interact differently with the different influenza virus subtypes. The differentiation of influenza subtypes is performed by pattern recognition guided by machine learning algorithms, without any animal experiments.
Synthetic peptides are immobilized in multiplex format on various platforms (e.g., 96-well microtiter plate, microarray). Afterwards, the viruses are incubated and analyzed comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2, H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. Thereby, the platform could be capable of replacing the current antigenic characterization of influenza strains using ferret hyperimmune sera.
Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance
Procedia PDF Downloads 156
103 Microstructure Dependent Fatigue Crack Growth in Aluminum Alloy
Authors: M. S. Nandana, K. Udaya Bhat, C. M. Manjunatha
Abstract:
In this study, aluminum alloy 7010 was subjected to three different ageing treatments, i.e., peak ageing (T6), over-ageing (T7451), and retrogression and re-ageing (RRA), to study the influence of precipitate microstructure on the fatigue crack growth rate behavior. The microstructural modification was studied using a transmission electron microscope (TEM) to examine the change in the size and morphology of precipitates in the matrix and on the grain boundaries. Standard compact tension (CT) specimens were fabricated and tested under constant amplitude fatigue crack growth tests to evaluate the influence of heat treatment on the fatigue crack growth rate properties. The tests were performed in a computer-controlled servo-hydraulic test machine applying a load ratio R = 0.1 at a loading frequency of 10 Hz as per ASTM E647. The fatigue crack growth was measured by the compliance technique using a CMOD gauge attached to the CT specimen. The average size of the matrix precipitates was found to be 16-20 nm in T7451, 5-6 nm in RRA and 2-3 nm in T6 conditions, respectively. The grain boundary precipitate, which was continuous in T6, was disintegrated in the RRA and T7451 conditions. The PFZ width was lower in RRA compared to the T7451 condition. The crack growth rate was highest in T7451 and lowest in the RRA-treated alloy. The RRA-treated alloy also exhibits an increase in the threshold stress intensity factor range (∆Kₜₕ). The ∆Kₜₕ measured was 11.1, 10.3 and 5.7 MPa·m¹/² in the RRA, T6 and T7451 alloys, respectively. The fatigue crack growth rate in the RRA-treated alloy was nearly 2-3 times lower than that in T6 and was one order of magnitude lower than that observed in the T7451 condition. The surface roughness of the RRA-treated alloy was more pronounced when compared to the other conditions. The reduction in fatigue crack growth rate in the RRA alloy was majorly due to the increase in roughness and partially due to the increase in spacing between the matrix precipitates.
The reduction in crack growth rate and increase in threshold stress intensity range is expected to benefit the damage tolerant capability of aircraft structural components under service loads.
Keywords: damage tolerance, fatigue, heat treatment, PFZ, RRA
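The growth rates compared above lie in the stable regime commonly described by the Paris law, da/dN = C(ΔK)ᵐ. The sketch below evaluates this relation with generic placeholder constants, not values fitted to the 7010 data, to show how a lower rate at the same ΔK follows directly from the growth law.

```python
# Illustrative Paris-law calculation of the kind underlying da/dN vs dK
# curves: crack growth per cycle da/dN = C * (dK)^m. The constants are
# generic placeholders for an aluminium alloy, not fitted values from
# this study.

C = 1.0e-11   # m/cycle per (MPa*sqrt(m))^m, hypothetical
m = 3.0       # Paris exponent, hypothetical

def dadn(delta_k):
    """Fatigue crack growth rate (m/cycle) at stress intensity range dK."""
    return C * delta_k ** m

# A lower growth rate at the same dK (as reported for the RRA temper)
# translates directly into more cycles for a crack to traverse the part.
for dk in (6.0, 10.0, 15.0):
    print(f"dK = {dk:5.1f} MPa*sqrt(m) -> da/dN = {dadn(dk):.2e} m/cycle")
```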
Procedia PDF Downloads 154
102 Development and Validation of a Semi-Quantitative Food Frequency Questionnaire for Use in Urban and Rural Communities of Rwanda
Authors: Phenias Nsabimana, Jérôme W. Some, Hilda Vasanthakaalam, Stefaan De Henauw, Souheila Abbeddou
Abstract:
Tools for dietary assessment in adults are limited in low- and middle-income settings. The objective of this study was to develop and validate a semi-quantitative food frequency questionnaire (FFQ) against the multiple-pass 24-hour recall tool for use in urban and rural Rwanda. A total of 212 adults (154 females and 58 males), aged 18-49, including 105 urban and 107 rural residents from the four regions of Rwanda, were recruited into the present study. The multiple-pass 24-hour recall technique was used to collect dietary data in both urban and rural areas in four different rounds, on different days (one weekday and one weekend day), separated by a period of three months, from November 2020 to October 2021. The details of all the foods and beverages consumed over the 24-hour period of the day prior to the interview day were collected during face-to-face interviews. A list of foods, beverages, and commonly consumed recipes was developed by the study researchers and ten research assistants from the different regions of Rwanda. Non-standard recipes were collected when the information was available. A single semi-quantitative FFQ was also developed in the same group discussion prior to the beginning of data collection. The FFQ was collected at the beginning and the end of the data collection period. Data were collected digitally. The amount of energy and macronutrients contributed by each food, recipe, and beverage will be computed based on the nutrient composition reported in food composition tables and the weight consumed. Median energy and nutrient contents of food intakes from the FFQ and 24-hour recalls and median differences (24-hour recall - FFQ) will be calculated. Kappa, Spearman, Wilcoxon, and Bland-Altman plot statistics will be used to evaluate the agreement between estimated nutrient and energy intakes found by the two methods. Differences will be tested for significance, and all analyses will be done with STATA 11.
Data collection was completed in November 2021. Data cleaning is ongoing, and the data analysis is expected to be completed by July 2022. A developed and validated semi-quantitative FFQ will be available for use in dietary assessment. The developed FFQ will help researchers collect reliable data that will support policy makers in planning proper dietary change interventions in Rwanda.
Keywords: food frequency questionnaire, reproducibility, 24-H recall questionnaire, validation
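Of the planned statistics, the Bland-Altman analysis can be sketched in a few lines: the bias is the mean FFQ-minus-recall difference and the 95% limits of agreement are bias ± 1.96 SD of the differences. The intake values below are invented illustration numbers, not study data, and the study itself plans to run these analyses in STATA 11.

```python
# Bland-Altman agreement statistics between two dietary assessment methods:
# mean bias and 95% limits of agreement for a nutrient. Intakes are invented.
import statistics

ffq = [2100, 1850, 2400, 1950, 2250, 2000]     # energy intake, kcal/day
recall = [2000, 1900, 2300, 2050, 2150, 1950]  # paired 24-h recall values

diffs = [f - r for f, r in zip(ffq, recall)]
bias = statistics.mean(diffs)                  # systematic difference
sd = statistics.stdev(diffs)                   # spread of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # 95% limits of agreement

print(f"bias = {bias:.1f} kcal, LoA = ({loa[0]:.1f}, {loa[1]:.1f})")
```

In the usual Bland-Altman plot, each difference is plotted against the pair's mean, with horizontal lines at the bias and the two limits.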
Procedia PDF Downloads 141
101 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement
Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian
Abstract:
The analysis of traditional Chinese medicine (TCM) supplements has always been a laborious task, particularly in the case of multi-ingredient formulations. Traditionally, herbal extracts are analysed using one or a few marker compounds. In recent years, however, pharmaceutical companies have been introducing health supplements of TCM active ingredients to cater to the needs of consumers in today's fast-paced society. As such, new problems arise in the aspects of composition identification as well as quality analysis. In most products or supplements formulated with multiple TCM herbs, the chemical composition and nature of each raw material differ greatly from the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer chromatography (TLC) is a simple, cost-effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for the identification and authentication of herbs and as a great analytical tool for the discovery of chemical compositions in herbal extracts. Recent technical advances introduced high-performance TLC (HPTLC), where, with the help of automated equipment and improvements in the chromatographic materials, both the quality and reproducibility are greatly improved, allowing for highly standardised analysis with greater detail. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards.
By comparing the TLC profiles of the supplement to the extracts of the herbs reported on the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi-ingredient formulation using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements into the market, it is crucial that the quality of both raw materials and end products be well assured for the protection of consumers. With the technology of HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.
Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality
Procedia PDF Downloads 280
100 Science Anxiety Levels in Emirati Pre-Service Teachers
Authors: Martina Dickson, Hanadi Kadbey, Melissa Mcminn
Abstract:
Research has shown that anxiety and trepidation towards learning about science are prevalent among elementary school teachers in Western countries. It has also been shown repeatedly that pre-service and in-service teachers who show signs of anxiety towards science are a) less likely to teach it at all, where they have some autonomy over this, b) less likely to teach it effectively, and c) that, ultimately, their students have lower attainment scores in science. It is therefore critically important to gauge pre-service teachers' science anxiety levels early on, whilst there are still possibilities to overturn some of the reasons behind these fears and avert these serious issues occurring later on. This study takes place in the capital of the United Arab Emirates (U.A.E.) in the context of training local elementary school teachers. In the U.A.E., where Emirati teachers are already in the vast minority and attrition rates are high, it is important to offer as much support to pre-service teachers as possible. If pre-service teachers are graduating with high levels of science anxiety unabated, according to the research, there is a very real concern that, as generalist primary school teachers, their science teaching will be far from optimal. The aims of this research study were to ascertain the science anxiety levels of pre-service elementary teachers and to identify particular areas of their science anxiety, if appropriate. We surveyed 200 Emirati pre-service teachers and found that levels of science anxiety were directly related to their perceptions of performance in science exams, laboratory experiments and inquiry approaches to science learning. Whilst some studies have shown that science anxiety can decrease as students gain confidence in science knowledge by studying courses, we did not see this effect in our study.
This is based upon a theoretical framework which holds that, in some cases, science anxiety is related to lack of exposure to, or insecurity with, science content itself, which is alleviated by the students' covering of material and greater confidence in the subject. Exploring this variable allowed us to examine whether students educated in schools influenced by the educational reform in Abu Dhabi have differing science anxiety levels from those who were educated prior to the reforms. We discuss the possible implications of these findings for the future teaching of science in Abu Dhabi public schools.
Keywords: pre-service teachers, science anxiety, United Arab Emirates, educational reform
Procedia PDF Downloads 333
99 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis
Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin
Abstract:
With their high theoretical energy density (~2600 Wh·kg⁻¹), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms/equilibria involved. Replacing S8 with Li2S as the active material allows the use of safer negative electrodes, like silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, during the first charge, a high polarization and a lack of reproducibility between tests are observed. Differences observed between the raw Li2S material (micron-sized) and that electrochemically produced in a battery (nano-sized) may indicate that the electrochemical process depends on the particle size. The major focus of the presented work is therefore to deepen the understanding of the Li2S material charge mechanism, and more precisely to characterize the effect of the initial Li2S particle size both on the mechanism and on the electrode preparation process. To do so, Li2S nanoparticles were synthesized in two ways: a liquid-path synthesis and dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles could effectively suppress the high initial polarization but also influence the electrode slurry preparation. Indeed, it has been shown that the classical formulation process - a slurry composed of polyvinylidene fluoride polymer dissolved in N-methyl-2-pyrrolidone - cannot be used with Li2S nanoparticles. This reveals a completely different behavior of the Li2S material with regard to polymers and organic solvents at the nanometric scale. The coupling of two operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), was then carried out in order to interpret the poorly understood first charge.
This study discloses that the initial particle size of the active material has a great impact on the working mechanism and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.
Keywords: Li-ion/sulfur batteries, Li2S nanoparticles effect, operando characterizations, working mechanism
Procedia PDF Downloads 266
98 Estimation of Rock Strength from Diamond Drilling
Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi
Abstract:
The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material's rock strength. Common laboratory tests such as the rod mill, ball mill, and uniaxial compressive strength tests share common shortcomings in terms of time, sample preparation, cost, bias in plug selection, repeatability, and the sample amount required to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (polycrystalline diamond compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, or the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure, which is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data.
The third section presents the results and shows that the intrinsic specific energy correlates well to the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study from drilling data recorded while drilling an exploration well in Australia.
Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength
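As a rough illustration of the specific-energy idea this methodology builds on, the sketch below evaluates the classic Teale mechanical specific energy, E = WOB/A + 2πNT/(A·ROP), i.e. the energy spent per unit volume of rock drilled, from thrust, torque, rotation speed and penetration rate. Note this is the generic textbook formula with invented numbers, not the Franca et al. (2015) bit-rock interface model itself nor the paper's corrections for drill string compliance and borehole friction.

```python
# Teale-style mechanical specific energy from drilling data: energy per unit
# volume drilled. All input numbers are hypothetical illustration values.
import math

def specific_energy(wob_n, torque_nm, rpm, rop_m_per_h, bit_area_m2):
    """Energy per unit volume drilled (Pa = J/m^3)."""
    rop_m_per_s = rop_m_per_h / 3600.0
    omega = 2.0 * math.pi * rpm / 60.0            # angular velocity, rad/s
    thrust_term = wob_n / bit_area_m2             # axial contribution
    rotary_term = torque_nm * omega / (bit_area_m2 * rop_m_per_s)
    return thrust_term + rotary_term

# Hypothetical core-drilling numbers: 5 kN thrust, 40 N*m torque, 600 rpm,
# 2 m/h penetration, 60 mm outer / 45 mm inner diameter core head.
area = math.pi / 4 * (0.060**2 - 0.045**2)        # annular cutting face, m^2
e = specific_energy(5000, 40, 600, 2.0, area)
print(f"specific energy ~ {e / 1e6:.0f} MPa")
```

For an ideally sharp tool, the paper's intrinsic specific energy strips out the frictional part of this total, which is what makes it comparable to the uniaxial compressive strength.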
97 Studying the Simultaneous Effect of Petroleum and DDT Pollution on the Geotechnical Characteristics of Sands
Authors: Sara Seyfi
Abstract:
DDT and petroleum contamination in coastal sand alters the physical and mechanical properties of the contaminated soils. This article aims to understand the effects of DDT and petroleum pollution on the geotechnical characteristics of three sand groups: sand, silty sand, and clay sand. First, previous studies on the topic are reviewed. In the initial stage of the tests, the sands used (sand, silty sand, clay sand) are identified by FTIR, µ-XRF, and SEM methods. Then, the geotechnical characteristics of these sand groups, including density, permeability, shear strength, compaction, and plasticity, are investigated using the sand cone test, head permeability test, vane shear test, strain gauge penetrometer, and plastic limit test. The sand groups are artificially contaminated with petroleum substances at 1, 2, 4, 8, 10, and 12% by weight. In a separate experiment, 2, 4, 8, 12, 16, and 20 mg/liter of DDT are added to the sand groups. The identification analyses and geotechnical measurements are then performed on the contaminated samples. In the final tests, the above amounts of petroleum and DDT are added simultaneously to the sand groups, and the identification and measurement processes are repeated. The results showed that petroleum contamination reduced the optimum moisture content, permeability, and plasticity of all samples; the exception was the plasticity of silty sand, which petroleum increased at 1-4% contents and decreased at 8-12% contents. The dry density of sand and clay sand increased, but that of silty sand decreased. Likewise, the shear strength of sand and silty sand increased, but that of clay sand decreased. DDT contamination increased the maximum dry density and decreased the permeability of all samples; it also reduced the optimum moisture content of the sand. The shear strength of silty sand and clayey sand decreased, the plasticity of clayey sand increased, and that of silty sand decreased.
The simultaneous effect of petroleum and DDT pollution on the maximum dry density of sand and clayey sand was synergistic, while the effect on the plasticity of clayey sand and silty sand was antagonistic. The combined contamination also produced antagonism in the optimum moisture content of sand and in the shear strength of silty sand and clay sand. In the remaining cases, no synergistic or antagonistic effect was observed.
Keywords: DDT contamination, geotechnical characteristics, petroleum contamination, sand
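The synergy/antagonism reading used above can be sketched as a simple comparison of the measured combined change against the sum of the individual changes; the function name, the additive baseline, the tolerance, and the numbers are all illustrative assumptions, not the study's method:

```python
# Hedged sketch: classify the joint effect of two contaminants on a soil
# property by comparing the measured combined change with a naive additive
# expectation. All names/values are assumptions for illustration only.

def classify_interaction(delta_a, delta_b, delta_combined, tol=0.05):
    """delta_* are fractional changes relative to the clean-sand baseline."""
    expected = delta_a + delta_b            # simple additive expectation
    if delta_combined > expected + tol:
        return "synergistic"
    if delta_combined < expected - tol:
        return "antagonistic"
    return "additive"

# e.g. petroleum alone +4%, DDT alone +3%, combined +15% -> synergistic
print(classify_interaction(0.04, 0.03, 0.15))
```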
96 Real-Time Quantitative Polymerase Chain Reaction Assay for the Detection of microRNAs Using Bi-Directional Extension Sequences
Authors: Kyung Jin Kim, Jiwon Kwak, Jae-Hoon Lee, Soo Suk Lee
Abstract:
MicroRNAs (miRNAs) are a class of endogenous, single-stranded, small, non-protein-coding RNA molecules, typically 20-25 nucleotides long. They are thought to broadly regulate the expression of other genes by binding to the 3’-untranslated regions (3’-UTRs) of specific mRNAs. The detection of miRNAs is very important for understanding the function of these molecules and for the diagnosis of a variety of human diseases. However, miRNA detection is very challenging because of their short length and the high sequence similarities within miRNA families, so a simple-to-use, low-cost, and highly sensitive detection method is desirable. In this study, we demonstrate a novel bi-directional extension (BDE) assay. In the first step, a specific linear RT primer is hybridized to 6-10 base pairs from the 3’-end of a target miRNA molecule and then reverse transcribed to generate a cDNA strand. After reverse transcription, the cDNA was hybridized at its 3’-end to the BDE sequence and served as the PCR template. The template was amplified in a SYBR Green-based quantitative real-time PCR. To prove the concept, we used human brain total RNA; it could be detected quantitatively over a range of seven orders of magnitude with excellent linearity and reproducibility. To evaluate the performance of the BDE assay, we compared its sensitivity and specificity against a commercially available poly(A) tailing method, using let-7e miRNA extracted from A549 human epithelial lung cancer cells. The BDE assay performed well compared with the poly(A) tailing method in terms of specificity and sensitivity: the CT values differed by 2.5, and the melting curve was sharper than that of the poly(A) tailing method. We have demonstrated an innovative, cost-effective BDE assay that allows improved sensitivity and specificity in the detection of miRNAs.
The dynamic range of the SYBR Green-based RT-qPCR for miR-145 could be represented quantitatively over 7 orders of magnitude, from 0.1 pg to 1.0 μg of human brain total RNA. Finally, the BDE assay for the detection of miRNA species such as let-7e shows good performance compared with a poly(A) tailing method in terms of specificity and sensitivity. Thus, BDE provides a simple, low-cost, and highly sensitive assay for various miRNAs and should contribute significantly to research on miRNA biology and to disease diagnostics with miRNAs as targets.
Keywords: bi-directional extension (BDE), microRNA (miRNA), poly (A) tailing assay, reverse transcription, RT-qPCR
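A seven-order dynamic range of this kind is conventionally quantified with a qPCR standard curve, CT versus log10(input), whose slope gives the amplification efficiency via E = 10^(-1/slope) - 1. The CT values below are synthetic (built on the textbook-ideal slope of -3.3), not the paper's data:

```python
# Sketch of a qPCR standard-curve analysis over a 7-order dilution series.
# CT values are synthetic, assumed for illustration; slope -3.3 is the
# textbook value for ~100% amplification efficiency.
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m * mx

# Dilution series: 0.1 pg .. 1.0 ug of total RNA, expressed as log10(pg)
log_input = [-1, 0, 1, 2, 3, 4, 5, 6]
cts = [35.0, 31.7, 28.4, 25.1, 21.8, 18.5, 15.2, 11.9]  # synthetic CTs

slope, intercept = fit_line(log_input, cts)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
```

A slope near -3.3 (efficiency near 100%) over the full series is what "excellent linearity over seven orders of magnitude" corresponds to numerically.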
95 Influence of Microparticles in the Contact Region of Quartz Sand Grains: A Micro-Mechanical Experimental Study
Authors: Sathwik Sarvadevabhatla Kasyap, Kostas Senetakis
Abstract:
The mechanical behavior of geological materials is very complex, and this complexity is related to the discrete nature of soils and rocks. Grain-scale characteristics such as particle size and shape, surface roughness and morphology, and the particle contact interface are critical for evaluating and better understanding the behavior of discrete materials. This study investigates experimentally the micro-mechanical behavior of quartz sand grains, with emphasis on the influence of microparticles present in their contact region. The outputs of the study provide fundamental insights into the contact mechanics of artificially coated grains and can supply useful input parameters for the discrete element modeling (DEM) of soils. In nature, microparticles are commonly observed at the contact interfaces between real soil grains. This is usually the case in sand-silt and sand-clay mixtures, where the finer particles may create a coating on the surface of the coarser grains, altering in this way the micro-scale, and thus the macro-scale, response of the geological material. In this study, the micro-mechanical behavior of Leighton Buzzard Sand (LBS) quartz grains, with different microparticles interposed at their contact interfaces, is studied in the laboratory using an advanced custom-built inter-particle loading apparatus. Special techniques were adopted to develop the coating on the surfaces of the quartz sand grains so as to establish the repeatability of the coating technique. The microstructure of the coated particle surfaces was characterized by element composition analyses, microscopic images, surface roughness measurements, and single-particle crushing strength tests. The mechanical responses studied include the normal and tangential load-displacement behavior, the tangential stiffness behavior, and the normal contact behavior under cyclic loading.
The behavior of the coated LBS particles is compared across the different coating classes and with pure LBS (i.e., grains with surfaces cleaned to remove any microparticles). The damage on the surface of the particles was analyzed using microscopic images. Extended displacements in both the normal and tangential directions were observed for the coated LBS particles due to the plastic nature of the coating material, and these varied with the amount of coating. The tangential displacement required to reach steady state was delayed by the presence of microparticles in the contact region of grains under shearing. Increased tangential loads and coefficients of friction were observed for the coated grains in comparison to the uncoated quartz grains.
Keywords: contact interface, microparticles, micro-mechanical behavior, quartz sand
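As a point of reference for such normal contact measurements, DEM studies commonly start from Hertzian theory for two elastic spheres; coated grains then show a softer, more plastic response than this elastic baseline. A minimal sketch follows, using typical quartz properties as assumptions (not the apparatus' measured data):

```python
# Hertzian normal contact of two elastic spheres -- the standard elastic
# baseline in DEM, not the measured response of coated LBS grains.
# Material values below are typical quartz assumptions.
import math

def hertz_force(delta_m, E1, nu1, E2, nu2, R1, R2):
    """Hertzian normal contact force (N) at overlap delta_m (m)."""
    E_star = 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)  # effective modulus
    R_star = 1.0 / (1.0 / R1 + 1.0 / R2)                        # effective radius
    return (4.0 / 3.0) * E_star * math.sqrt(R_star) * delta_m ** 1.5

# Two quartz grains: E ~ 90 GPa, nu ~ 0.17, radius ~ 1.2 mm, overlap 1 um
F = hertz_force(1e-6, 90e9, 0.17, 90e9, 0.17, 1.2e-3, 1.2e-3)
print(f"F = {F:.2f} N")
```

The extended displacements reported for coated grains mean the measured force at a given overlap falls below this elastic prediction.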
94 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)
Authors: Eliane G. Tótoli, Hérida Regina N. Salgado
Abstract:
Daptomycin is an important antimicrobial agent in current clinical practice, since it is very active against some Gram-positive bacteria that pose particular challenges for medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). The importance of environmental preservation has received special attention in recent years. Considering the evident need to protect the natural environment and the introduction of strict quality requirements for analytical procedures used in pharmaceutical analysis, industry must seek environmentally friendly alternatives for the analytical methods and other processes in its routine. In view of these factors, green analytical chemistry is prevalent and encouraged nowadays, and in this context infrared spectroscopy stands out: it is a method that does not use organic solvents and, although formally accepted for the identification of individual compounds, it also allows the quantitation of substances. Considering that few green analytical methods are described in the literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After the infrared spectrum was obtained, and with the assistance of the IR Solution software, quantitative analysis was carried out in the spectral region between 1575 and 1700 cm-1, related to a carbonyl band of the daptomycin molecule; the height of this band was measured in terms of absorbance.
The method was validated according to ICH guidelines regarding linearity, precision (repeatability and intermediate precision), accuracy, and robustness. Results and discussion: The method proved to be linear (r = 0.9999), precise (RSD < 2.0%), accurate, and robust over a concentration range of 0.2 to 0.6 mg/pellet. In addition, this technique does not use organic solvents, a great advantage over the most common analytical methods. This fact helps minimize the generation of organic solvent waste by the industry and thereby reduces the impact of its activities on the environment. Conclusion: The validated method proved adequate to quantify daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, in line with the global trend.
Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region
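The quantitation step described above, band-height absorbance against drug amount per pellet, amounts to a linear calibration whose r value expresses the linearity claim. A sketch with synthetic absorbances (not the validated data) over the paper's 0.2-0.6 mg/pellet range:

```python
# Sketch of a Beer-Lambert-type linear calibration: carbonyl band height
# (absorbance) vs daptomycin amount per KBr pellet. Absorbance values are
# synthetic, for illustration only.
import math

def linreg(xs, ys):
    """Least-squares fit: returns (slope, intercept, correlation r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / math.sqrt(sxx * syy)

mg_per_pellet = [0.2, 0.3, 0.4, 0.5, 0.6]          # calibration range
absorbance = [0.101, 0.149, 0.202, 0.251, 0.298]   # synthetic band heights

slope, intercept, r = linreg(mg_per_pellet, absorbance)
unknown_abs = 0.175  # a hypothetical sample measurement
content = (unknown_abs - intercept) / slope
print(f"r = {r:.4f}; estimated content = {content:.3f} mg/pellet")
```

Inverting the calibration line, as in the last two lines, is how an unknown pellet's drug content would be read off from its measured band height.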
93 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations that enhance oil recovery while reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, providing data-driven insights for better designs and decisions in various engineering disciplines; however, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data.
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery, and greater economic return for future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
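The K-means partitioning step named in the abstract can be sketched in miniature; the two-feature toy "wells" below are invented for illustration, whereas the actual workflow operates on the full multi-well geological, completion, and production database:

```python
# Toy K-means sketch: partition wells into clusters on two normalized
# features. Data and feature names are assumptions for illustration, not
# the study's database.

def kmeans(points, k, iters=50):
    # Deterministic initialization: spread initial centers across the list
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # recompute each center as its group's mean (keep old center if empty)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Toy wells: (normalized fracture-stage count, normalized 12-month production)
wells = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.18),
         (0.80, 0.90), (0.85, 0.95), (0.90, 0.85)]
centers, groups = kmeans(wells, k=2)
print(sorted(len(g) for g in groups))
```

In the real workflow, the resulting clusters would feed the downstream PCA/EFA and optimization stages rather than being an end in themselves.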