Analysis and Modeling of Graphene-Based Percolative Strain Sensor
Authors: Heming Yao
Abstract:
Graphene-based percolative strain gauges could find applications in many areas, such as touch panels, artificial skins, or human motion detection, because of their advantages over conventional strain gauges, such as flexibility and transparency. These strain gauges rely on a novel sensing mechanism that depends on strain-induced morphology changes. When a compressive or tensile strain is applied to a graphene-based percolative strain gauge, the overlap area between neighboring flakes shrinks or grows, which is reflected in a considerable change in resistance. A tiny change in strain can thus produce a large change in the sensor resistance, which endows graphene-based percolative strain gauges with a high gauge factor. Despite ongoing research into the underlying sensing mechanism and the limits of sensitivity, it is still not well understood which intrinsic factors play the key role in setting the gauge factor, nor how the strain gauge sensitivity can be enhanced; such understanding would be considerably meaningful and would provide guidelines for designing novel, easily produced strain sensors with high gauge factors. Here, we simulated the straining process by modeling graphene flakes and their percolative networks. We constructed a 3D resistance network by simulating the overlapping of graphene flakes and interconnecting the large number of resistance elements obtained by subdividing each flake. As strain increased, the overlapping flakes were displaced on the stretched simulated film, and a new resistance network with a lower flake number density was formed. By solving this resistance network, we obtained the resistance of the simulated film under different strains.
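The core computational step described above, solving a resistor network for the film resistance and then forming the gauge factor GF = (ΔR/R0)/ε, can be sketched with a minimal nodal-analysis example in Python (NumPy assumed available). The tiny network, node numbering, and resistance values below are invented for illustration and are not the paper's actual flake model:

```python
import numpy as np

def network_resistance(conductances, n_nodes, src, sink):
    """Equivalent resistance of a resistor network by nodal analysis.

    conductances: dict mapping node pairs (i, j) to conductance (1/ohm).
    Builds the network Laplacian, injects 1 A at src and extracts it at
    sink (grounded), then solves Kirchhoff's current law for the voltages.
    """
    G = np.zeros((n_nodes, n_nodes))
    for (i, j), g in conductances.items():
        G[i, i] += g
        G[j, j] += g
        G[i, j] -= g
        G[j, i] -= g
    I = np.zeros(n_nodes)
    I[src], I[sink] = 1.0, -1.0
    keep = [k for k in range(n_nodes) if k != sink]   # ground the sink
    V = np.zeros(n_nodes)
    V[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
    return V[src] - V[sink]   # R = V / I with I = 1 A

# Two 1-ohm resistors in series (nodes 0-1-2): equivalent resistance 2 ohm.
r_series = network_resistance({(0, 1): 1.0, (1, 2): 1.0}, 3, 0, 2)

# Gauge factor from an unstrained vs strained network (values invented):
R0, R_strained, strain = 2.0, 2.2, 0.01
gf = (R_strained - R0) / R0 / strain   # -> 10.0
```

For the real film, the same linear solve is applied to the much larger network that is regenerated at each strain level.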
Furthermore, by simulating the possible variable parameters (out-of-plane resistance, in-plane resistance, and flake size), we obtained the trend of the gauge factor with each of these parameters. Comparison with experimental data verified the feasibility of our model and analysis. Increasing the out-of-plane resistance of the graphene flakes and the initial resistance of the flake-network sensor both improved the gauge factor, and smaller graphene flakes gave a greater gauge factor. This work can not only serve as a guideline to improve the sensitivity and applicability of graphene-based strain sensors in the future, but also provides a method for finding the gauge-factor limit of strain sensors based on graphene flakes. Besides, our method can easily be transferred to predict the gauge factor of strain sensors based on other nano-structured transparent conductors, such as nanowires and carbon nanotubes, or on their hybrids with graphene flakes.
Keywords: graphene, gauge factor, percolative transport, strain sensor
Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient of each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals.
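Two ingredients of the methodology above, intensity-based quantification of bright (fat) pixels and the intensity-gradient step of the Canny pipeline, can be illustrated with a hedged NumPy sketch on a toy grey-scale image. The thresholds and the image are invented and do not reproduce the study's algorithm or parameters:

```python
import numpy as np

def fat_fraction(gray, fat_thresh=0.7, background_thresh=0.05):
    """Fat fraction (%) of a grey-scale slice image as the share of bright
    pixels (fat appears light against darker muscle); thresholds invented.
    gray: 2-D float array with values in [0, 1]."""
    slice_mask = gray > background_thresh   # exclude dark background
    fat_mask = gray > fat_thresh
    return fat_mask.sum() / slice_mask.sum() * 100.0

def sobel_gradient_magnitude(gray):
    """Intensity-gradient step of the Canny pipeline (Sobel operator)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    pad = np.pad(gray, 1, mode='edge')
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()     # horizontal gradient
            gy[i, j] = (win * kx.T).sum()   # vertical gradient
    return np.hypot(gx, gy)

# Toy image: 10x10 slice, right half "fat" (bright), left half "muscle".
img = np.full((10, 10), 0.3)
img[:, 5:] = 0.9
print(round(fat_fraction(img), 1))   # 50.0
```

The gradient magnitude peaks along the muscle/fat boundary column, which is exactly where Canny would place the edge after non-maximum suppression and hysteresis thresholding.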
Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to good fat segmentation, making this simple visual approach for the quantification of the different fat fractions in dry-cured ham slices accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine
Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins
Abstract:
Air pollution is one of the emerging problems in our society. Targets for the reduction of CO₂ emissions address low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce the total cost of ownership (TCO) and the emissions of public transport vehicles (e.g., in bus application). In this context, diesel engines are typically used to form the hybrid propulsion system of the vehicle. Although diesel engine technology has advanced considerably, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels (i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG)) represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze specific benefits of the aforementioned hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus application. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven (quasi-static) models of the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without zero-emission zone) and with different energy management strategies.
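The map-driven (quasi-static) engine modelling mentioned above can be illustrated with a small fuel-map lookup in Python: the fuel rate is read from a measured (speed, torque) grid by bilinear interpolation, as in backward-facing powertrain simulation. The grid and fuel values below are invented placeholders, not the ORCA engine data:

```python
import numpy as np

# Illustrative quasi-static engine model: fuel rate looked up from a
# (speed, torque) map. All grid values are made up for this sketch.
speeds = np.array([1000.0, 2000.0, 3000.0])   # rpm
torques = np.array([50.0, 100.0, 150.0])      # Nm
fuel_map = np.array([[1.0, 2.0, 3.5],         # g/s at each grid point
                     [2.0, 4.0, 6.5],
                     [3.5, 6.5, 10.0]])

def fuel_rate(speed, torque):
    """Bilinear interpolation on the engine fuel map."""
    i = int(np.clip(np.searchsorted(speeds, speed) - 1, 0, len(speeds) - 2))
    j = int(np.clip(np.searchsorted(torques, torque) - 1, 0, len(torques) - 2))
    ts = (speed - speeds[i]) / (speeds[i + 1] - speeds[i])
    tt = (torque - torques[j]) / (torques[j + 1] - torques[j])
    return (fuel_map[i, j] * (1 - ts) * (1 - tt)
            + fuel_map[i + 1, j] * ts * (1 - tt)
            + fuel_map[i, j + 1] * (1 - ts) * tt
            + fuel_map[i + 1, j + 1] * ts * tt)

print(fuel_rate(1500.0, 75.0))   # midpoint of the four corner values: 2.25
```

An energy management strategy then repeatedly queries such maps (fuel, CO₂, NOx) along a drive cycle and integrates the rates to obtain mission-level emissions and fuel cost.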
In addition to the emissions, the downsizing potential of the combustion engine has been analysed to minimize the powertrain TCO (pTCO) of plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine has advantages with respect to both emissions and pTCO. The pTCO is 10% lower, CO₂ emissions are 13% lower, and the NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.
Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG
Geochemical Study of Natural Bitumen, Condensate and Gas Seeps from Sousse Area, Central Tunisia
Authors: Belhaj Mohamed, M. Saidi, N. Boucherab, N. Ouertani, I. Bouazizi, M. Ben Jrad
Abstract:
Natural hydrocarbon seepage has helped petroleum exploration as a direct indicator of gas and/or oil subsurface accumulations. Surface macro-seeps are generally an indication of a fault in an active Petroleum Seepage System belonging to a Total Petroleum System. This paper describes a case study in which multiple analytical techniques were used to identify and characterize trace petroleum-related hydrocarbons and other volatile organic compounds in groundwater samples collected from the Sousse aquifer (Central Tunisia). The analytical techniques used for the water samples included gas chromatography-mass spectrometry (GC-MS), capillary GC with flame-ionization detection, compound-specific isotope analysis, and Rock-Eval pyrolysis. The objective of the study was to confirm the presence of gasoline and other petroleum products or other volatile organic pollutants in those samples, in order to assess the respective implication of each of the potentially responsible parties in the contamination of the aquifer. In addition, the degree of contamination at different depths in the aquifer was also of interest. The oil and gas seeps have been investigated using biomarker and stable carbon isotope analyses to perform oil-oil and oil-source rock correlations. The seepage gases are characterized by high CH4 content, very low δ13C-CH4 values (-71.9‰), high C1/C1–5 ratios (0.95–1.0), light deuterium-hydrogen isotope ratios (-198‰), and light δ13C-C2 and δ13C-CO2 values (-23.8‰ and -23.8‰, respectively), indicating a thermogenic origin with a contribution of biogenic gas. An organic geochemistry study was carried out on more than ten oil seep samples. This study includes light hydrocarbon and biomarker analyses (hopanes, steranes, n-alkanes, acyclic isoprenoids, and aromatic steroids) using GC and GC-MS.
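The kind of screening implied by these numbers (gas dryness C1/C1–5 versus methane carbon isotope ratio) can be sketched as a toy classifier. The thresholds below are rough textbook screening values chosen for illustration only, not the criteria applied in the study:

```python
def gas_origin(c1_over_c1_5, d13c_ch4_permil):
    """Rough gas-origin screen from gas dryness and methane carbon isotopes.

    Illustrative textbook thresholds (assumed, not from this study):
    very light d13C-CH4 (< -60 permil) points to biogenic methane input;
    heavier d13C with dry gas (C1/C1-5 >= 0.95) suggests thermogenic dry
    gas; heavier d13C with wetter gas suggests thermogenic wet gas.
    """
    if d13c_ch4_permil < -60.0:
        return "biogenic (or mixed) signature"
    if c1_over_c1_5 < 0.95:
        return "thermogenic, wet gas"
    return "thermogenic, dry gas"

# The seep values reported above (-71.9 permil, very dry gas) fall in the
# biogenic/mixed field, consistent with the mixed-origin interpretation.
print(gas_origin(0.97, -71.9))
```

A real interpretation combines several such cross-plots (e.g. Bernard-type diagrams) rather than a single rule.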
The studied samples show at least two distinct families, suggesting two different crude oil origins. The first group of oil seeps appears to be highly mature, shows evidence of chemical and/or biological degradation, and was derived from a clay-rich source rock deposited under suboxic conditions; it has been sourced mainly by the lower Fahdene (Albian) source rocks. The second group was derived from a carbonate-rich source rock deposited under anoxic conditions, well correlated with the Bahloul (Cenomanian-Turonian) source rock.
Keywords: biomarkers, oil and gas seeps, organic geochemistry, source rock
Lake Water Surface Variations and Its Influencing Factors in Tibetan Plateau in Recent 10 Years
Authors: Shanlong Lu, Jiming Jin, Xiaochun Wang
Abstract:
The Tibetan Plateau has the largest number of inland lakes at the highest elevation on the planet. These large lakes are mostly in a natural state and are little affected by human activities. Their shrinking or expansion can truly reflect regional climate and environmental changes, making them sensitive indicators of global climate change. However, because the plateau is sparsely populated and its natural conditions are harsh, it is difficult to obtain lake-change data effectively, which has limited understanding of the temporal and spatial processes of lake water changes and their influencing factors. Using MODIS (Moderate Resolution Imaging Spectroradiometer) MOD09Q1 surface reflectance images as basic data, this study produced an 8-day lake water surface data set of the Tibetan Plateau from 2000 to 2012 at 250 m spatial resolution, with an extraction method that combines buffer analysis of lake water surface boundaries with lake-by-lake determination of segmentation thresholds. Based on this data set, the lake water surface variations and their influencing factors were analyzed, using four typical natural geographical zones (Eastern Qinghai and Qilian, Southern Qinghai, Qiangtang, and Southern Tibet) and the watersheds of the top 10 lakes (Qinghai, Siling Co, Namco, Zhari NamCo, Tangra Yumco, Ngoring, UlanUla, Yamdrok Tso, Har and Gyaring) as the analysis units. The accuracy analysis indicates that, compared with the water surface data of 134 sample lakes extracted from 30 m Landsat TM (Thematic Mapper) images, the average overall accuracy of the lake water surface data set is 91.81%, with average commission and omission errors of 3.26% and 5.38%, respectively; the results also show a strong linear correlation (R² = 0.9991) with the global MODIS water mask data set, with an overall accuracy of 86.30%; and the lake area difference between the Second National Lake Survey and this study is only 4.74%.
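The validation metrics quoted above (overall accuracy, commission error, omission error) all derive from a pixel-wise confusion matrix between a reference water mask and the extracted one. A minimal NumPy sketch with made-up masks:

```python
import numpy as np

def accuracy_metrics(truth, pred):
    """Water-mask accuracy metrics from boolean arrays:
    overall accuracy      = correctly classified pixels / all pixels,
    commission error (%)  = false water / all predicted water,
    omission error (%)    = missed water / all true water."""
    tp = np.sum(truth & pred)      # water correctly mapped
    fp = np.sum(~truth & pred)     # land mapped as water (commission)
    fn = np.sum(truth & ~pred)     # water missed (omission)
    tn = np.sum(~truth & ~pred)    # land correctly mapped
    overall = (tp + tn) / truth.size * 100.0
    commission = fp / (tp + fp) * 100.0
    omission = fn / (tp + fn) * 100.0
    return overall, commission, omission

# Toy 8-pixel masks: overall 75%, commission 25%, omission 25%.
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], bool)
overall, commission, omission = accuracy_metrics(truth, pred)
```

In the study these rates are computed per sample lake against the Landsat reference masks and then averaged.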
This study provides a reliable data set for lake change research on the plateau in the recent decade. The trend and influencing factor analyses indicate that the total water surface area of lakes on the plateau increased overall, but only lakes with areas larger than 10 km² had statistically significant increases. Furthermore, lakes with areas larger than 100 km² experienced an abrupt change in 2005. In addition, the annual average precipitation of Southern Tibet and Southern Qinghai experienced significant increasing and decreasing trends, with corresponding abrupt changes in 2004 and 2006, respectively. The annual average temperature of Southern Tibet and Qiangtang showed a significant increasing trend with an abrupt change in 2004. The major driver of lake water surface variation in Eastern Qinghai and Qilian, Southern Qinghai and Southern Tibet is the change in precipitation, while that in Qiangtang is the temperature variation.
Keywords: lake water surface variation, MODIS MOD09Q1, remote sensing, Tibetan Plateau
In vitro Antimicrobial Resistance Pattern of Bovine Mastitis Bacteria in Ethiopia
Authors: Befekadu Urga Wakayo
Abstract:
Introduction: Bacterial infections represent major human and animal health problems in Ethiopia. In the face of poor antibiotic regulatory mechanisms, development of antimicrobial resistance (AMR) to commonly used drugs has become a growing health and livelihood threat in the country. Monitoring and control of AMR demand close collaboration between human and veterinary services as well as other relevant stakeholders. However, the risk of AMR transfer from animal to human populations remains poorly explored in Ethiopia. This systematic literature review attempted to give an overview of the AMR challenges of bovine mastitis bacteria in Ethiopia. Methodology: A web-based literature search and analysis strategy was used. The databases considered included PubMed, Google Scholar, the Ethiopian Veterinary Association (EVA) and the Ethiopian Society of Animal Production (ESAP). The key search terms and phrases were: Ethiopia, dairy, cattle, mastitis, bacteria isolation, antibiotic sensitivity and antimicrobial resistance. Ultimately, 15 research reports were used for the current analysis. Data extraction was performed using a structured Microsoft Excel format. AMR prevalence (%) was recorded directly or calculated from reported values. Statistical analysis was performed in SPSS 16. Variables were summarized using frequencies (n or %), mean ± SE and demonstrative box plots. One-way ANOVA and independent t tests were used to evaluate variations in AMR prevalence estimates (ln transformed). Statistical significance was determined at p < 0.05. Results: AMR in bovine mastitis bacteria was investigated in a total of 592 in vitro antibiotic sensitivity trials involving 12 different mastitis bacteria (including 1126 Gram-positive and 77 Gram-negative isolates) and 14 antibiotics. Bovine mastitis bacteria exhibited AMR to most of the antibiotics tested. Gentamycin had the lowest average AMR in both Gram-positive (2%) and Gram-negative (1.8%) bacteria.
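The decade comparison of ln-transformed prevalences described in the methodology can be illustrated with a Welch-style t statistic in Python. The prevalence values below are invented, and the review used SPSS rather than this hand-rolled computation:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples, as would be used
    to compare ln-transformed AMR prevalences between decades (unequal
    variances allowed)."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    va = a.var(ddof=1) / len(a)   # squared standard error of each mean
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Illustrative resistance prevalences (%), ln-transformed before testing,
# mimicking a rise from the 2000s to the 2010s:
p2000s = np.log([10.0, 12.0, 9.0, 11.0])
p2010s = np.log([20.0, 25.0, 22.0, 18.0])
t = welch_t(p2010s, p2000s)   # positive t: higher mean in the 2010s
```

The ln transform is used because prevalence percentages are bounded and right-skewed, so differences on the log scale behave more symmetrically.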
Gram-negative mastitis bacteria showed higher mean in vitro resistance levels to erythromycin (72.6%), tetracycline (56.65%), amoxicillin (49.6%), ampicillin (47.6%), clindamycin (47.2%) and penicillin (40.6%). Among Gram-positive mastitis bacteria, higher mean in vitro resistance was observed for ampicillin (32.8%), amoxicillin (32.6%), penicillin (24.9%), streptomycin (20.2%), penicillinase-resistant penicillins (15.4%) and tetracycline (14.9%). More specifically, S. aureus exhibited high mean AMR against penicillin (76.3%) and ampicillin (70.3%), followed by amoxicillin (45%), streptomycin (40.6%), tetracycline (24.5%) and clindamycin (23.5%). E. coli showed high mean AMR to erythromycin (78.7%), tetracycline (51.5%), ampicillin (49.25%), amoxicillin (43.3%), clindamycin (38.4%) and penicillin (33.8%). Streptococcus spp. demonstrated significantly higher (p = 0.005) mean AMR against kanamycin (>20%) and full sensitivity (100%) to clindamycin. Overall, mean tetracycline (p = 0.013), gentamycin (p = 0.001), polymixin (p = 0.034), erythromycin (p = 0.011) and ampicillin (p = 0.009) resistance was higher in the 2010s than in the 2000s. Conclusion: the review indicated a rising AMR challenge among bovine mastitis bacteria in Ethiopia; the corresponding public health implications demand a deeper, integrated investigation.
Keywords: antimicrobial resistance, dairy cattle, Ethiopia, mastitis bacteria
New Suspension Mechanism for a Formula Car using Camber Thrust
Authors: Shinji Kajiwara
Abstract:
The basic abilities of a vehicle are to “run”, “turn” and “stop”. Safety and comfort when driving on various road surfaces and at various speeds depend on the performance of these basic abilities. Stability and maneuverability are vital in automotive engineering: stability is the ability of the vehicle to return to a stable state when faced with crosswinds and irregular road conditions, while maneuverability is the ability of the vehicle to change direction swiftly according to the driver's steering. Together, they define the driving stability of the vehicle. Since fossil-fueled vehicles remain the main means of transportation today, the environmental factor in automotive engineering is also vital: improving the fuel efficiency of the vehicle reduces overall carbon emissions, and thus the effect of global warming and greenhouse gases on the Earth. Another main focus of automotive engineering is the safety performance of the vehicle, especially given the worrying daily increase in vehicle collisions. With better safety performance, every driver can drive with more confidence. This work focuses on the “turn” ability of the vehicle. By improving this particular ability, the cornering limit of the vehicle can be raised, increasing both stability and maneuverability. In order to improve the cornering limit, a study must be conducted to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration and cornering limit detection. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit.
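The cornering benefit that camber thrust provides can be sketched with a linear tyre model, F_y = C_alpha * alpha + C_gamma * gamma (slip-angle force plus camber thrust). The cornering and camber stiffness values below are illustrative small-angle placeholders, not measured data for any tyre in this work:

```python
def lateral_force(slip_angle_deg, camber_deg, c_alpha=800.0, c_gamma=80.0):
    """Linear tyre model: lateral force (N) from slip angle plus camber
    thrust. c_alpha, c_gamma in N/deg are invented illustrative values;
    valid only for small angles where the tyre behaves linearly."""
    return c_alpha * slip_angle_deg + c_gamma * camber_deg

# Adding 3 deg of camber on top of 2 deg of slip angle contributes camber
# thrust on top of the slip-angle force:
base = lateral_force(2.0, 0.0)         # 1600.0 N
with_camber = lateral_force(2.0, 3.0)  # 1840.0 N
```

A suspension that actively increases useful camber during cornering, as proposed here, trades on exactly this extra C_gamma term before the tyre saturates.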
This research will also study the environmental factor and the stability factor of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel through parameters such as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during cornering. The research will be carried out using a CAE analysis tool: a JSAE Formula machine equipped with the double wishbone system and with the new suspension system will be modeled, and simulations will be conducted to study the performance of both suspension systems.
Keywords: automobile, camber thrust, cornering force, suspension
Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting normal signals from abnormal ones. The data are from both genders, the recording time varies from several seconds to several minutes, and all records are labeled normal or abnormal. Because of the limited recording time and the similarity, in some diseases, between the diseased signal and the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancelation with an adaptive Kalman filter and extraction of the R wave with the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, given the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones.
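After R-peak extraction, the HRV signal reduces to the series of R-R intervals, from which standard time-domain features can be computed as in this NumPy sketch. The R-peak times are invented, and the paper's statistical and nonlinear (return-map) features go beyond these basics:

```python
import numpy as np

def hrv_time_features(r_peak_times_s):
    """Time-domain HRV features from R-peak times (seconds):
    mean RR interval, SDNN (standard deviation of RR intervals) and
    RMSSD (root mean square of successive RR differences), all in ms."""
    rr = np.diff(r_peak_times_s) * 1000.0           # RR intervals in ms
    mean_rr = rr.mean()
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return mean_rr, sdnn, rmssd

# Invented R-peak times for a few beats; mean RR comes out near 805 ms.
peaks = np.array([0.0, 0.80, 1.62, 2.40, 3.22])
mean_rr, sdnn, rmssd = hrv_time_features(peaks)
```

A return (Poincaré) map, mentioned above, would simply plot each RR interval against the next one, rr[:-1] versus rr[1:], and derive nonlinear descriptors from the resulting cloud.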
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research aims at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the magnitude of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions, and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of this information remains hidden from the physician's viewpoint, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
Preliminary Study of Water-Oil Separation Process in Three-Phase Separators Using Factorial Experimental Designs and Simulation
Authors: Caroline M. B. De Araujo, Helenise A. Do Nascimento, Claudia J. Da S. Cavalcanti, Mauricio A. Da Motta Sobrinho, Maria F. Pimentel
Abstract:
Oil production is often accompanied by the joint production of water and gas. During the journey up to the surface, due to severe conditions of temperature and pressure, mixing between these three components normally occurs. Thus, three-phase separation must be one of the first steps performed after crude oil extraction, and the water-oil separation is the most complex and important step, since the presence of water in the process line can increase corrosion and hydrate formation. A wide range of methods can be applied for oil-water separation, the most common being flotation, hydrocyclones, and three-phase separator vessels. In this context, the aim of this paper is to study a system consisting of a three-phase separator, evaluating the influence of three variables (temperature, working pressure and separator type) for two types of oil (light and heavy), by performing two 2³ factorial designs in order to find the best operating condition. The goal is to obtain the greatest oil flow rate in the product stream (m³/h) as well as the lowest percentage of water in the oil stream. The simulation of the three-phase separator was performed using the Aspen Hysys® 2006 simulation software in stationary mode, and the evaluation of the factorial experimental designs was performed using the Statistica® software. From the analysis of the four normal probability plots of effects obtained, it was observed that two- and three-factor interaction effects did not show statistical significance at 95% confidence, since all their values were very close to zero. Similarly, the main effect “separator type” did not show significant statistical influence in any situation. Since the volumetric flows of water, oil and gas were assumed equal in the inlet stream, the separator-type effect may indeed not be significant for the proposed system.
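The effect estimation behind those normal probability plots can be sketched for a 2³ full factorial design in Python. The coded factor matrix follows the standard -1/+1 convention; the response values are made up for the sketch and are not the simulation results:

```python
import numpy as np
from itertools import product

# 2^3 full factorial design in coded units (-1 = low level, +1 = high),
# columns mimicking temperature, pressure and separator type. Responses
# (e.g. oil flow rate, m3/h) are invented for illustration.
levels = np.array(list(product([-1, 1], repeat=3)))
response = np.array([52.0, 50.0, 58.0, 57.0, 49.0, 48.0, 55.0, 54.0])

def main_effect(factor):
    """Main effect = mean response at the +1 level minus mean at -1."""
    col = levels[:, factor]
    return response[col == 1].mean() - response[col == -1].mean()

effects = [main_effect(k) for k in range(3)]
# With these invented numbers, factor 1 dominates (+6.25), while
# factors 0 and 2 are small (-2.75 and -1.25): in a normal probability
# plot of effects, only factor 1 would fall off the line.
```

Two- and three-factor interaction effects are computed the same way, using the products of the corresponding coded columns as the contrast.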
Nevertheless, the main effect “temperature” was significant for both responses (oil flow rate and mass fraction of water in the oil stream), for both light and heavy oil, and the best operating condition occurs with the temperature at its lowest level (30 °C): the higher the temperature, the more light oil components pass into the vapor phase and leave with the gas stream. Furthermore, the higher the temperature, the more water vapor forms, which ends up in the lighter (oil) stream, making the separation process more difficult. Regarding the “working pressure”, this effect was significant only for the oil flow rate, and the best operating condition occurs with the pressure at its highest level (9 bar), since a higher operating pressure indicated, in this case, a lower pressure drop inside the vessel, generating a lower level of turbulence inside the separator. In conclusion, the best operating condition obtained for the proposed system, within the studied range, occurs when the temperature is at its lowest level and the working pressure at its highest level.
Keywords: factorial experimental design, oil production, simulation, three-phase separator
Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of the speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., ‘tokibu’, ‘tipolu’), with no pauses between them (e.g., ‘tokibutipolugopilatokibu’) and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented (‘tokibu’) from new sequences never presented together during exposure (‘kipopi’), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase, and it might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP).
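The TP statistic at the heart of such streams can be computed directly from syllable bigram counts: TP(x to y) = count(xy) / count(x). A short Python sketch with an invented three-word stream, where within-word transitions are deterministic and word-boundary transitions are not:

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP(x -> y) = count of bigram xy / count of x as a bigram onset,
    the statistic assumed to drive segmentation in SL streams."""
    pairs = Counter(zip(stream, stream[1:]))
    onsets = Counter(stream[:-1])
    return {(x, y): n / onsets[x] for (x, y), n in pairs.items()}

# Three invented words 'tokibu', 'tipolu', 'gopila' concatenated so that
# word order varies while word-internal syllable order never does:
stream = ['to', 'ki', 'bu', 'ti', 'po', 'lu', 'go', 'pi', 'la',
          'to', 'ki', 'bu', 'go', 'pi', 'la', 'ti', 'po', 'lu']
tp = transitional_probabilities(stream)
print(tp[('to', 'ki')])   # 1.0: within-word, 'ki' always follows 'to'
print(tp[('bu', 'ti')])   # 0.5: word boundary, the next word varies
```

Infants and children are assumed to exploit exactly these TP dips to posit word boundaries in the continuous stream.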
One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream embedding eight three-syllable nonsense words, four with high TPs and four with low TPs, in order to analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words’ predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children in the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP ‘words’. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
Procedia PDF Downloads 188
317 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition
Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi
Abstract:
Heat transfer in power-law fluids across cylindrical surfaces has numerous engineering applications. These applications comprise areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer, and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantum of research work studying the effects of combined heat transfer and fluid flow across porous media under diversified conditions has increased considerably over the last few decades. Most non-Newtonian fluids of practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have investigated the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power-law fluids along a wedge. In this study, a boundary layer analysis of radiation-mixed convection in power-law fluids along a vertical wedge in a porous medium has been carried out using an implicit finite difference method (the Keller box method). Steady, 2-D laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate through the introduction of a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. The effects of the radiation parameter, variable heat flux parameter, wedge angle parameter ‘m’, and mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids.
The results are also compared with the available data for the analysis of heat transfer in the prescribed range of parameters and are found to be in good agreement. Results detailing the dimensionless local Nusselt number, temperature, and velocity fields have also been presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle is increased, the Nusselt number decreases, whereas it increases with an increase in the value of the heat flux parameter at a given value of the mixed convection parameter. Also, it is observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids are more heat conductive than Newtonian fluids, which in turn are more heat conductive than dilatant fluids. All fluids behave identically in the pure forced convection domain.
Keywords: porous medium, power law fluids, surface heat flux, vertical wedge
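The pseudoplastic/Newtonian/dilatant distinction used above rests on the power-law (Ostwald-de Waele) rheology model. A minimal sketch follows; the consistency index K and flow behavior index n values are arbitrary illustrations, not fitted to any fluid in the study.

```python
def apparent_viscosity(K, n, shear_rate):
    """Ostwald-de Waele model: tau = K * (du/dy)**n, so the apparent
    viscosity is eta = tau / shear_rate = K * shear_rate**(n - 1)."""
    return K * shear_rate ** (n - 1)

# n < 1: pseudoplastic (shear-thinning), n = 1: Newtonian, n > 1: dilatant
viscosities = {
    label: apparent_viscosity(K=0.5, n=n, shear_rate=10.0)
    for n, label in [(0.6, "pseudoplastic"), (1.0, "Newtonian"), (1.4, "dilatant")]
}
```

At a shear rate above unity, the apparent viscosity increases monotonically with n, which is why the three fluid classes behave differently except in the pure forced convection limit.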
Procedia PDF Downloads 312
316 Self-Assembling Layered Double Hydroxide Nanosheets on β-FeOOH Nanorods for Reducing Fire Hazards of Epoxy Resin
Abstract:
Epoxy resin (EP), one of the most important thermosetting polymers, is widely applied in various fields due to its desirable properties, such as excellent electrical insulation, low shrinkage, outstanding mechanical stiffness, satisfactory adhesion, and solvent resistance. However, like most polymeric materials, EP has fatal drawbacks, including inherent flammability and a high yield of toxic smoke, which restrict its application in fields requiring fire safety. It therefore remains a challenge and an interesting subject to develop new flame retardants that not only remarkably improve flame retardancy but also give the modified resins low toxic gas generation. In recent work, polymer nanocomposites based on nanohybrids that contain two or more kinds of nanofillers have drawn intensive interest, as they can realize performance enhancements. The earlier realization of hybrids of carbon nanotubes (CNTs) and molybdenum disulfide provides a novel route to decorate layered double hydroxide (LDH) nanosheets on the surface of β-FeOOH nanorods; the deposited LDH nanosheets can fill the network and promote the working efficiency of the β-FeOOH nanorods. Moreover, the synergistic effects between LDH and β-FeOOH can be anticipated to have potential applications in reducing the fire hazards of EP composites through the combination of condensed-phase and gas-phase mechanisms. As reported, β-FeOOH nanorods can act as a core for preparing hybrid nanostructures in combination with other nanoparticles through electrostatic attraction using a layer-by-layer assembly technique. In this work, LDH nanosheet-wrapped β-FeOOH nanorod (LDH-β-FeOOH) hybrids were synthesized by a facile method, with the purpose of combining the characteristics of one-dimensional (1D) and two-dimensional (2D) materials to improve the fire resistance of epoxy resin. The hybrids showed good dispersion in the EP matrix with no obvious aggregation.
Thermogravimetric analysis and cone calorimeter tests confirmed that incorporating LDH-β-FeOOH hybrids into the EP matrix at a loading of 3% could obviously improve the fire safety of the EP composites. The plausible flame retardancy mechanism was explored by thermogravimetric analysis coupled with infrared spectroscopy (TG-IR) and X-ray photoelectron spectroscopy. The mechanism was attributed to combined condensed-phase and gas-phase effects: nanofillers were transferred to the surface of the matrix during combustion, which could not only shield the EP matrix from external radiation and heat feedback from the fire zone but also efficiently retard the transport of oxygen and flammable pyrolysis products.
Keywords: fire hazards, toxic gases, self-assembly, epoxy
Procedia PDF Downloads 174
315 Factors Associated with Hand Functional Disability in People with Rheumatoid Arthritis: A Systematic Review and Best-Evidence Synthesis
Authors: Hisham Arab Alkabeya, A. M. Hughes, J. Adams
Abstract:
Background: People with rheumatoid arthritis (RA) continue to experience problems with hand function despite new drug advances and targeted medical treatment. Consequently, it is important to identify the factors that influence the impact of RA disease on hand function. This systematic review identified observational studies that reported factors influencing the impact of RA on hand function. Methods: The MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, and Web of Science databases were searched from January 1990 up to March 2017. Full-text articles published in English that described factors related to hand functional disability in people with RA were selected following predetermined inclusion and exclusion criteria. Pertinent data were thoroughly extracted and documented using a pre-designed data extraction form by the lead author and cross-checked by the review team for completeness and accuracy. Factors related to hand function were classified under the domains of the International Classification of Functioning, Disability, and Health (ICF) framework and health-related factors. Three reviewers independently assessed the methodological quality of the included articles using the Appraisal tool for Cross-Sectional Studies (AXIS). Factors related to hand function that were investigated in two or more studies were explored using a best-evidence synthesis. Results: Twenty articles from 19 studies met the inclusion criteria from 1,271 citations; all presented cross-sectional data (five high-quality and 15 low-quality studies), resulting in at best limited evidence in the best-evidence synthesis. For the factors classified under the ICF domains, the best-evidence synthesis indicates that a range of body structure and function factors were related to hand functional disability. However, the key factors were hand strength, disease activity, and pain intensity. Low functional status (physical, emotional, and social) was found to be related to limited hand function.
For personal factors, there is limited evidence that gender is not related to hand function, whereas conflicting evidence was found regarding the relationship between age and hand function. In the domain of environmental factors, there was limited evidence that work activity was not related to hand function. Regarding health-related factors, there was limited evidence that the level of rheumatoid factor (RF) was not related to hand function. Finally, conflicting evidence was found regarding the relationships between hand function and disease duration and general health status. Conclusion: Studies focused on body structure and function factors, highlighting a lack of investigation into personal and environmental factors when considering the impact of RA on hand function. The level of evidence that exists was limited, but it identified that modifiable factors such as grip or pinch strength, disease activity, and pain are the most influential factors on hand function in people with RA. The review findings suggest that important personal and environmental factors that impact hand function in people with RA are not yet considered or reported in clinical research. Well-designed longitudinal, preferably cohort, studies are now needed to better understand the causality between personal and environmental factors and hand functional disability in people with RA.
Keywords: factors, hand function, rheumatoid arthritis, systematic review
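As a rough illustration of how a best-evidence synthesis grades findings like "limited" or "conflicting", the sketch below applies simplified rules. The thresholds and labels are assumptions chosen for illustration; the review's actual grading criteria may differ.

```python
def evidence_level(findings):
    """findings: list of (quality, direction) pairs, with quality in
    {'high', 'low'} and direction +1 if an association was found, -1 if not.
    Returns an illustrative best-evidence-synthesis grade."""
    if len(findings) < 2:
        return "insufficient"          # factor examined in fewer than 2 studies
    if len({d for _, d in findings}) > 1:
        return "conflicting"           # studies disagree on direction
    if any(q == "high" for q, _ in findings):
        return "moderate"              # consistent, with high-quality support
    return "limited"                   # consistent, low-quality studies only
```

Under these rules, a factor backed only by consistent low-quality studies grades as "limited", matching the review's observation that five high-quality out of twenty articles yields at best limited evidence.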
Procedia PDF Downloads 150
314 Integrated Passive Cooling Systems for Tropical Residential Buildings: A Review through the Lens of Latent Heat Assessment
Authors: O. Eso, M. Mohammadi, J. Darkwa, J. Calautit
Abstract:
Residential buildings are responsible for 22% of the global end-use energy demand and 17% of global CO₂ emissions. Tropical climates in particular present higher latent heat gains, leading to larger cooling loads. However, the cooling processes are mostly based on conventional mechanical air-conditioning systems, which are energy- and carbon-intensive technologies. Passive cooling systems have in the past been considered as alternative technologies for minimizing energy consumption in buildings. Nevertheless, replacing mechanical cooling systems with passive ones will require a careful assessment of the passive cooling system's heat transfer to determine whether it can outperform its conventional counterpart. This is because internal heat gains, indoor-outdoor heat transfer, and heat transfer through the envelope all affect the performance of passive cooling systems. While many studies have investigated sensible heat transfer in passive cooling systems, few have focused on their latent heat transfer capabilities. Furthermore, combining heat prevention, heat modulation, and heat dissipation to passively cool indoor spaces in tropical climates is critical to achieving thermal comfort. Since individual passive cooling systems use only one of these three approaches at a time, integrating more than one passive cooling system for effective indoor latent heat removal while still saving energy is studied here. This study is a systematic review of recently published peer-reviewed journal articles on integrated passive cooling systems for tropical residential buildings. The missing links in the experimental and numerical studies with regard to latent heat reduction interventions are presented. Energy simulation studies of integrated passive cooling systems in tropical residential buildings are also discussed. The review has shown that a comfortable indoor environment is attainable when two or more passive cooling systems are integrated in tropical residential buildings.
Improvements occur in the heat transfer rate and cooling performance of passive cooling systems when thermal energy storage systems such as phase change materials are included. Integrating passive cooling systems in tropical residential buildings can reduce energy consumption by 6-87% while achieving up to a 17.55% reduction in indoor heat flux. The review has highlighted a lack of numerical studies regarding passive cooling system performance in tropical savannah climates. In addition, detailed studies are required to establish suitable latent heat transfer rates in passive cooling ventilation devices under this climate category. This should be considered in subsequent studies. The conclusions and outcomes of this study will help researchers understand the overall energy performance of integrated passive cooling systems in tropical climates and help them identify and design suitable climate-specific options for residential buildings.
Keywords: energy savings, latent heat, passive cooling systems, residential buildings, tropical residential buildings
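The latent component of a ventilation cooling load, central to the assessment above, can be estimated with the standard psychrometric relation Q = ρ · V̇ · h_fg · Δw. The airflow and humidity ratios below are illustrative humid-tropical values, not figures from the review.

```python
def latent_load_w(airflow_m3_s, w_outdoor, w_indoor, rho_air=1.2, h_fg=2.45e6):
    """Latent cooling load (W) of a ventilation airstream:
    Q = rho * V_dot * h_fg * (w_outdoor - w_indoor),
    with w the humidity ratio in kg of water vapour per kg of dry air,
    rho the air density (kg/m^3) and h_fg the latent heat of
    vaporization of water (J/kg)."""
    return rho_air * airflow_m3_s * h_fg * (w_outdoor - w_indoor)

# Illustrative case: 50 L/s of outdoor air at w = 0.018 kg/kg brought
# down to an indoor humidity ratio of 0.010 kg/kg -> roughly 1.2 kW latent
q_latent = latent_load_w(0.05, 0.018, 0.010)
```

The size of this term relative to the sensible load is what makes latent-heat assessment of passive systems in the tropics so consequential.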
Procedia PDF Downloads 149
313 Cartilage Mimicking Coatings to Increase the Life-Span of Bearing Surfaces in Joint Prosthesis
Authors: L. Sánchez-Abella, I. Loinaz, H-J. Grande, D. Dupin
Abstract:
Aseptic loosening remains the principal cause of revision in total hip arthroplasty (THA). For long-term implantations, submicron particles are generated in vivo due to the inherent wear of the prosthesis. When this occurs, macrophages undergo phagocytosis and secrete bone-resorptive cytokines, inducing osteolysis and hence loosening of the implanted prosthesis. Therefore, new technologies are required to reduce the wear of the bearing materials and thereby increase the life-span of the prosthesis. Our strategy focuses on the surface modification of the bearing materials with a hydrophilic coating based on cross-linked water-soluble (meth)acrylic monomers to improve their tribological behavior. These coatings are biocompatible, with high swelling capacity and antifouling properties, mimicking the properties of natural cartilage, i.e., wear resistance with a permanently hydrated layer that prevents prosthesis damage. Cartilage-mimicking coatings may also be used to protect medical device surfaces from damage and scratches that would compromise their integrity and hence their safety. However, there are only a few reports on the mechanical and tribological characteristics of this type of coating. Clear beneficial advantages of this coating have been demonstrated under different conditions and on different materials, such as ultra-high molecular weight polyethylene (UHMWPE), cross-linked polyethylene (XLPE), carbon-fiber-reinforced polyetheretherketone (CFR-PEEK), cobalt-chromium (CoCr), stainless steel, zirconia-toughened alumina (ZTA), and alumina. Using routine tribological experiments, the wear of the UHMWPE substrate was decreased by 75% against alumina, ZTA, and stainless steel. For the coated CFR-PEEK substrate, the amount of material lost against ZTA and CoCr was at least 40% lower. Experiments on a hip simulator allowed coated ZTA femoral heads and coated UHMWPE cups to be validated, with an 80% decrease in material loss.
Further experiments on a hip simulator with abrasive particles (1-micron alumina particles) added during 3 million cycles, out of a total of 6 million, demonstrated a decrease of around 55% in wear compared to uncoated UHMWPE and uncoated XLPE. In conclusion, CIDETEC's hydrogel coating technology is versatile and can be adapted to protect a large range of surfaces, even under abrasive conditions.
Keywords: cartilage, hydrogel, hydrophilic coating, joint
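The percentage wear reductions quoted in this abstract follow from straightforward gravimetric comparisons. A trivial sketch of the calculation; the mass-loss numbers are hypothetical, chosen only to reproduce a 75% reduction, not measured values from the study.

```python
def wear_reduction_pct(uncoated_loss_mg, coated_loss_mg):
    """Percentage reduction in gravimetric wear attributable to a coating:
    100 * (uncoated - coated) / uncoated."""
    return 100.0 * (uncoated_loss_mg - coated_loss_mg) / uncoated_loss_mg

# Hypothetical mass losses after a fixed number of tribometer cycles
reduction = wear_reduction_pct(uncoated_loss_mg=20.0, coated_loss_mg=5.0)
```

The same ratio applies whether wear is reported as mass loss or as volumetric loss, provided both specimens share the same density.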
Procedia PDF Downloads 119
312 Conservation Agriculture under Mediterranean Climate: Effects on below and Above-Ground Processes during Wheat Cultivation
Authors: Vasiliki Kolake, Christos Kavalaris, Sofia Megoudi, Maria Maxouri, Panagiotis A. Karas, Aris Kyparissis, Efi Levizou
Abstract:
Conservation agriculture (CA) is a production system approach that can tackle the challenges of climate change, mainly through facilitating carbon storage in the soil and increasing crop resilience. This is extremely important for the vulnerable Mediterranean agroecosystems, which already face adverse environmental conditions. The agronomic practices used in CA, i.e., permanent soil cover and no-tillage, result in reduced soil erosion and increased soil organic matter, preservation of water, and long-term improvement of soil quality and fertility. Thus, the functional characteristics and processes of the soil are considerably affected by the implementation of CA. The aim of the present work was to assess the effects of CA on soil nitrification potential and mycorrhizal colonization in relation to above-ground production in a wheat field. Two adjacent but independent field sites of 1.5 ha each were used (Thessaly plain, Central Greece), comprising the no-till and conventional tillage treatments. The no-tillage site was covered by residues of the previous crop (cotton). Potential nitrification and the nitrate and ammonium content of the soil were measured at two different soil depths (3 and 15 cm) at 20-day intervals throughout the growth period. Additionally, the leaf area index (LAI) was monitored over the same time course. Mycorrhizal colonization was measured at the commencement and end of the experiment. At the final harvest, total yield and plant biomass were also recorded. The results indicate that wheat yield was considerably favored by CA practices, exhibiting a 42% increase compared to the conventional tillage treatment. The superior performance of the CA crop was also reflected in the above-ground plant biomass, where a 26% increase was recorded. LAI, which is considered a reliable growth index, did not show statistically significant differences between treatments throughout the growth period.
On the contrary, significant differences were recorded in endomycorrhizal colonization one day before the final harvest, with CA plants exhibiting 20% colonization, while the conventional tillage plants hardly reached 1%. The ongoing analyses of potential nitrification measurements, as well as nitrate and ammonium determinations, will shed light on the effects of CA on key processes in the soil. These results will complete the assessment of the impact of CA on below- and above-ground processes during wheat cultivation under the Mediterranean climate.
Keywords: conservation agriculture, LAI, mycorrhizal colonization, potential nitrification, wheat, yield
Procedia PDF Downloads 133
311 Enhancement of Fracture Toughness for Low-Temperature Applications in Mild Steel Weldments
Authors: Manjinder Singh, Jasvinder Singh
Abstract:
Existing theories about the Titanic/Liberty ship and Sydney bridge accidents, together with practical experience, generated an interest in developing weldments that have high toughness under sub-zero temperature conditions. The purpose was to protect the joint from undergoing the ductile-to-brittle transition (DBT) when ambient temperatures reach sub-zero levels. Metallurgical improvements such as lowering the carbon content or adding deoxidizing elements like Mn and Si have been effective in preventing fracture (cracking) in weldments at low temperature. In the present research, an attempt has been made to investigate the reason behind the ductile-to-brittle transition of mild steel weldments subjected to sub-zero temperatures and a method for its mitigation. Nickel was added to weldments using manual metal arc welding (MMAW) to prevent the DBT, counteracting the progressive reduction in Charpy impact values as temperature is lowered. The variation in toughness with respect to the nickel content added to the weld pool was analyzed quantitatively to evaluate the rise in toughness value with increasing nickel amount. The impact performance of welded specimens was evaluated by Charpy V-notch impact tests at various temperatures (20 °C, 0 °C, -20 °C, -40 °C, -60 °C). A notch was made in the weldments, as notch-sensitive failure is particularly likely to occur at zones of high stress concentration caused by a notch. The effect of nickel on the weldments at various temperatures was then studied by mechanical and metallurgical tests. It was noted that a large gain in impact toughness could be achieved by adding nickel. The highest yield strength (462 J) in combination with good impact toughness (over 220 J at -60 °C) was achieved with an alloying content of 16 wt.% nickel. Based on metallurgical behavior, it was concluded that the weld metals solidify as austenite with increasing nickel. The microstructure was characterized using optical microscopy and high-resolution scanning electron microscopy (SEM).
At inter-dendritic regions, mainly martensite was found. In the dendrite core regions of the low-carbon weld metals, a mixture of upper bainite, lower bainite, and a novel constituent, coalesced bainite, formed. Coalesced bainite was characterized by large bainitic ferrite grains with cementite precipitates and is believed to form when the bainite and martensite start temperatures are close to each other. Mechanical properties could be rationalized in terms of microstructural constituents as a function of nickel content.
Keywords: MMAW, toughness, DBT, notch, SEM, coalesced bainite
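Charpy energy versus temperature data of the kind collected here are commonly summarized with a hyperbolic-tangent transition curve, E(T) = A + B·tanh((T − T0)/C). The sketch below uses illustrative parameters; the shelf energies and transition temperature are assumptions, not the study's fitted values.

```python
import math

def charpy_energy(T, lower=20.0, upper=220.0, T0=-20.0, C=25.0):
    """Hyperbolic-tangent model for a ductile-to-brittle transition curve:
    E(T) = A + B * tanh((T - T0) / C), with A = (upper + lower) / 2 and
    B = (upper - lower) / 2. 'lower'/'upper' are the brittle and ductile
    shelf energies (J), T0 the mid-transition temperature (deg C), and C
    sets the width of the transition."""
    A = (upper + lower) / 2.0
    B = (upper - lower) / 2.0
    return A + B * math.tanh((T - T0) / C)
```

Alloying additions such as nickel are then conveniently characterized by how far they shift T0 toward lower temperatures, i.e., how much of the ductile shelf survives at -60 °C.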
Procedia PDF Downloads 526
310 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be directly used for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys.
Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis and soil moisture measurements determined by soil electrical conductivity showed that they were similar. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural field mapping of soil elements.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
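Turning scattered GPS-tagged measurements into a continuous map, as done here in ArcGIS, can be sketched with simple inverse-distance weighting. This is a generic stand-in for whatever interpolation the authors' software actually uses, and the sample values are hypothetical.

```python
def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from irregularly
    spaced samples = [(xi, yi, value), ...] -- a common simple scheme
    for gridding point measurements into a raster map."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v  # exactly at a sample point
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical carbon contents (%) at two GPS-derived field coordinates
grid_samples = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0)]
```

Evaluating `idw` on a regular grid of (x, y) points yields the raster that a GIS then renders as the elemental distribution map.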
Procedia PDF Downloads 134
309 Hydrogen Induced Fatigue Crack Growth in Pipeline Steel API 5L X65: A Combined Experimental and Modelling Approach
Authors: H. M. Ferreira, H. Cockings, D. F. Gordon
Abstract:
Climate change is driving a transition in the energy sector, with low-carbon energy sources such as hydrogen (H2) emerging as an alternative to fossil fuels. However, the successful implementation of a hydrogen economy requires an expansion of hydrogen production, transportation, and storage capacity. The costs associated with this transition are high but can be partly mitigated by adapting the current oil and natural gas networks, such as pipelines, an important component of the hydrogen infrastructure, to transport pure or blended hydrogen. Steel pipelines are designed to withstand fatigue, one of the most common causes of pipeline failure. However, it is well established that some materials, such as steel, can fail prematurely in service when exposed to hydrogen-rich environments. Therefore, it is imperative to evaluate how defects (e.g., inclusions, dents, and pre-existing cracks) will interact with hydrogen under cyclic loading and, ultimately, to what extent hydrogen-induced failure will limit the service conditions of steel pipelines. This presentation will explore how the exposure of API 5L X65 to a hydrogen-rich environment and cyclic loads influences its susceptibility to hydrogen-induced failure. That evaluation will be performed by a combination of several techniques, such as hydrogen permeation testing (ISO 17081:2014) and fatigue crack growth (FCG) testing (ISO 12108:2018 and AFGROW modelling), combined with microstructural and fractographic analysis. The development of an FCG test setup coupled with an electrochemical cell will be discussed, along with the advantages and challenges of measuring crack growth rates in electrolytic hydrogen environments.
A detailed assessment of several electrolytic charging conditions will also be presented, using hydrogen permeation testing as a method to correlate the different charging settings to equivalent hydrogen concentrations and effective diffusivity coefficients, not only for the base material but also for the heat-affected zone and weld of the pipelines. The experimental work is being complemented with AFGROW, a useful FCG modelling software package that has helped inform testing parameters and which will also be developed to ultimately help industry experts perform structural integrity analysis and remnant life characterisation of pipeline steels under representative conditions. The results from this research will make it possible to conclude whether there is an acceleration of the crack growth rate of API 5L X65 under the influence of a hydrogen-rich environment, an important aspect that needs to be reflected in standards and codes of practice on pipeline integrity evaluation and maintenance.
Keywords: AFGROW, electrolytic hydrogen charging, fatigue crack growth, hydrogen, pipeline, steel
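Fatigue crack growth of the sort modelled in AFGROW is most commonly described in its stable regime by Paris' law, da/dN = C·(ΔK)^m. The sketch below integrates it numerically to estimate cycles to grow a crack between two sizes; the constants C, m, the stress range, and the geometry factor Y are illustrative, not calibrated to API 5L X65.

```python
import math

def paris_cycles(a0, af, d_sigma, C, m, Y=1.0, steps=20000):
    """Midpoint-rule integration of Paris' law da/dN = C * dK**m with
    dK = Y * d_sigma * sqrt(pi * a). Crack sizes in metres, stress range
    d_sigma in MPa, so dK is in MPa*sqrt(m). Returns cycles to grow the
    crack from a0 to af."""
    da = (af - a0) / steps
    cycles = 0.0
    for i in range(steps):
        a_mid = a0 + (i + 0.5) * da       # midpoint of this crack increment
        dK = Y * d_sigma * math.sqrt(math.pi * a_mid)
        cycles += da / (C * dK ** m)      # dN = da / (da/dN)
    return cycles

# Illustrative, uncalibrated values: grow a 1 mm crack to 10 mm at 100 MPa
n_cycles = paris_cycles(a0=0.001, af=0.01, d_sigma=100.0, C=1e-11, m=3.0)
```

Hydrogen-accelerated growth would appear in such a model as a larger effective C (or m), shrinking the predicted remnant life, which is exactly the comparison the permeation-correlated FCG tests are designed to quantify.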
Procedia PDF Downloads 106
308 The Influence of Salt Body of J. Ech Cheid on the Maturity History of the Cenomanian-Turonian Source Rock
Authors: Mohamed Malek Khenissi, Mohamed Montassar Ben Slama, Anis Belhaj Mohamed, Moncef Saidi
Abstract:
Northern Tunisia is well known for its varied and complex structural and geological zones, which are the result of a geodynamic history that extends from the early Mesozoic era to the present. One of these zones is the salt province, where the halokinesis process is manifested in a number of NE/SW salt structures such as Jebel Ech Cheid, which represents masses of material characterized by high plasticity and low density. These salt mass extrusions developed due to an extension that lasted from the Late Triassic to the Late Cretaceous. The evolution of salt bodies within sedimentary basins has not only contributed to modifying the architecture of the basin, but also has certain geochemical effects that mainly concern the source rocks surrounding them. It has been demonstrated that the presence of salt structures within sedimentary basins can influence their temperature distribution and thermal history. Moreover, it creates heat flux anomalies that may affect the maturity of organic matter and the timing of hydrocarbon generation. Field samples of the Bahloul source rock (Cenomanian-Turonian) were collected from different sites all around the Ech Cheid salt structure and evaluated using Rock-Eval pyrolysis and GC/MS techniques in order to assess the degree of maturity evolution and the heat flux anomalies in the different zones analyzed. The total organic carbon (TOC) values range between 1 and 9%, and Tmax ranges between 424 and 445 °C; the distributions of the source rock biomarkers, both saturated and aromatic, change in a regular fashion with increasing maturity, as shown in the chromatography results such as the Ts/(Ts+Tm) ratios, 22S/(22S+22R) values for C31 homohopanes, and ββ/(ββ+αα)20R and 20S/(20S+20R) ratios for C29 steranes, which give consistent maturity indications for the field samples.
These analyses were carried out to interpret the maturity evolution and the heat flux around the Ech Cheid salt structure through geological history. They also aim to demonstrate that the salt structure can have a direct effect on the geothermal gradient of the basin and on the maturity of the Bahloul Formation source rock. The organic matter has reached different stages of thermal maturity but delineates a general increasing maturity trend. Our study confirms that the J. Ech Cheid salt body had, on the one hand, a strong influence on the local distribution of anoxic depocentres, at least during Cenomanian-Turonian time. On the other hand, the thermal anomaly near the salt mass has affected the maturity of the Bahloul Formation.
Keywords: Bahloul formation, depocentre, GC/MS, rock-eval
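The biomarker maturity indicators cited above (Ts/(Ts+Tm), 22S/(22S+22R), 20S/(20S+20R)) are simple ratios of integrated GC/MS peak areas. A minimal sketch follows; the peak names are illustrative labels and the integrated areas are hypothetical, not measured values from the study.

```python
def maturity_ratios(peaks):
    """Standard GC/MS maturity ratios from integrated peak areas.
    'peaks' maps compound labels to areas; each ratio rises toward an
    equilibrium value as thermal maturity increases."""
    return {
        "Ts/(Ts+Tm)": peaks["Ts"] / (peaks["Ts"] + peaks["Tm"]),
        "22S/(22S+22R) C31": peaks["C31_22S"] / (peaks["C31_22S"] + peaks["C31_22R"]),
        "20S/(20S+20R) C29": peaks["C29_20S"] / (peaks["C29_20S"] + peaks["C29_20R"]),
    }

# Hypothetical integrated areas for one sample
ratios = maturity_ratios({"Ts": 40.0, "Tm": 60.0,
                          "C31_22S": 55.0, "C31_22R": 45.0,
                          "C29_20S": 30.0, "C29_20R": 70.0})
```

Comparing such ratios between sampling sites around the salt body is what lets the thermal anomaly near the salt mass be read directly from the chromatograms.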
Procedia PDF Downloads 241
307 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach
Authors: Razaq A. Olaitan, Ayansina Ayanlade
Abstract:
Lowering the uncertainty that aerosol particles contribute to climate change projections, in both trends and magnitude, requires a better understanding of aerosol properties, which is the main goal of global aerosol research. In order to identify aerosol particle types, optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, this study examined data from the Aerosol Robotic Network (AERONET) ground-based sun/sky scanning radiometer in Ilorin, Nigeria. The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and the Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to compute the monthly, seasonal, and annual means of the data. The distribution of different types of aerosols was analyzed using scatterplots, and the optical properties of the aerosol were investigated using pertinent mathematical relations. Correlation statistics were employed to understand the relationships between particle concentration and properties. Based on the premise that aerosol characteristics must remain consistent in both magnitude and trend across time and space, the study's findings indicate that the types of aerosols identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). Convective wind systems, which frequently carry particles as they blow over long distances in the atmosphere, were responsible for the peak columnar aerosol loadings observed during August of the study period. The study has shown that while coarse-mode particles dominate, fine particles are increasing in the seasonal and annual trends. These trends are linked to biomass burning and human activities in the city.
The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 µm. The investigation also revealed a positive correlation coefficient (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is increasing rapidly in Ilorin, causing changes in aerosol properties and indicating potential health risks from climate change and human influence on geological and environmental systems.
Keywords: aerosol loading, aerosol types, health risks, optical properties
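The Angstrom exponent retrieved from AERONET follows from aerosol optical depth (AOD) measured at two wavelengths via α = −ln(τ₁/τ₂)/ln(λ₁/λ₂). A minimal sketch; the 440/870 nm wavelength pair and AOD values below are illustrative assumptions (440/870 nm is a common AERONET channel pair), not retrievals from the Ilorin site.

```python
import math

def angstrom_exponent(tau1, lam1, tau2, lam2):
    """Angstrom exponent from AOD at two wavelengths:
    alpha = -ln(tau1 / tau2) / ln(lam1 / lam2).
    Larger alpha indicates a finer (smaller-particle) size distribution;
    alpha near 0 indicates coarse particles such as desert dust."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# Illustrative: an AOD spectrum with tau proportional to 1/lambda
alpha_fine = angstrom_exponent(0.5, 440.0, 0.5 * 440.0 / 870.0, 870.0)
```

Scatterplots of α against AOD, as used in the study, are a standard way to separate desert dust, biomass burning, and urban/industrial aerosol types.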
Procedia PDF Downloads 64306 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis
Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin
Abstract:
With their high theoretical energy density (~2600 Wh.kg-1), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms and equilibria involved. Replacing S8 with Li2S as the active material allows the use of safer negative electrodes, such as silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, a high polarization and a lack of reproducibility between tests are observed during the first charge. Differences observed between raw Li2S material (micron-sized) and Li2S produced electrochemically in a battery (nano-sized) suggest that the electrochemical process depends on particle size. The major focus of the presented work is therefore to deepen the understanding of the Li2S charge mechanism, and more precisely to characterize the effect of the initial Li2S particle size on both the mechanism and the electrode preparation process. To this end, Li2S nanoparticles were synthesized by two routes, a liquid-path synthesis and dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles can effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, it has been shown that the classical formulation process - a slurry of polyvinylidene fluoride polymer dissolved in N-methyl-2-pyrrolidone - cannot be used with Li2S nanoparticles, revealing a completely different behavior of the Li2S material toward polymers and organic solvents at the nanometric scale. Coupled operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), were then carried out in order to interpret the poorly understood first charge.
This study discloses that the initial particle size of the active material has a great impact on the working mechanism, and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.Keywords: Li-ion/Sulfur batteries, Li2S nanoparticles effect, Operando characterizations, working mechanism
Procedia PDF Downloads 266305 [Keynote Talk]: Monitoring of Ultrafine Particle Number and Size Distribution at One Urban Background Site in Leicester
Authors: Sarkawt M. Hama, Paul S. Monks, Rebecca L. Cordell
Abstract:
Within the Joaquin project, ultrafine particles (UFP) are continuously measured at one urban background site in Leicester. The main aims are to examine the temporal and seasonal variations in UFP number concentration and size distribution in an urban environment, and to assess the added value of continuous UFP measurements. In addition, relationships of UFP with more commonly monitored pollutants such as black carbon (BC), nitrogen oxides (NOX), particulate matter (PM2.5), and the lung-deposited surface area (LDSA) were evaluated. The effects of meteorological conditions, particularly wind speed and direction, as well as temperature, on the observed distribution of ultrafine particles are detailed. The study presents the results of an experimental investigation into the particle number concentration and size distribution of UFP, BC, and NOX, with measurements taken at the Automatic Urban and Rural Network (AURN) monitoring site in Leicester. The monitoring was performed as part of the EU project JOAQUIN (Joint Air Quality Initiative), supported by the INTERREG IVB NWE program. Between November 2013 and November 2015, total number concentrations (TNC) were measured by a water-based condensation particle counter (W-CPC, TSI model 3783), particle number concentrations (PNC) and size distributions by an ultrafine particle monitor (UFP TSI model 3031), BC by a MAAP (Thermo-5012), and NOX by a NO-NO2-NOx monitor (Thermo Scientific 42i), while a Nanoparticle Surface Area Monitor (NSAM, TSI 3550) was used to measure the LDSA (reported as μm2 cm−3) corresponding to the alveolar region of the lung. Lower average particle number concentrations were observed in summer than in winter, which might be related mainly to particles directly emitted by traffic and to the more favorable conditions of atmospheric dispersion in summer.
Results showed a traffic-related diurnal variation of UFP, BC, NOX and LDSA, with clear morning and evening rush-hour peaks on weekdays and only an evening peak at weekends. Correlation coefficients were calculated between UFP and the other pollutants (BC and NOX); the highest correlations were found in the winter months. Overall, the results support the notion that local traffic emissions were a major contributor to atmospheric particle pollution, and a clear seasonal pattern was found, with higher values during the cold season.Keywords: size distribution, traffic emissions, UFP, urban area
Procedia PDF Downloads 330304 Performance of the Abbott RealTime High Risk HPV Assay with SurePath Liquid Based Cytology Specimens from Women with Low Grade Cytological Abnormalities
Authors: Alexandra Sargent, Sarah Ferris, Ioannis Theofanous
Abstract:
The Abbott RealTime High Risk HPV test (RealTime HPV) is one of five assays clinically validated and approved by the English NHS Cervical Screening Programme (NHSCSP) for HPV triage of low grade dyskaryosis and test-of-cure of treated cervical intraepithelial neoplasia. The assay is a highly automated multiplex real-time PCR test for detecting 14 high risk (hr) HPV types, with simultaneous differentiation of HPV 16 and HPV 18 versus non-HPV 16/18 hrHPV. An endogenous internal control verifies sample cellularity and controls for extraction efficiency and PCR inhibition. Both the original cervical specimen collected in SurePath (SP) liquid-based cytology (LBC) medium (BD Diagnostics) and the SP post-gradient cell pellet (SPG) remaining after cytological processing are CE marked for testing with the RealTime HPV test. During the 2011 NHSCSP validation of new tests, only the original aliquot of SP LBC medium was investigated. The residual sample volume left after cytology slide preparation is low and may not always suffice for repeat HPV testing or for testing of other biomarkers that may be incorporated into testing algorithms in the future. The SPG samples, however, have sufficient volume for additional testing and the necessary laboratory validation procedures. This study investigates the concordance of RealTime HPV results between matched pairs of original SP LBC medium and SP post-gradient cell pellets (SPG) from women with low grade cytological abnormalities. Matched pairs of SP and SPG samples from 750 women with borderline (N = 392) and mild (N = 351) cytology were available for this study. Both specimen types were processed and tested in parallel for the presence of hrHPV with RealTime HPV according to the manufacturer's instructions. HrHPV detection rates and the concordance between test results from matched SP and SPG pairs were calculated.
A total of 743 matched pairs with valid test results on both sample types were available for analysis. An overall agreement of hrHPV test results of 97.5% (k: 0.95) was found for the matched SP/SPG pairs; slightly lower concordance (96.9%; k: 0.94) was observed for the 392 pairs from women with borderline cytology compared to the 351 pairs from women with mild cytology (98.0%; k: 0.95). Partial typing results were highly concordant in matched SP/SPG pairs for HPV 16 (99.1%), HPV 18 (99.7%) and non-HPV16/18 hrHPV (97.0%), respectively. Nineteen matched pairs had discrepant results: 9 from women with borderline cytology and 4 from women with mild cytology were negative on SPG and positive on SP; 3 from women with borderline cytology and 3 from women with mild cytology were negative on SP and positive on SPG. Excellent correlation of hrHPV DNA test results was found between matched pairs of original SP fluid and post-gradient cell pellets from women with low grade cytological abnormalities tested with the Abbott RealTime High-Risk HPV assay, demonstrating robust performance of the test with both specimen types and supporting the utility of the assay for cytology triage with either specimen type.Keywords: Abbott realtime test, HPV, SurePath liquid based cytology, surepath post-gradient cell pellet
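Overall agreement and Cohen's kappa can be computed from a 2x2 table of matched results. The sketch below uses the 13 and 6 discordant counts given above; the split of the 724 concordant pairs into 400 double-positives and 324 double-negatives is an assumption for illustration, and the output lands close to the quoted 97.5% and k = 0.95:

```python
# 2x2 concordance table for the 743 matched SP/SPG pairs. The 13 (SP+/SPG-)
# and 6 (SP-/SPG+) discordant counts come from the abstract; the 400/324
# split of the 724 concordant pairs is a hypothetical illustration.
a, b, c, d = 400, 13, 6, 324      # a: both positive, d: both negative
n = a + b + c + d

po = (a + d) / n                                      # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)                          # Cohen's kappa
print(f"agreement = {po:.1%}, kappa = {kappa:.2f}")
```

Kappa corrects raw agreement for the agreement expected by chance, which is why a 97.4% raw agreement corresponds to k of about 0.95 rather than 0.97.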
Procedia PDF Downloads 259303 Antimicrobial Properties of SEBS Compounds with Copper Microparticles
Authors: Vanda Ferreira Ribeiro, Daiane Tomacheski, Douglas Naue Simões, Michele Pitto, Ruth Marlene Campomanes Santana
Abstract:
Indoor environments, such as car cabins and public transportation vehicles, are places where users are subject to the local air quality. Microorganisms (bacteria, fungi, yeasts) enter these environments through windows and ventilation systems and may use the organic particles present as a growth substrate. In addition, atmospheric pollutants can act as potential carbon and nitrogen sources for some microorganisms. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPEs), fully recyclable and largely used in automotive parts. Metals such as copper and silver have biocidal activity, and producing SEBS compounds by melt blending with these agents can be a good option for plastic parts of ventilation systems and automotive air conditioning, in order to minimize the problems caused by the growth of pathogenic microorganisms. In this sense, the aim of this work was to evaluate the effect of copper microparticles as an antimicrobial agent in compositions based on SEBS/PP/oil/calcite. Copper microparticles were used in weight proportions of 0%, 1%, 2% and 4%. The compounds were prepared using a co-rotating twin-screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The processing parameters were a screw rotation rate of 300 rpm, with a temperature profile between 150 and 190°C. The SEBS-based TPE compounds were injection molded. The compound emissions were characterized by the gravimetric fogging test. The compounds were characterized by physical (density and staining by contact), mechanical (hardness and tensile properties) and rheological (melt volume rate - MVR) properties. Antibacterial properties were evaluated against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli) strains. To evaluate antifungal ability, Aspergillus niger (A. niger), Candida albicans (C. albicans), Cladosporium cladosporioides (C. cladosporioides) and Penicillium chrysogenum (P. chrysogenum) were chosen. The results of the biological tests showed a reduction in bacteria of up to 88% for E. coli and up to 93% for S. aureus. The tests with fungi were inconclusive because the sample without copper also inhibited the development of these microorganisms. The copper addition did not cause significant variations in the mechanical properties, the MVR or the emission behavior of the compounds. The density increased with the copper content of the compounds.Keywords: air conditioner, antimicrobial, copper, SEBS
Procedia PDF Downloads 283302 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique
Authors: Harpal Singh, Sakshi Batra
Abstract:
Widespread and easy access to multimedia content, and the possibility of making numerous copies without significant loss of fidelity, have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology: the concept of embedding some form of data or special pattern (a watermark) in multimedia content. This information can later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person, or simply inform the user about the rights holder. The primary goal of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry grows. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos; it is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes must address serious challenges beyond those of image watermarking, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks such as frame swapping, statistical analysis, rotation, noise, median filtering and cropping. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the fractional Fourier transform (FrFT), the discrete wavelet transform (DWT) and singular value decomposition (SVD) using a redundant wavelet. This scheme utilizes the various transforms to embed watermarks on different layers of a hybrid system.
For this purpose, the video frames are partitioned into layers (RGB) and the watermark is embedded in the frames in two forms, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. The same key is thus required for both watermark embedding and extraction, so the key must be shared between the owner and the verifier via a secure channel. The paper demonstrates the performance using qualitative metrics, namely peak signal-to-noise ratio, structural similarity index and correlation values, and applies several attacks to prove robustness. Experimental results demonstrate that the proposed scheme can withstand a variety of video processing attacks while remaining imperceptible.Keywords: discrete wavelet transform, robustness, video watermarking, watermark
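As a rough illustration of the DWT/SVD side of such a scheme (not the authors' exact algorithm), one can embed a watermark's singular values into the LL sub-band of one colour layer of a frame. The Haar step below is a simplified stand-in for the redundant-wavelet DWT, and the embedding strength alpha is an assumed value:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform, a simplified stand-in for the DWT
    used in the scheme; returns the LL, LH, HL, HH sub-bands."""
    a = (x[0::2] + x[1::2]) / 2          # row-wise average
    d = (x[0::2] - x[1::2]) / 2          # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    hl = (a[:, 0::2] - a[:, 1::2]) / 2
    lh = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

rng = np.random.default_rng(1)
frame = rng.random((8, 8))               # one colour layer of a (tiny) frame
mark = rng.random((4, 4))                # watermark, sized to the LL band

ll, lh, hl, hh = haar_dwt2(frame)
u, s, vt = np.linalg.svd(ll)             # SVD of the host sub-band
uw, sw, vtw = np.linalg.svd(mark)        # SVD of the watermark

alpha = 0.05                             # embedding strength (assumed value)
s_marked = s + alpha * sw                # additive embedding in singular values
ll_marked = u @ np.diag(s_marked) @ vt   # watermarked LL band

# Extraction, given the original singular values s and the key alpha:
sw_recovered = (np.linalg.svd(ll_marked)[1] - s) / alpha
```

Because the watermark lives in the singular values of a low-frequency sub-band, small perturbations of the frame (noise, mild compression) change the recovered values only slightly, which is the intuition behind the robustness of DWT-SVD schemes.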
Procedia PDF Downloads 225301 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, sales of electric vehicles (EVs) have increased dramatically due to maturing technology and decreasing cost. The governments of many countries have made regulations and policies in favor of EVs because of their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is challenging for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate market share, assuming each respondent purchases the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities for a respondent are calculated and the vehicle with the higher utility is taken as his/her choice; once the choices of all respondents are known, an estimate of market share is obtained. (2) Among the attributes, future price is the key attribute dominating consumers' choice. This study adopts a learning-curve assumption to predict the future price of EVs. Based on the learning-curve method and past EV price data, a regression model is established and the probability distribution function of the EV price in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions.
For instance, using one thousand generated future EV prices together with the other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the EV market share provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation
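A minimal sketch of the phase-3 simulation: the part-worth coefficients below are invented stand-ins for the survey estimates, and a lognormal stands in for the price distribution derived from the learning-curve regression. Each price draw yields one market share under the first-choice (highest-utility) rule:

```python
import numpy as np

rng = np.random.default_rng(7)
n_resp, n_draws = 200, 1000

# Hypothetical part-worth summaries per respondent (not the survey estimates):
# net preference for the EV on non-price attributes, and price sensitivity.
base_pref = rng.normal(0.5, 1.0, n_resp)
price_sens = rng.uniform(0.5, 1.5, n_resp)

# Future EV price as a random variable; a lognormal around a reference price
# of 1.0 stands in for the learning-curve regression's distribution.
ev_price = rng.lognormal(mean=0.0, sigma=0.2, size=n_draws)

shares = np.empty(n_draws)
for i, p in enumerate(ev_price):
    utility_gap = base_pref - price_sens * (p - 1.0)  # EV utility minus ICEV utility
    shares[i] = np.mean(utility_gap > 0)              # first-choice rule

print(f"mean share = {shares.mean():.2f}, 90% interval = "
      f"({np.quantile(shares, 0.05):.2f}, {np.quantile(shares, 0.95):.2f})")
```

The spread of the simulated shares, rather than a single point estimate, is exactly the extra information the abstract attributes to the Monte Carlo step.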
Procedia PDF Downloads 70300 Ankle Fracture Management: A Unique Cross Departmental Quality Improvement Project
Authors: Langhit Kurar, Loren Charles
Abstract:
Introduction: In light of the recently published BOAST 12 guidance (August 2016) on the management of ankle fractures, this project aimed to highlight key discrepancies throughout the care trajectory, from admission to the point of discharge, at a district general hospital. A wide breadth of data covering three key domains - accident and emergency, radiology, and orthopaedic surgery - was subsequently stratified, and recommendations on note documentation and outpatient follow-up were made. Methods: A retrospective twelve-month audit was conducted reviewing the results of ankle fracture management in 37 patients. The inclusion criterion was any patient seen at the Darent Valley Hospital (DVH) emergency department with radiographic evidence of an ankle fracture. The exclusion criteria were patients managed solely by nursing staff or who had sustained a purely ligamentous injury. Medical notes, including discharge summaries, and the PACS online radiographic tool were used for data extraction. Results: Examination of the A&E domain revealed limited awareness of the recent BOAST 12 publication, including its requirements to document skin integrity and neurovascular assessment. This had direct implications, as it would have changed the surgical plan for acutely compromised patients. The majority of results obtained from the radiology domain were satisfactory, with appropriate X-rays taken in over 95% of cases. However, due to time pressures within A&E, patients were often left in a backslab without a post-manipulation X-ray. Poorly reduced fractures were consequently left for long periods, resulting in swollen ankles and a delay to surgical intervention. This had knock-on implications for prolonged inpatient stay, resulting in hospital-acquired co-morbidity including pressure sores. Discussion: The audit has highlighted several areas for improvement throughout the care trajectory, from review in the emergency department to outpatient follow-up.
This has prompted the creation of an algorithm to ensure that patients presenting to the emergency department with significant fractures are seen promptly and treatment is expedited as per the recent guidance, including the timing of X-rays taken in A&E. Re-audit has shown significant improvement both in documentation at the time of presentation and in appropriate follow-up strategies. Within the orthopaedic domain, we are in the process of creating an ankle fracture pathway to ensure imaging and weight-bearing status are made clear to the consulting clinicians in an outpatient setting. Significance/Clinical Relevance: As a result of the ankle fracture algorithm, we have adapted the BOAST 12 guidance into an intrinsic pathway that not only improves patient management within the emergency department but also creates a standardised format for follow-up.Keywords: ankle, fracture, BOAST, radiology
Procedia PDF Downloads 180299 Optimization of Artisanal Fishing Waste Fermentation for Volatile Fatty Acids Production
Authors: Luz Stella Cadavid-Rodriguez, Viviana E. Castro-Lopez
Abstract:
Fish waste (FW) has a high content of potentially biodegradable components, so it is amenable to anaerobic digestion. In this line, anaerobic digestion (AD) of FW has been studied for biogas production. Nevertheless, intermediate products such as volatile fatty acids (VFA), generated during the acidogenic stage, have been scarcely investigated, even though they have high potential as a renewable source of carbon. There are few studies in the literature on the effect of the inoculum-to-substrate (I/S) ratio on acidogenesis. On the other hand, it is well known that pH is a critical factor in VFA production. The optimum pH for VFA production appears to change depending on the substrate and can vary between 5.25 and 11. Nonetheless, the literature on VFA production from protein-rich waste, such as FW, is scarce. In this context, the optimal operating conditions of acidogenic fermentation for VFA production from protein-rich waste need to be investigated further. Therefore, the aim of this research was to optimize volatile fatty acid production from artisanal fishing waste by studying the effect of pH and the I/S ratio on the acidogenic process. The inoculum used was a methanogenic sludge (MS) obtained from a UASB reactor treating slaughterhouse wastewater, and the FW was collected in the port of Tumaco (Colombia) from local artisanal fishers. The acidogenic fermentation experiments were conducted in batch mode in 500 mL glass bottles used as anaerobic reactors, equipped with rubber stoppers fitted with a valve to release biogas. The effective volume was 300 mL. The experiments were carried out for 15 days at a mesophilic temperature of 37 ± 2 °C and constant agitation of 200 rpm. The effect of three pH levels (5, 7 and 9) combined with five I/S ratios (0.20, 0.15, 0.10, 0.05 and 0.00) was evaluated, taking VFA production as the response variable.
A randomized complete block design was selected for the experiments, in a 5x3 factorial arrangement with two repetitions per treatment. At the beginning of and during the process, the pH in the experimental reactors was adjusted to the corresponding values of 5, 7 and 9 using 1M NaOH or 1M H2SO4, as appropriate. In addition, once the optimum I/S ratio was determined, the process was evaluated at this condition without pH control. The results indicated that pH is the main factor in VFA production, with the highest concentration obtained at neutral pH. By reducing the I/S ratio to as low as 0.05, it was possible to maximize VFA production. Thus, the optimum conditions found were natural pH (6.6-7.7) and an I/S ratio of 0.05, with which it was possible to reach a maximum total VFA concentration of 70.3 g Ac/L, whose major components were acetic acid (35%) and butyric acid (32%). The findings showed that acidogenic fermentation of FW is an efficient way of producing VFA and that the operating conditions can be simple and economical.Keywords: acidogenesis, artisanal fishing waste, inoculum to substrate ratio, volatile fatty acids
Procedia PDF Downloads 126298 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction
Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj
Abstract:
Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, due to its ability to produce powders with small particle sizes that maintain a favorable bioactive profile. The aim of this study was to determine the particle size distributions of a proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C), in comparison to non-cooled grinding. Proso millet bran is primarily used as animal feed, but it has potential in food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry; for both applications, finer bran particle sizes could be beneficial. Thus, millet bran was ground for 2, 4, 8 and 12 minutes using a ball mill (CryoMill, Retsch GmbH, Haan, Germany) in three grinding modes: (I) without cooling, (II) at cryo-temperature, and (III) at cryo-temperature with an additional 1-minute intermediate cryo-cooling step after every 2 minutes of grinding, which is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm). The oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by laser diffraction (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible in the results. First, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span and specific surface area. Longer grinding times resulted in lower values of the above-listed parameters, e.g.
the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameter occurred (d(0.5)=62.85±2.20 µm), probably due to particles adhering to one another and forming larger clusters. Second, samples ground with LN₂ cooling exhibited smaller diameters than non-cooled samples. For example, after 8 minutes of non-cooled grinding d(0.5)=46.97±1.05 µm was achieved, while LN₂ cooling enabled the collection of particles with average sizes of d(0.5)=18.57±0.18 µm. Third, the application of the intermediate cryo-cooling step resulted in similar particle diameters (d(0.5)=15.83±0.36 µm, 12 min of grinding) to cryo-milling without this step (d(0.5)=16.33±2.09 µm, 12 min of grinding). This indicates that intermediate cooling is not necessary for the current application, which consequently reduces the consumption of LN₂. These results point to the potential beneficial effects of grinding millet bran at cryo-temperatures. Further research will show whether the lower particle sizes achieved, in comparison to non-cooled grinding, could result in increased bioavailability of bioactive compounds, as well as improved protein digestibility and solubility of the dietary fibers of the proso millet bran fraction.Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran
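For readers unfamiliar with the laser-diffraction parameters d(0.1)/d(0.5)/d(0.9), span, D[4,3] and D[3,2], the sketch below computes them from a synthetic volume-based size distribution; the values are illustrative, not the bran measurements:

```python
import numpy as np

# Synthetic volume-based size distribution over log-spaced classes (µm);
# illustrative values only, not the millet bran measurements.
sizes = np.logspace(0, 3, 50)                       # class mid-points, 1-1000 µm
vol = np.exp(-0.5 * ((np.log(sizes) - np.log(40)) / 0.6) ** 2)
vol /= vol.sum()                                    # volume fractions

# Percentile diameters from the cumulative volume curve.
cum = np.cumsum(vol)
d10, d50, d90 = np.interp([0.1, 0.5, 0.9], cum, sizes)

span = (d90 - d10) / d50                  # width of the distribution
d43 = np.sum(vol * sizes)                 # volume-moment mean diameter D[4,3]
d32 = 1.0 / np.sum(vol / sizes)           # surface-moment (Sauter) mean D[3,2]
print(f"d(0.5) = {d50:.1f} µm, span = {span:.2f}")
```

D[3,2] weights toward the fine end of the distribution and D[4,3] toward the coarse end, which is why laser-diffraction reports quote both alongside the percentile diameters.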
Procedia PDF Downloads 147