Search results for: skinfold measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2835

135 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet

Authors: Zeradam Yeshiwas, A. Krishnaia

Abstract:

The aim of this study is to verify the resulting surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 Drawing Quality Cold Rolled CR2 steel. The chemical composition of drawing quality Cold Rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, and 0.04 percent phosphorus, with the remainder iron and negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code generated in the Mastercam 2017 environment into three-dimensional motions by the linear incremental progress of the spindle. Blanks of Drawing Quality Cold Rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10, and 12 mm diameter were employed to fabricate sample parts. To investigate the surface roughness, hyperbolic-cone-shaped specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. A Taylor Hobson Surtronic 3+ profilometer was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ); in this instrument, a small stylus tip is dragged across the surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. 
For a Taguchi experimental design with three factors at three levels each, the standard L9 (3³) orthogonal array was selected using the array selection table. Since the lowest value of surface roughness is desired, the ''smaller-the-better'' equation was used to calculate the S/N ratio. The finishing roughness parameter Rₐ was measured for each combination of the control factors prescribed by the Taguchi experimental design; four roughness measurements were taken per component and averaged. The effect of each control factor on the surface roughness was analyzed with an S/N response table. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, a tool diameter of 12 mm, and a step depth of 0.5 mm. The ANOVA result shows that step depth is the dominant factor affecting surface roughness (91.1%).
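The ''smaller-the-better'' S/N ratio used in the analysis above can be sketched in a few lines; the Rₐ readings below are illustrative values, not data from the study:

```python
import math

def sn_smaller_the_better(measurements):
    """Taguchi signal-to-noise ratio for a 'smaller-the-better' response:
    S/N = -10 * log10( (1/n) * sum(y_i^2) ).
    A larger S/N means lower (better) surface roughness."""
    n = len(measurements)
    mean_sq = sum(y * y for y in measurements) / n
    return -10.0 * math.log10(mean_sq)

# Example: four Ra readings (in micrometres) averaged per component,
# mirroring the four measurements per part taken in the study
ra_readings = [1.2, 1.3, 1.1, 1.25]
print(round(sn_smaller_the_better(ra_readings), 3))
```

The control-factor level whose runs give the highest mean S/N is the optimum in the S/N response table.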

Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior

Procedia PDF Downloads 62
134 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to roots falls below 75% of maximum. The ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with both M and W. Over an agricultural farm of 1-5 ha, the spatial variability of ET0, which depends only on atmospheric conditions, is generally less than that of ET, which depends on both surface and atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI = ET/ET0) have been estimated over two contrasting western Indian agricultural farms: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from near-coincident LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range NWP weather forecasts. Farm-scale ET estimates at 65 m resolution showed a low RMSE of 16.6% to 17.5% with R² > 0.8 across 18 datasets, compared to reported errors (25-30%) for coarser-scale ET at 1 to 8 km resolution evaluated against in situ measurements from eddy covariance systems. The MAI showed lower (<0.25) and higher (>0.5) magnitudes over the two contrasting agricultural farms. 
The study demonstrated the potential of high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, for estimating farm-scale ET and MAI and, in turn, consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA's LSTM, and NASA's SBG space-borne missions to address sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise, fundamental surface energy balance variables such as LST (Land Surface Temperature), surface emissivity, albedo, and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments, and downstream applications to maximize the potential benefits.
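The MAI computation and the two regimes reported above (MAI < 0.25 over the arid grassland, MAI > 0.5 over the semi-arid rice-wheat farm) can be sketched as follows; the ET values are illustrative, not from the study:

```python
def moisture_adequacy_index(et, et0):
    """MAI = ET / ET0 (actual over reference evapotranspiration)."""
    return et / et0

def classify_mai(mai, low=0.25, high=0.5):
    """Thresholds follow the contrast reported in the abstract:
    below 0.25 (arid grassland) vs above 0.5 (semi-arid farm)."""
    if mai < low:
        return "moisture stressed"
    if mai > high:
        return "adequate moisture"
    return "intermediate"

# Hypothetical daily totals in mm: ET = 1.0, ET0 = 5.0 (arid-like case)
print(classify_mai(moisture_adequacy_index(1.0, 5.0)))  # -> moisture stressed
```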

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 70
133 Eosinophils and Platelets: Players of the Game in Morbid Obese Boys with Metabolic Syndrome

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Childhood obesity, which may lead to increased risk for heart diseases in children as well as adults, is one of the most important health problems throughout the world. The prevalences of morbid obesity and metabolic syndrome (MetS) are increasing in the childhood age group. MetS is a cluster of metabolic and vascular abnormalities including hypercoagulability and an increased risk of cardiovascular diseases (CVDs). There are also relations between some components of MetS and leukocytes. The aim of this study is to investigate complete blood count parameters that differ between morbidly obese boys and girls with a MetS diagnosis. A total of 117 morbidly obese children with MetS who presented to the Department of Pediatrics at the Faculty of Medicine Hospital, Namik Kemal University, were included in the study. The study population was classified by gender (60 girls and 57 boys). Heights and weights were measured and body mass index (BMI) values were calculated; WHO BMI-for-age and sex percentiles were used, with values above the 99th percentile defined as morbid obesity. Anthropometric measurements were performed, and waist-to-hip and head-to-neck ratios as well as the homeostatic model assessment of insulin resistance (HOMA-IR) were calculated. Components of MetS (central obesity, glucose intolerance, high blood pressure, high triacylglycerol levels, low levels of high-density lipoprotein cholesterol) were determined. Hematological variables were measured. Statistical analyses were performed using SPSS, with p ≤ 0.05 taken as statistically significant. There was no statistically significant difference between boys and girls in age (11.2±2.6 years vs 11.2±3.0 years) or BMI (28.6±5.2 kg/m² vs 29.3±5.2 kg/m²) (p ≥ 0.05). Significantly increased waist-to-hip ratios were obtained for boys (0.94±0.08 vs 0.91±0.06; p=0.023). 
Significantly elevated values of hemoglobin (13.55±0.98 vs 13.06±0.82; p=0.004), mean corpuscular hemoglobin concentration (33.79±0.91 vs 33.21±1.14; p=0.003), eosinophils (0.300±0.253 vs 0.196±0.197; p=0.014), and platelets (347.1±81.7 vs 319.0±65.9; p=0.042) were detected for boys. There was no statistically significant difference between the groups in terms of neutrophil/lymphocyte ratios or HOMA-IR values (p ≥ 0.05). Statistically significant gender-based differences were found for hemoglobin and mean corpuscular hemoglobin concentration; hence, separate reference intervals for the two genders should be considered for these parameters. Eosinophils may contribute to the development of thrombus in acute coronary syndrome and are also known to make an important contribution to mechanisms of thrombosis pathogenesis in acute myocardial infarction. Increased platelet activity is observed in patients with MetS, and these individuals are more susceptible to CVDs. In our study, the elevated platelet counts, described as dominant contributors to hypercoagulability, and the elevated eosinophil counts, suggested to be related to the development of CVDs, observed in boys may be early indicators of future cardiometabolic complications in this gender.
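The abstract reports HOMA-IR values but does not spell out the formula; a minimal sketch using the widely used convention in conventional units (fasting glucose in mg/dL times fasting insulin in µU/mL, divided by 405), with illustrative input values, is:

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostatic model assessment of insulin resistance.
    Standard convention (assumed here, not stated in the abstract):
    HOMA-IR = fasting glucose (mg/dL) * fasting insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Hypothetical values: fasting glucose 90 mg/dL, fasting insulin 10 uU/mL
print(round(homa_ir(90.0, 10.0), 2))  # -> 2.22
```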

Keywords: children, complete blood count, gender, metabolic syndrome

Procedia PDF Downloads 217
132 Characterization of Surface Microstructures on Bio-Based PLA Fabricated with Nano-Imprint Lithography

Authors: D. Bikiaris, M. Nerantzaki, I. Koliakou, A. Francone, N. Kehagias

Abstract:

In the present study, the formation of structures in poly(lactic acid) (PLA) has been investigated with respect to producing areas of regular, superficial features with dimensions comparable to those of cells or biological macromolecules. Nanoimprint lithography (NIL), a method of pattern replication in polymers, has been used for the production of features ranging from tens of micrometers, covering areas up to 1 cm², down to hundreds of nanometers. Both micro- and nano-structures were faithfully replicated. PLA has potentially wide uses within biomedical fields, from implantable medical devices, including screws and pins, to membrane applications such as wound covers, and even as an injectable polymer, for example for treating lipoatrophy. The possibility of fabricating structured PLA surfaces, with feature dimensions associated with cells or biological macromolecules, is of interest in fields such as cellular engineering. Imprint-based technologies have demonstrated the ability to selectively imprint polymer films over large areas, resulting in 3D imprints over flat, curved, or pre-patterned surfaces. Here, we compare non-patterned PLA films with PLA films nano-patterned by NIL. A silicon nanostructured stamp (provided by the Nanotypos company) having positive and negative protrusions was used to pattern PLA films by means of thermal NIL. The polymer film was heated to 40-60 °C above its Tg and embossed at a pressure of 60 bar for 3 min; the stamp and substrate were demolded at room temperature. Scanning electron microscope (SEM) images showed good replication fidelity of the Si stamp. Contact-angle measurements suggested that positive microstructuring of the polymer (where features protrude from the polymer surface) produced a more hydrophilic surface than negative microstructuring. The ability to structure the surface of poly(lactic acid), allied to the polymer's post-processing transparency and proven biocompatibility, makes it attractive for such applications. 
Films produced in this way were also shown to enhance the aligned attachment and proliferation of Wharton's Jelly mesenchymal stem cells, leading to the observed growth contact guidance. The attachment patterns of some bacteria highlighted that the nano-patterned PLA structure can reduce the propensity of bacteria to attach to the surface, with greater bactericidal activity demonstrated against Staphylococcus aureus cells. These biocompatible, micro- and nanopatterned PLA surfaces could be useful for polymer-cell interaction experiments at dimensions at, or below, that of individual cells. Indeed, post-fabrication modification of the microstructured PLA surface with materials such as collagen (which can further reduce the hydrophobicity of the surface) will extend the range of applications, possibly through the use of PLA's inherent biodegradability. Further study is being undertaken to examine whether these structures promote cell growth on the polymer surface.

Keywords: poly(lactic acid), nano-imprint lithography, anti-bacterial properties, PLA

Procedia PDF Downloads 330
131 Multiscale Modelization of Multilayered Bi-Dimensional Soils

Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur

Abstract:

Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter interacting with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes, with roughness characterized by statistical parameters such as the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function, and as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each having its own spatial scale. Multiscale roughness is characterized by two parameters: one proportional to the RMS height and the other related to the fractal dimension. Soil moisture enters through the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model. 
The upper layer is described by its dielectric constant, thickness, a multiscale bi-dimensional surface roughness model (using the wavelet transform and the Mallat algorithm), and volume scattering parameters. The lower layer is divided into three fictive layers separated by assumed plane interfaces. These three layers are modeled as an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We have adopted the 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on multiscale roughness and soil moisture has been performed. We then modified the dielectric constant of the multilayer medium to take into account the different moisture values of each soil layer. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil having a vegetation layer at its surface.
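The study applies a bi-dimensional wavelet transform via the Mallat algorithm; as a simplified one-dimensional illustration of the idea, the sketch below runs a Haar Mallat cascade over a toy height profile and reports the RMS of the detail coefficients at each scale, a proxy for the per-scale roughness amplitude (the profile values are invented):

```python
import math

def haar_step(signal):
    """One Mallat analysis step with the Haar wavelet: approximation
    and detail coefficients at the next coarser scale."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

def multiscale_rms(profile, levels):
    """RMS of the detail coefficients at each scale -- a per-scale
    roughness amplitude in the spirit of the multiscale model."""
    rms = []
    approx = list(profile)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        rms.append(math.sqrt(sum(d * d for d in detail) / len(detail)))
    return rms

# Toy 1D height profile (length must be divisible by 2**levels)
profile = [0.0, 1.0, 0.0, -1.0, 0.5, 1.5, 0.5, -0.5]
print([round(r, 3) for r in multiscale_rms(profile, 2)])
```

The 2D case in the paper repeats such steps along rows and columns of the surface height map.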

Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets

Procedia PDF Downloads 125
130 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals

Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen

Abstract:

Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the improvement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, grown here by the Bridgman method, carries the properties of the ternary thallium chalcogenide group of layered semiconductors. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, unintended defects such as point defects, dislocations, and stacking faults can occur during crystal growth. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices like LEDs and field effect transistors may act as non-radiative or scattering centers in electron transport. Quick recombination of holes with electrons, without any energy transfer between charge carriers, can also occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is greater than the band gap of the studied sample; charge carriers in the valence band are thereby excited into delocalized bands, and carriers excited into the conduction band become trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with opposite carriers at recombination centers, emitting luminescence from the sample. The emitted luminescence is converted into pulses by a computer-controlled experimental setup, and the TL spectrum is obtained. 
Defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to luminescence from trap centers revealed one glow peak with its maximum at 36 K. Curve fitting and various-heating-rates methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by the curve fitting method; this method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various-heating-rates analysis gave a result (13 meV) compatible with curve fitting once the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
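The various-heating-rates analysis mentioned above rests on the standard relation ln(Tm²/β) = Ea/(kB·Tm) + const, where β is the heating rate and Tm the glow-peak temperature; the sketch below fits that line to synthetic (β, Tm) pairs generated with Ea = 13 meV and recovers the activation energy (the data are fabricated for this check, not measured values from the paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(heating_rates, peak_temps):
    """Various-heating-rates method: least-squares fit of
    y = ln(Tm^2 / beta) against x = 1 / Tm; the slope equals Ea / k_B."""
    xs = [1.0 / tm for tm in peak_temps]
    ys = [math.log(tm * tm / b) for b, tm in zip(heating_rates, peak_temps)]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y, mx, my in
                 ((x, y, mean_x, mean_y) for x, y in zip(xs, ys)))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope * K_B

# Synthetic check: (beta, Tm) pairs generated from the model with Ea = 13 meV
ea_true = 0.013  # eV
peak_temps = [35.0, 35.5, 36.0, 36.5]  # K
heating_rates = [tm * tm / math.exp(ea_true / (K_B * tm) + 1.0)
                 for tm in peak_temps]
print(round(activation_energy(heating_rates, peak_temps) * 1000, 2))  # -> 13.0 (meV)
```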

Keywords: chalcogenides, defects, thermoluminescence, trap centers

Procedia PDF Downloads 282
129 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC sources; according to related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is limited in the literature. There are many reasons for this, from the difficulties associated with controlled burning conditions of the fuel, through the lack of the detailed supplementary proximate and ultimate chemical analysis needed to interpret the measured optical data, to the analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and makes investigation of the inherent optical properties possible as well. Most methodologies for spectral characterization of LAC are based either on transmission measurements of filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling. 
In the former the accuracy, and in the latter the sensitivity, limits the applicability of these approaches. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
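The wavelength-dependence exponents (AAE and SAE) deduced above follow from a power-law fit between wavelengths; a minimal two-wavelength sketch, with hypothetical absorption coefficients and wavelengths (not measured values from the study), is:

```python
import math

def angstrom_exponent(coef_1, coef_2, wl_1, wl_2):
    """Wavelength-dependence exponent for absorption (AAE) or scattering
    (SAE): coefficient ~ wavelength**(-exponent), hence
    exponent = -ln(c1/c2) / ln(wl1/wl2)."""
    return -math.log(coef_1 / coef_2) / math.log(wl_1 / wl_2)

# Hypothetical absorption coefficients (Mm^-1) at two wavelengths (nm)
b_abs_short, b_abs_long = 22.0, 15.0
print(round(angstrom_exponent(b_abs_short, b_abs_long, 532.0, 1064.0), 2))
```

A pure power law with coefficient proportional to 1/wavelength gives an exponent of exactly 1, the value commonly associated with small BC particles.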

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 361
128 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers

Authors: B. Neethu, Diptesh Das

Abstract:

The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems need to be tackled in order to design and develop controllers that will perform efficiently in such complex systems. A control algorithm that can accommodate uncertainty and imprecision, owing to its inherent robustness and ability to cope with parameter uncertainties, is the sliding mode algorithm. A sliding mode control algorithm is adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force, and the voltage controller commands the damper to produce that force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control that can effectively control the responses of the bridge under real earthquake ground motions. 
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations subjecting the bridge to real earthquake records. The performance of controllers depends, to a great extent, on the characteristics of the input ground motions; therefore, in order to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen so that all possible characteristic variations are accommodated: seven are near-field and seven are far-field, and they are further divided by frequency content into low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with those of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding-mode-based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
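The clipped-optimal voltage law can be sketched as below; this is one common form of the algorithm (maximum voltage whenever the measured damper force must grow toward the desired sliding-mode force, zero voltage otherwise), not necessarily the exact law used in the study:

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal command law: apply V_max when the desired force
    exceeds the measured damper force in the same direction, i.e. when
    (f_desired - f_measured) * f_measured > 0; otherwise command zero."""
    return v_max if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# Damper force lags the desired force in the same direction -> V_max
print(clipped_optimal_voltage(10.0, 6.0, 5.0))   # -> 5.0
# Desired force opposes the current damper force -> 0
print(clipped_optimal_voltage(-10.0, 6.0, 5.0))  # -> 0.0
```

Because the MR damper is dissipative, the voltage controller can only steer its force toward the sliding-mode command, never apply arbitrary active forces.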

Keywords: bridge, semi active control, sliding mode control, MR damper

Procedia PDF Downloads 124
127 Effectiveness of Imagery Compared with Exercise Training on Hip Abductor Strength and EMG Production in Healthy Adults

Authors: Majid Manawer Alenezi, Gavin Lawrence, Hans-Peter Kubis

Abstract:

Imagery training could be an important treatment for improving muscle function in patients whose exercise training is limited by pain or other adverse symptoms. However, recent studies are mostly limited to small muscle groups and are often contradictory. Moreover, a possible bilateral transfer effect of imagery training has not been examined. We therefore investigated the effectiveness of unilateral imagery training, in comparison with exercise training, on hip abductor muscle strength and EMG. Additionally, both limbs were assessed to investigate bilateral transfer effects. Healthy individuals took part in an imagery or exercise training intervention for two weeks and were assessed pre and post training. Participants (n=30) were randomized into an imagery and an exercise group and trained 5 times a week under supervision, with additional self-performed training on the weekends. The training consisted of performing, or imagining, 5 maximal isometric hip abductor contractions (= one set), repeating the set 7 times. All measurements and training were performed lying on the side on a dynamometer table. The imagery script combined kinesthetic and visual imagery with an internal perspective for producing imagined maximal hip abduction contractions; the exercise group performed the same number of tasks, but actually executing the maximal hip abductor contractions. Maximal hip abduction strength and EMG amplitudes were measured on the right and left limbs pre- and post-training. Additionally, handgrip strength and right shoulder abduction (strength and EMG) were measured. Using mixed-model ANOVA (strength measures) and Wilcoxon tests (EMGs), the data revealed a significant increase in hip abductor strength in the imagery group on the trained right limb (~6%), which was not observed in the exercise group. 
Additionally, the left hip abduction strength (not used for training) did not show a main effect of time; however, there was a significant interaction of group and time, revealing that strength increased in the imagery group while it remained constant in the exercise group. EMG recordings supported the strength findings, showing significant elevation of EMG amplitudes after imagery training on both the right and left sides, while the exercise training group did not show any changes. Moreover, measures of handgrip strength and shoulder abduction showed no effects over time and no interactions in either group. The experiments showed that imagery training is a suitable method for effectively increasing functional parameters (strength and EMG) of larger limb muscles, which were enhanced on both sides (trained and untrained), confirming a bilateral transfer effect. Exercise training, by contrast, did not reveal any increases in the parameters above, i.e., no functional improvements; the healthy individuals tested might not easily achieve benefits from exercise training within the time tested. However, it is evident that imagery training is effective in increasing the central motor command towards the muscles, and that the effect seems to be segmental (no increase in handgrip strength or shoulder abduction parameters) and affects both sides (trained and untrained). In conclusion, imagery training was effective in producing functional improvements in limb muscles and produced a bilateral transfer in strength and EMG measures.

Keywords: imagery, exercise, physiotherapy, motor imagery

Procedia PDF Downloads 234
126 Contrastive Analysis of Parameters Registered in Training Rowers and the Impact on the Olympic Performance

Authors: Gheorghe Braniste

Abstract:

The management of the training process in sports is closely related to awareness of the close connection between performance and the morphological, functional, and psychological characteristics of the athlete's body. Achieving high results in Olympic sports is influenced, on the one hand, by the genetically determined characteristics of the body and, on the other hand, by the morphological, functional, and motor abilities of the athlete. Taking into account the importance of properly understanding the evolutionary specificity of athletes in order to assess their competitive potential, this study provides a comparative analysis of the parameters that characterize the growth, development, and level of adaptation of sweep rowers over the age interval from 12 to 20 years. The study established that, during the multi-annual training process, the bodies of the targeted athletes register significant adaptive changes across morphological, functional, psychomotor, and sport-technical parameters. As a result of both specific and non-specific physical efforts, there is an increase in the adaptability of the body and its transfer to a much higher level of functionality, with useful and economical adaptive reactions influenced by internal and external environmental factors. The research was carried out over 7 years on a group of 28 athletes, following their evolution and recording the specific parameters of each age stage. To determine the level of physical, morpho-functional, and psychomotor development and the technical training of the rowers, screening was performed at the State University of Physical Education and Sports of the Republic of Moldova. 
During the research, measurements were made of standing and sitting height, arm span, weight, and chest circumference and perimeter, as well as the vital capacity of the lungs with subsequent determination of the vital index; tolerance to oxygen deficiency in venous blood was assessed with the Stange and Genchi breath-holding tests, which characterize the level of oxygen saturation. The absolute and relative strength of the hand and back were measured, body mass and morphological maturity indices (Quetelet index) and body surface area were calculated, and psychomotor tests were applied (Romberg test, 10 s tapping test, reaction to a moving object, and visual and auditory-motor reaction), together with recording of the technical parameters of rowing over a competitive distance of 200 m. At the end of the study, it was found that high performance in sport is associated, on the one hand, with the genetically determined characteristics of the body and, on the other hand, with favorable adaptive reactions and energy saving, as well as morphofunctional changes influenced by internal and external environmental factors. The results obtained were positively reflected in maximizing the level of training of the athletes in order to achieve performance in large-scale competitions, most notably the Olympic Games.
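As a rough illustration, the anthropometric indices named above follow simple standard formulas. The sketch below (with hypothetical sample values, not data from this study) shows the Quetelet index, the vital index, and a body surface area estimate; the Du Bois formula is used for the latter as one common choice, since the abstract does not specify which formula was applied.

```python
def quetelet_index(mass_kg: float, height_m: float) -> float:
    """Quetelet (body mass) index: mass divided by height squared, kg/m^2."""
    return mass_kg / height_m ** 2

def vital_index(vital_capacity_ml: float, mass_kg: float) -> float:
    """Vital index: lung vital capacity per kilogram of body mass, ml/kg."""
    return vital_capacity_ml / mass_kg

def body_surface_area(mass_kg: float, height_cm: float) -> float:
    """Du Bois formula for body surface area in m^2 (an assumption here)."""
    return 0.007184 * mass_kg ** 0.425 * height_cm ** 0.725

# Hypothetical rower: 80 kg, 1.90 m, 6.2 L vital capacity.
print(round(quetelet_index(80.0, 1.90), 1))   # kg/m^2
print(round(vital_index(6200.0, 80.0), 1))    # ml/kg
print(round(body_surface_area(80.0, 190.0), 2))  # m^2
```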

Keywords: olympics, parameters, performance, peak

Procedia PDF Downloads 123
125 Clinical Application of Measurement of Eyeball Movement for Diagnosis of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper presents the development of an objective index for diagnosing autism using measurements of subtle eyeball movement. Assessments of developmental disabilities vary, and the diagnosis depends on the subjective judgment of professionals; a supplementary inspection method that enables anyone to obtain the same quantitative judgment is therefore needed. In conventional autism studies, diagnoses are based on comparing the time spent gazing at an object, but the results are inconsistent. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the parts overlapping a reference based on an afterimage. We then developed an objective evaluation indicator that distinguishes autistic from non-autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his or her eyeballs stay fixed on that point, the eyes perform subtle fixational movements (i.e., tremors, drifts, microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to neural activity and reflect the mechanisms that process sight in the brain. We converted the differences between these movements into numbers. The conversion process is as follows: 1) Select the pixels indicating the subject's pupil from the captured frame images. 2) Set up a reference image, the afterimage, from the pixels indicating the subject's pupil. 3) Divide the subject's pupil into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels overlapping the present pixels based on the afterimage. 5) Process the images at 24-30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers.
The difference in the area of the amount of change is obtained by measuring the difference between the afterimage over consecutive frames and the present frame; we took this amount of change as the quantity of subtle eyeball movement. This method made it possible to express changes in eyeball vibration as numerical values. By comparing these values between the right and left eyes, we found a difference in how much they move. We compared this difference in movement between non-autistic and autistic people and analyzed the results. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations, converted the difference in subtle movements between the right and left eyes into a graph, and defined it as a multidimensional measure. We then set the identification border using the density function of the distribution, the cumulative frequency function, and an ROC curve. With this, we established an objective index that classifies subjects as autistic or non-autistic and quantifies false positives and false negatives.
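The five-step conversion above can be sketched as a small image-processing routine. The code below is a minimal illustration (not the authors' implementation): it splits a pupil region into four quadrants from the center and counts pixels that differ from the afterimage reference, using a toy NumPy array in place of a camera frame; the change threshold is an assumption.

```python
import numpy as np

def quadrant_change_counts(frame, afterimage, threshold=10):
    """Split the pupil region into four quadrants from the center and count
    pixels that differ from the reference afterimage in each quadrant."""
    diff = np.abs(frame.astype(int) - afterimage.astype(int)) > threshold
    h, w = diff.shape
    cy, cx = h // 2, w // 2
    quads = [diff[:cy, :cx], diff[:cy, cx:], diff[cy:, :cx], diff[cy:, cx:]]
    return [int(q.sum()) for q in quads]

# Toy 4x4 "pupil": one pixel changed in the top-left quadrant only.
ref = np.zeros((4, 4), dtype=np.uint8)
cur = ref.copy()
cur[0, 0] = 255
print(quadrant_change_counts(cur, ref))  # [1, 0, 0, 0]
```

Per-quadrant counts like these, accumulated frame by frame for each eye, give the left/right asymmetry values the abstract compares.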

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 278
124 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG manufacturers have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool for analyzing the combustion and pollutant formation processes. Moreover, to optimize burner operating conditions with regard to NOₓ emissions, field characterization and measurements are usually performed. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operating schedule constraints. The application of CFD is therefore well suited to providing guidance on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial software ANSYS Fluent and the open-source software OpenFOAM.
The RANS (Reynolds-Averaged Navier-Stokes) equations, closed with the k-epsilon turbulence model and combined with the Eddy Dissipation Concept combustion model, are solved. A mesh sensitivity analysis is performed to verify that the solution is independent of the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling; flame temperatures and chemical composition are used as reference fields for this validation. Results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
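A mesh sensitivity analysis of the kind mentioned is often quantified with Richardson extrapolation between two grid levels. The sketch below illustrates that idea with hypothetical peak-flame-temperature values on a coarse and a fine mesh; the refinement ratio, observed order, and numbers are assumptions, as the abstract does not state which metric or method was used.

```python
def richardson_extrapolate(f_fine: float, f_coarse: float, r: float, p: float) -> float:
    """Estimate the grid-independent value from solutions on two meshes with
    refinement ratio r and observed order of accuracy p."""
    return f_fine + (f_fine - f_coarse) / (r ** p - 1)

# Hypothetical peak flame temperatures (K) on coarse and fine meshes,
# with refinement ratio 2 and second-order accuracy assumed.
t_exact = richardson_extrapolate(1850.0, 1838.0, r=2.0, p=2.0)
print(t_exact)  # 1854.0
```

The gap between the fine-mesh value and the extrapolated one (here 4 K) is the kind of discretization-error estimate used to declare a solution mesh-independent.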

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 113
123 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. Quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining chloride content. As the chloride content is expressed relative to the mass of binder, the analysis involves determining both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of a structure. Examples of the application of the method in the laboratory for investigating the diffusion and migration of chlorides, sulfates, and alkalis are presented, along with an example of the visualization of Li transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes.
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different transport processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer, and a portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site; the results obtained were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for determining chloride concentration in concrete.
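The chloride ingress profiles that LIBS resolves at 1 mm depth intervals are commonly modelled with the error-function solution of Fick's second law, C(x, t) = Cs · erfc(x / (2·√(D·t))). The sketch below is a generic illustration of that model at millimetre resolution; the surface concentration, diffusion coefficient, and exposure time are hypothetical, not values from this study.

```python
import math

def chloride_profile(x_mm: float, cs: float, d_mm2_per_yr: float, t_yr: float) -> float:
    """Fick's second law solution for chloride ingress from a surface:
    C(x, t) = Cs * erfc(x / (2 * sqrt(D * t)))."""
    return cs * math.erfc(x_mm / (2.0 * math.sqrt(d_mm2_per_yr * t_yr)))

# A LIBS-like profile: chloride content (% by binder mass, hypothetical)
# at every millimetre of depth down to 20 mm after 10 years of exposure.
profile = [chloride_profile(x, cs=1.2, d_mm2_per_yr=25.0, t_yr=10.0) for x in range(21)]
print(round(profile[0], 2))  # 1.2 at the surface, decaying with depth
```

Fitting measured LIBS points to this curve yields the apparent diffusion coefficient D used as a model input, which is where the finer depth resolution pays off.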

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 105
122 Inclusion Body Refolding at High Concentration for Large-Scale Applications

Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening

Abstract:

High-level expression of proteins in bacteria often causes production of insoluble protein aggregates, called inclusion bodies (IBs). They contain mainly one type of protein and offer an easy and efficient way to obtain purified protein. On the other hand, proteins in IBs are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein, and for large-scale applications, a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The strain was cultivated in a fed-batch bioreactor, and the FTF produced was obtained mainly as IBs. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded with batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis; particle size was measured by dynamic light scattering. We tested the solubilization properties of the fructosyltransferase IBs: the particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, as confirmed by absorption spectroscopy. All results confirm previous investigations showing that refolding yields depend on the initial protein concentration. As the initial concentration rose from 0.2 to 8.5 g/L, the yields dropped from 67% to 12% in batch dialysis and from 72% to 19% in continuous dialysis. Frequently used additives such as sucrose and glycerol had no effect on refolding yields.
Buffer screening indicated a significant increase in the activity, and also the temperature stability, of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in the batch and 90-74% in the continuous process. Further experiments showed that, in general, higher ionic strength of the buffers had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved correspondingly high refolding yields while reducing the chamber volume, and thus the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5 ± 0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed reaggregation of the molecules and loss of specific activity depending on enzyme concentration and particle size. For that reason, we are currently focusing on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase in ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins.
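The refolding yield defined above (soluble protein concentration after dialysis relative to before) reduces to a one-line calculation. The sketch below uses a hypothetical pair of measurements chosen to be consistent with the reported range; the numbers are illustrative, not data from the study.

```python
def refolding_yield(protein_before_g_l: float, protein_after_g_l: float) -> float:
    """Refolding yield (%) as the fraction of soluble protein retained
    in the clear supernatant after dialysis."""
    return 100.0 * protein_after_g_l / protein_before_g_l

# Hypothetical batch-dialysis run at the lowest initial concentration:
# 0.2 g/L before dialysis, 0.134 g/L remaining soluble afterwards.
print(refolding_yield(0.2, 0.134))  # ~67 %
```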

Keywords: dialysis, inclusion body, refolding, solubilization

Procedia PDF Downloads 294
121 The Use of Vasopressin in the Management of Severe Traumatic Brain Injury: A Narrative Review

Authors: Nicole Selvi Hill, Archchana Radhakrishnan

Abstract:

Introduction: Traumatic brain injury (TBI) is a leading cause of mortality among trauma patients. In the management of TBI, the main principle is avoiding cerebral ischemia, as this is a strong determinant of neurological outcome. Vasoactive drugs, such as vasopressin, play an important role in maintaining cerebral perfusion pressure to prevent secondary brain injury. Current guidelines do not suggest a preferred vasoactive drug for the management of TBI, and there is a paucity of information on the therapeutic potential of vasopressin following TBI. Vasopressin (AVP) is also an endogenous antidiuretic hormone, and AVP-mediated pathways play a large role in the underlying pathological processes of TBI, creating an overlap of discussion regarding its therapeutic potential. Currently, its popularity lies in vasodilatory and cardiogenic shock in the intensive care setting, with increasing support for its use in haemorrhagic and septic shock. Methodology: This is a narrative review based on a literature search. An electronic search was conducted via PubMed, Cochrane, EMBASE, and Google Scholar to identify clinical studies of the therapeutic administration of vasopressin in severe traumatic brain injury. The primary aim was to examine the neurological outcome of patients; the secondary aim was to examine surrogate markers of cerebral perfusion, such as cerebral perfusion pressure, cerebral oxygenation, and cerebral blood flow. Results: Eight papers were included in the final review: three animal studies and five human studies, the latter comprising three case reports, one retrospective review of data, and one randomised controlled trial. All animal studies demonstrated benefits of vasopressors in TBI management.
One animal study showed the superiority of vasopressin in reducing intracranial pressure and increasing cerebral oxygenation over a catecholaminergic vasopressor, phenylephrine. All three human case reports supported vasopressin as a rescue therapy in catecholamine-resistant hypotension. The retrospective review found that vasopressin did not increase cerebral oedema in TBI patients compared to catecholaminergic vasopressors and demonstrated a significant reduction in the requirement for hyperosmolar therapy in patients who received vasopressin. The randomised controlled trial showed no significant differences in primary and secondary outcomes between TBI patients receiving vasopressin and those receiving catecholaminergic vasopressors. Apart from the randomised controlled trial, the studies included are of low-level evidence. Conclusion: Studies favour vasopressin within certain parameters of cerebral function compared to control groups. However, the neurological outcomes of the patient groups are not known, and animal study results are difficult to extrapolate to humans. Given the weaknesses of the evidence, it cannot be said with certainty whether vasopressin's benefits exceed those of other vasoactive drugs. Larger, standardised, and rigorous randomised controlled trials are required to improve knowledge in this field.

Keywords: catecholamines, cerebral perfusion pressure, traumatic brain injury, vasopressin, vasopressors

Procedia PDF Downloads 67
120 Photoluminescence of Barium and Lithium Silicate Glasses and Glass Ceramics Doped with Rare Earth Ions

Authors: Augustas Vaitkevicius, Mikhail Korjik, Eugene Tretyak, Ekaterina Trusova, Gintautas Tamulaitis

Abstract:

Silicate materials are widely used as luminescent materials in amorphous and crystalline phase. Lithium silicate glass is popular for making neutron sensitive scintillation glasses. Cerium-doped single crystalline silicates of rare earth elements and yttrium have been demonstrated to be good scintillation materials. Due to their high thermal and photo-stability, silicate glass ceramics are supposed to be suitable materials for producing light converters for high power white light emitting diodes. In this report, the influence of glass composition and crystallization on photoluminescence (PL) of different silicate glasses was studied. Barium (BaO-2SiO₂) and lithium (Li₂O-2SiO₂) glasses were under study. Cerium, dysprosium, erbium and europium ions as well as their combinations were used for doping. The influence of crystallization was studied after transforming the doped glasses into glass ceramics by heat treatment in the temperature range of 550-850 degrees Celsius for 1 hour. The study was carried out by comparing the photoluminescence (PL) spectra, spatial distributions of PL parameters and quantum efficiency in the samples under study. The PL spectra and spatial distributions of their parameters were obtained by using confocal PL microscopy. A WITec Alpha300 S confocal microscope coupled with an air cooled CCD camera was used. A CW laser diode emitting at 405 nm was exploited for excitation. The spatial resolution was in sub-micrometer domain in plane and ~1 micrometer perpendicularly to the sample surface. An integrating sphere with a xenon lamp coupled with a monochromator was used to measure the external quantum efficiency. All measurements were performed at room temperature. Chromatic properties of the light emission from the glasses and glass ceramics have been evaluated. We observed that the quantum efficiency of the glass ceramics is higher than that of the corresponding glass. 
The investigation of spatial distributions of PL parameters revealed that heat treatment of the glasses leads to a decrease in sample homogeneity. In the case of BaO-2SiO₂:Eu, 10-micrometer-long needle-like objects are formed when transforming the glass into glass ceramics. The comparison of PL spectra from within and outside the needle-like structures reveals that the ratio between the intensities of the PL bands associated with Eu²⁺ and Eu³⁺ ions is larger in the bright needle-like structures, indicating a higher degree of crystallinity in the needle-like objects. We observed that the spectral positions of the PL bands are the same in the background and the needle-like areas, indicating that heat treatment imposes no significant change on the valence state of the europium ions. The evaluation of chromatic properties confirms the applicability of the glasses under study for the fabrication of white light sources with high thermal stability. The ability to combine barium and lithium glass matrices and doping with Eu, Ce, Dy, and Tb enables optimization of the chromatic properties.
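The Eu²⁺/Eu³⁺ band-intensity ratio used to compare the needle-like and background regions can be sketched as a simple integration over two spectral windows. The toy spectrum below is purely illustrative, and the band limits are assumptions (a broad blue-green Eu²⁺ band versus red Eu³⁺ lines); the study's actual band positions are not given in the abstract.

```python
import numpy as np

def band_ratio(wavelengths, intensities, band_a, band_b):
    """Sum PL intensity over two wavelength windows (nm) and return the ratio,
    e.g. a Eu2+ broad band versus a Eu3+ line group."""
    w = np.asarray(wavelengths)
    i = np.asarray(intensities, dtype=float)
    area_a = i[(w >= band_a[0]) & (w <= band_a[1])].sum()
    area_b = i[(w >= band_b[0]) & (w <= band_b[1])].sum()
    return area_a / area_b

# Toy spectrum: a flat "Eu2+" band twice as intense as a flat "Eu3+" band.
w = np.arange(400, 701)
i = np.where((w >= 440) & (w <= 520), 2.0, 0.0) + np.where((w >= 600) & (w <= 680), 1.0, 0.0)
print(band_ratio(w, i, (440, 520), (600, 680)))  # 2.0
```

Evaluating this ratio per pixel of a confocal PL map gives the kind of crystallinity contrast described for the needle-like objects.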

Keywords: glass ceramics, luminescence, phosphor, silicate

Procedia PDF Downloads 317
119 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to determine emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing analyses on the per-square-meter property values in each city block. These results, which integrate knowledge from multiple disciplines and culminate in a master's thesis, are expected to be presented and published at the upcoming conference.
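A hedonic price model of the kind described is, at its core, a regression of property value on environmental attributes. The sketch below fits a toy log-price model against a PM2.5 exposure variable using synthetic data; the specification, coefficient values, and variables are assumptions for illustration, since the abstract does not report the study's actual model.

```python
import numpy as np

# Synthetic hedonic data: log price per m^2 falls with local PM2.5 exposure.
rng = np.random.default_rng(0)
pm25 = rng.uniform(10, 40, size=200)                  # ug/m^3, hypothetical
log_price = 8.0 - 0.01 * pm25 + rng.normal(0, 0.02, size=200)

# Ordinary least squares: log_price = b0 + b1 * pm25.
X = np.column_stack([np.ones_like(pm25), pm25])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(beta)  # intercept near 8.0, slope near -0.01
```

The fitted slope is the implicit (marginal) price of air quality; in the real study, block-level property values and emission exposure would take the place of the synthetic arrays.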

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 58
118 Thermal Properties and Water Vapor Permeability for Cellulose-Based Materials

Authors: Stanislavs Gendelis, Maris Sinka, Andris Jakovics

Abstract:

Insulation materials made from natural sources have become more popular with the ecologisation of buildings, leading to wide use of such renewable materials. These natural materials replace synthetic products whose manufacture consumes a large amount of energy. The most common and cheapest natural materials in Latvia are cellulose-based (wood and agricultural plants). The ecological aspects of such materials are well known, but experimental data on their physical properties remain lacking. In this study, six different samples of wood wool panels and a mixture of hemp shives and lime (hempcrete) are analysed. Thermal conductivity and heat capacity measurements were carried out for the wood wool and cement panels using a calibrated hot plate device; water vapor permeability was tested for the hempcrete using the gravimetric dry cup method. The wood wool panels studied are an eco-friendly and harmless material widely used in the interior design of public and residential buildings where noise absorption and sound insulation are of importance; they are also suitable for high-humidity facilities (e.g., swimming pools). The panels differed in the width of the wood wool used, which is linked to their density. The measured thermal conductivities span a wide range and worsen with increasing wool width (0.066 W/(m·K) for the least dense sample, 0.091 W/(m·K) for the densest). These values are 2-3 times higher than those of mineral insulation materials and comparable to plywood and fibreboard. The measured heat capacity lay in a narrower range; here, the dependence on the wool width was not as strong, because heat capacity relates to mass, not volume. The resulting heat capacity is a combination of two main components.
A comparison of the results for the different panels allows the most suitable sample to be selected for a specific application, because the thermal insulation and heat capacity properties do not depend on the wool width in the same way. Hempcrete is a much denser material than conventional thermal insulation materials; its use therefore helps to reinforce the structural capacity of the constructional framework while remaining lightweight. By altering the proportions of the ingredients, hempcrete can be produced as a structural, thermal, or moisture-absorbent component. Water absorption and water vapor permeability are the most important properties of these materials; information about absorption can be found in the literature, but data on water vapor transmission properties are missing. Water vapor permeability was therefore tested for a sample of locally made hempcrete at different air humidity values to evaluate any possible difference. The results show only a slight influence of air humidity on the water vapor permeability. The measured absolute sd value is similar to that of mineral wool and wood fiberboard, meaning that, due to the very low resistance, water vapor passes easily through the material, while the structural and thermal properties of hempcrete are entirely different. As a result, experimentally based knowledge of the thermal and water vapor transmission properties of cellulose-based materials was significantly improved.
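The two quantities measured here follow standard relations: the steady-state hot-plate method gives thermal conductivity as λ = Q·d / (A·ΔT), and vapor diffusion resistance is summarized by the sd value (μ times thickness). The sample numbers below are hypothetical, chosen so the conductivity matches the 0.066 W/(m·K) figure quoted for the least dense panel; the plate area, heat flow, and μ value are assumptions.

```python
def thermal_conductivity(q_w: float, thickness_m: float, area_m2: float, delta_t_k: float) -> float:
    """Steady-state hot-plate relation: lambda = Q * d / (A * dT), in W/(m K)."""
    return q_w * thickness_m / (area_m2 * delta_t_k)

def sd_value(mu: float, thickness_m: float) -> float:
    """Equivalent air layer thickness for vapor diffusion: sd = mu * d, in metres."""
    return mu * thickness_m

# Hypothetical hot-plate reading for a 50 mm wood wool panel:
# 3.3 W through a 0.25 m^2 plate at a 10 K temperature difference.
print(round(thermal_conductivity(q_w=3.3, thickness_m=0.05, area_m2=0.25, delta_t_k=10.0), 3))  # 0.066

# Hypothetical hempcrete sample: mu = 3 (low resistance), 50 mm thick.
print(round(sd_value(3.0, 0.05), 2))  # 0.15 m
```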

Keywords: heat capacity, hemp concrete, thermal conductivity, water vapor transmission, wood wool

Procedia PDF Downloads 221
117 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitization and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made such devices bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity: backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect, and the nonlinearity of the same cavity allows solitonic frequency comb generation. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si₃N₄ microresonators. We obtained robust 1400-1700 nm comb generation with 150 GHz or 1 THz line spacing and measured sub-1 kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser.
A deep investigation of the SIL dynamics allowed us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as the pump. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, whereas the non-SIL diode laser only allowed 160 nm/s with good accuracy. The results obtained agree with the estimations and open the way to LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
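The link between spectral resolution and minimum detectable speed is the Doppler relation v = λ·f_d/2. The sketch below shows that resolving a Doppler shift of roughly 0.02 Hz at 1550 nm corresponds to the ~16 nm/s figure quoted; both numbers are illustrative assumptions (the 1550 nm wavelength simply sits within the stated 1400-1700 nm comb span).

```python
def min_detectable_velocity(wavelength_m: float, freq_resolution_hz: float) -> float:
    """Doppler relation v = lambda * f_d / 2: the smallest resolvable speed
    is set by the smallest resolvable Doppler shift."""
    return wavelength_m * freq_resolution_hz / 2.0

# A ~0.02 Hz resolvable Doppler shift at 1550 nm gives ~16 nm/s.
v = min_detectable_velocity(1550e-9, 0.02)
print(v)  # ~1.55e-8 m/s
```

A tenfold broader effective linewidth scales the same relation to the ~160 nm/s reported for the unlocked diode laser.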

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 73
116 Archaeoseismological Evidence for a Possible Destructive Earthquake in the 7th Century AD at the Ancient Sites of Bulla Regia and Chemtou (NW Tunisia): Seismotectonic and Structural Implications

Authors: Abdelkader Soumaya, Noureddine Ben Ayed, Ali Kadri, Said Maouche, Hayet Khayati Ammar, Ahmed Braham

Abstract:

The historic sites of Bulla Regia and Chemtou are among the most important archaeological monuments in northwestern Tunisia; they flourished as large, wealthy settlements during the Roman and Byzantine periods (2nd to 7th centuries AD). This archaeoseismological study provides the first indications of the role a possible strong ancient earthquake played in the destruction of these cities. Based on previous archaeological excavation results, including numismatic evidence, pottery, economic decline, and urban transformation, the abrupt ruin and destruction of the cities of Bulla Regia and Chemtou can be bracketed between 613 and 647 AD. In this study, we made the first attempt to analyze the earthquake archaeological effects (EAEs) observed during our field investigations in these two historic cities. The damage includes different types of EAEs: folds on regular pavements, displaced and deformed vaults, folded walls, tilted walls, collapsed keystones in arches, dipping broken corners, displaced and fallen columns, block extrusions in walls, penetrative fractures in brick-made walls, and open fractures on regular pavements. These deformations are spread over 10 different sectors or buildings and comprise 56 measured EAEs. The structural analysis of the identified EAEs points to an ancient destructive earthquake that probably destroyed the Bulla Regia and Chemtou archaeological sites. We then analyzed these measurements using structural geological analysis to obtain the maximum horizontal strain of the ground (Sₕₘₐₓ) for each oriented building damage. After collecting and analyzing these strain datasets, we plotted the orientation of the Sₕₘₐₓ trajectories on the map of the archaeological site (Bulla Regia).
We concluded that the Sₕₘₐₓ trajectories obtained within this site could be related to the mean direction of ground motion (oscillatory movement of the ground) triggered by a seismic event, as documented for some historical earthquakes across the world. These Sₕₘₐₓ orientations closely match the current active stress field, as highlighted by some instrumental events in northern Tunisia. In terms of the seismic source, we strongly suggest that the reactivation of a neotectonic strike-slip fault trending N50E is responsible for this probable historic earthquake and for the recent instrumental seismicity in this area. This fault segment, which affects the folded Quaternary deposits south of Jebel Rebia, passes through the monument of Bulla Regia. Stress inversion of the data observed and measured along this fault shows an N150-160 trend of Sₕₘₐₓ under a transpressional tectonic regime, which is quite consistent with the GPS data and the current stress field in this region.
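Because Sₕₘₐₓ orientations are axial data (an azimuth and that azimuth plus 180° describe the same trajectory), averaging measured directions across buildings is usually done with the double-angle method of circular statistics. A minimal sketch of that averaging step (illustrative only; the authors' full stress-inversion procedure is more involved than a simple mean):

```python
import math

def mean_axial_orientation(azimuths_deg):
    """Mean of axial orientations (0-180 deg), e.g. building-damage Shmax
    azimuths: double each angle, average the unit vectors, halve the result."""
    s = sum(math.sin(math.radians(2.0 * a)) for a in azimuths_deg)
    c = sum(math.cos(math.radians(2.0 * a)) for a in azimuths_deg)
    return (math.degrees(math.atan2(s, c)) / 2.0) % 180.0
```

For example, averaging damage azimuths of 150° and 160° yields 155°, within the N150-160 trend reported above; a naive arithmetic mean would fail for azimuths straddling 0°/180°.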

Keywords: NW Tunisia, archaeoseismology, earthquake archaeological effect, Bulla Regia-Chemtou, seismotectonic, neotectonic fault

Procedia PDF Downloads 49
115 An Indispensable Parameter in Lipid Ratios to Discriminate between Morbid Obesity and Metabolic Syndrome in Children: High Density Lipoprotein Cholesterol

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity is a low-grade inflammatory disease and may lead to health problems such as hypertension, dyslipidemia, and diabetes. It is also associated with important risk factors for cardiovascular diseases. This requires the detailed evaluation of obesity, particularly in children. The aim of this study is to elucidate the potential associations between lipid ratios and obesity indices and to introduce those with discriminating features among children with obesity and metabolic syndrome (MetS). A total of 408 children (aged between six and eighteen years) participated in the study. Informed consent forms were obtained from the participants and their parents, and Ethical Committee approval was obtained. Anthropometric measurements such as weight and height, as well as waist, hip, head, and neck circumferences and body fat mass, were taken. Systolic and diastolic blood pressure values were recorded. Body mass index (BMI), diagnostic obesity notation model assessment index-II (D2 index), and waist-to-hip and head-to-neck ratios were calculated. Total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) analyses were performed on blood samples drawn from 110 children with normal body weight, 164 morbid obese (MO) children, and 134 children with MetS. Age- and sex-adjusted BMI percentiles tabulated by the World Health Organization were used to classify the groups: normal body weight, MO, and MetS. The 15th-to-85th percentiles were used to define normal body weight children. Children whose values were above the 99th percentile were described as MO. MetS criteria were defined. Data were evaluated statistically by SPSS Version 20. The degree of statistical significance was accepted as p≤0.05. Mean±standard deviation values of BMI for normal body weight children, MO children, and those with MetS were 15.7±1.1, 27.1±3.8, and 29.1±5.3 kg/m², respectively.
Corresponding values for the D2 index were calculated as 3.4±0.9, 14.3±4.9, and 16.4±6.7. Both BMI and the D2 index were capable of discriminating the groups from one another (p≤0.01). As far as the other obesity indices were considered, waist-to-hip and head-to-neck ratios did not exhibit any statistically significant difference between the MO and MetS groups (p≥0.05). The D2 index was correlated with the triglycerides-to-HDL-C ratio in the normal body weight and MO groups (r=0.413, p≤0.01 and r=0.261, p≤0.05, respectively). Total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios showed statistically significant differences between normal body weight and MO as well as between MO and MetS (p≤0.05). The only group in which these two ratios were significantly correlated with the waist-to-hip ratio was the MetS group (r=0.332 and r=0.334, p≤0.01, respectively). The lack of correlation between the D2 index and the triglycerides-to-HDL-C ratio was another important finding in the MetS group. In this study, parameters and ratios previously associated with increased cardiovascular risk or cardiac death were evaluated along with obesity indices in children with morbid obesity and MetS, and their profiles during childhood were investigated. Aside from the nature of the correlation between the D2 index and the triglycerides-to-HDL-C ratio, the total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios, along with their correlations with the waist-to-hip ratio, showed that a combination of obesity-related parameters predicts better than a single parameter and appears to be helpful for discriminating MO children from the MetS group.
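The ratios examined in the study are simple quotients of routinely measured lipid panel values. A minimal sketch of how they, and BMI, are computed (illustrative only; the D2 index formula is not reproduced here, and the example values are invented):

```python
def lipid_ratios(total_chol, triglycerides, hdl_c, ldl_c):
    """Lipid ratios used in the study; all inputs in the same units (e.g. mg/dL)."""
    return {
        "TG/HDL-C": triglycerides / hdl_c,
        "TC/HDL-C": total_chol / hdl_c,
        "LDL-C/HDL-C": ldl_c / hdl_c,
    }

def bmi(weight_kg, height_m):
    """Body mass index, kg/m^2."""
    return weight_kg / height_m ** 2
```

For instance, a child with total cholesterol 180, triglycerides 90, HDL-C 45, and LDL-C 100 mg/dL has a TG/HDL-C ratio of 2.0 and a TC/HDL-C ratio of 4.0.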

Keywords: children, lipid ratios, metabolic syndrome, obesity indices

Procedia PDF Downloads 159
114 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques such as Common Spatial Patterns (CSP) are then used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. CSP effectiveness depends on the subject's discriminative frequency band, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses this vector to train a global SVM classifier.
Initially, the public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than that of the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR filters to 84.2% using the FFT. The accuracy improvement above 10% and the computational cost reduction demonstrate the potential of the FFT for EEG signal filtering in the context of MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
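The core of the proposal, replacing one IIR band-pass filter per sub-band with a single FFT followed by a split of the coefficient matrix, can be sketched as follows. This is a simplified reconstruction: the 250 Hz sampling rate and a plain non-overlapping split are assumptions, since the paper's exact sub-band boundaries are not given here.

```python
import numpy as np

def fft_subbands(eeg, fs=250.0, band=(0.0, 40.0), n_subbands=33):
    """Decompose multichannel EEG (n_channels, n_samples) into sub-bands of
    FFT coefficients covering `band`, instead of running one IIR filter per
    sub-band. Returns a list of complex coefficient matrices, one per sub-band."""
    coeffs = np.fft.rfft(eeg, axis=-1)                   # one FFT per channel
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)   # bin frequencies (Hz)
    kept = coeffs[..., (freqs >= band[0]) & (freqs < band[1])]
    return np.array_split(kept, n_subbands, axis=-1)     # non-overlapping split

# In the full pipeline, each sub-band matrix would feed a CSP filter and an
# LDA classifier, whose scores a Bayesian meta-classifier passes to an SVM.
```

Discarding the bins above 40 Hz is what makes the FFT representation more compact than the raw signal while keeping the class-relevant rhythms.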

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 128
113 Alternative Energy and Carbon Source for Biosurfactant Production

Authors: Akram Abi, Mohammad Hossein Sarrafzadeh

Abstract:

Because of their several advantages over chemical surfactants, biosurfactants have attracted growing interest in the past decades. Advantages such as lower toxicity, higher biodegradability, higher selectivity, and applicability at extreme temperatures and pH enable them to be used in a variety of applications, such as enhanced oil recovery and environmental and pharmaceutical applications. Bacillus subtilis produces a cyclic lipopeptide, called surfactin, which is one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities: inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, antitumor, and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. The cost can be reduced by optimizing biosurfactant production using cheap feedstock. Utilization of inexpensive substrates and unconventional carbon sources such as urban or agro-industrial wastes is a promising strategy to decrease the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. To this end, in this work we utilized olive oil as a second carbon source and yeast extract as a second nitrogen source to investigate their effect on improving both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of the oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration.
Therefore, cell dry weight measurements, with the necessary steps taken to remove oil drops and prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. The surface tension and critical micelle dilutions (CMD-1, CMD-2) were used as indirect measurements of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than twofold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture time (only 5 h after inoculation). This early onset of biosurfactant production is especially interesting when compared to conventional cultures, in which this reduction in surface tension is not obtained until 30 hours of culture time. Reducing the production time is a very prominent result for large-scale process development. Furthermore, these results can be used to develop strategies for utilizing agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high costs of biosurfactant production.

Keywords: agro-industrial waste, Bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin

Procedia PDF Downloads 301
112 New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium

Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte

Abstract:

Plasma electrolytic oxidation (PEO) is a particular electrochemical process used to produce protective oxide ceramic coatings on lightweight metals (Al, Mg, Ti). When applied to aluminum alloys, the resulting PEO coatings exhibit improved wear and corrosion resistance because thick, hard, compact, and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process, and one particular way consists of tuning the electrical regime. Despite considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make metal oxidation possible through ceramic layers up to hundreds of µm thick. A key feature of the PEO process is the numerous short-lived micro-discharges (micro-plasmas in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, the origin of this phenomenon is still unclear and requires more systematic investigation. The aim of the present communication is to identify the relationship between this delay and the mechanisms responsible for oxide growth. For this purpose, the delay of micro-discharge ignition is investigated as a function of various electrical parameters, such as the current density (J), the current pulse frequency (F), and the anodic-to-cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminum alloy substrates in a solution containing potassium hydroxide (KOH) and sodium silicate diluted in deionized water.
The light emitted from the micro-discharges was detected by a photomultiplier, and the micro-discharge parameters (number, size, lifetime) were measured during the process by means of ultra-fast video imaging (125 kframes/s). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings, while XRD was carried out to evaluate the amount of the corundum α-Al₂O₃ phase. Results show that, regardless of the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm²), the current pulse frequency F (Hz), and the charge quantity ratio R are high. It also appears that shorter delays are associated with stronger micro-discharges (localized, long, and large micro-discharges), which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of these results, a model for the growth of PEO oxide layers will be presented and discussed. The experimental results support a mechanism of electrical charge accumulation at the oxide surface/electrolyte interface that proceeds until dielectric breakdown occurs and micro-discharges appear.

Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation

Procedia PDF Downloads 264
111 Investigation of the Trunk Inclination Positioning Angle on Swallowing and Respiratory Function

Authors: Hsin-Yi Kathy Cheng, Yan-Ying JU, Wann-Yun Shieh, Chin-Man Wang

Abstract:

Although the coordination of swallowing and respiration has been discussed widely, the influence of the positioning angle on swallowing and respiration during feeding has rarely been investigated. This study aimed to investigate the timing and coordination of swallowing and respiration at different seat inclination angles, with liquid and bolus, in order to provide suggestions and guidelines for the design and development of a feedback-controlled seat angle adjustment device for back-adjustable wheelchairs. Twenty-six participants aged 15-30 years without any signs of swallowing difficulty were included. The combination of seat inclinations and food types was randomly assigned, with three repetitions of each combination. The trunk inclination angle was adjusted using a commercial positioning wheelchair. A total of 36 swallows were performed, with at least 30 seconds of rest between each swallow. We used a self-developed wearable device to measure the submandibular muscle surface EMG, the movement of the thyroid cartilage, and the respiratory status of the nasal cavity. Our program automatically analyzed the onsets and offsets of the durations, the excursion and strength of the thyroid cartilage movement, and the coordination between breathing and swallowing. Variables measured include the EMG duration (DsEMG), swallowing apnea duration (SAD), total excursion time (TET), duration of the 2nd deflection, FSR amplitude, onset latency, DsEMG onset, DsEMG offset, FSR onset, and FSR offset. These measurements were made at four seat inclination angles (5°, 15°, 30°, 45°) and with three food contents (1 ml water, 10 ml water, and a 5 ml pudding bolus) for each subject. The data collected for the different contents were compared. Descriptive statistics were used to describe the basic features of the data. Repeated-measures ANOVAs were used to analyze the differences in the dependent variables across the different seat inclination and food content combinations.
The results indicated significant differences across seat inclinations, mostly between 5° and 45°, in all variables except FSR amplitude. They also indicated significant differences across food contents in almost all variables. A significant interaction between seat inclination and food content was only found for FSR offset. In summary, the current results indicate that it is easier for a subject to swallow leaning backward than sitting upright, and that swallowing water is easier than swallowing pudding. The same protocol will be applied to elderly participants and participants with physical disabilities. The results of this study can serve as clinical guidance for proper feeding positions (such as wheelchair back angle adjustment) with different food contents, and the ergonomic data provide references for assistive technology professionals and practitioners in device design and development.
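The repeated-measures ANOVA used here partitions the variance of each timing variable across within-subject conditions, removing between-subject variability from the error term. A minimal one-way sketch of the F statistic (illustrative only; the full study crosses seat inclination with food content in a two-way design):

```python
import numpy as np

def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.
    `data` has shape (n_subjects, n_conditions), e.g. one swallowing-timing
    variable measured on each subject at each of the seat inclinations."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # subject effect
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    return (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))
```

Removing the subject sum of squares from the error term is what gives the repeated-measures design its power over an independent-groups ANOVA.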

Keywords: swallowing, positioning, assistive device, disability

Procedia PDF Downloads 72
110 Association between Physical Inactivity and Sedentary Behaviours with Risk of Hypertension among Sedentary Occupation Workers: A Cross-Sectional Study

Authors: Hanan Badr, Fahad Manee, Rao Shashidhar, Omar Bayoumy

Abstract:

Introduction: Hypertension is the major risk factor for cardiovascular diseases and stroke and a leading global cause of disability-adjusted life years and mortality. Adopting an unhealthy lifestyle is thought to be associated with developing hypertension regardless of predisposing genetic factors. This study aimed to examine the association of recreational physical activity (RPA) and sedentary behaviors with the risk of hypertension among ministry employees, for whom occupational physical activity (PA) plays no role, and to scrutinize participants' time spent in RPA and sedentary behaviors on working days and weekend days. Methods: A cross-sectional study was conducted among 2562 randomly selected employees working at ten randomly selected ministries in Kuwait. To obtain a representative sample, the proportional allocation technique was used to define the number of participants in each ministry. A self-administered questionnaire was used to collect data about participants' socio-demographic characteristics, health status, and 24-hour time use during a regular working day and a weekend day. The time use covered a list of 20 different activities practiced daily. The New Zealand Physical Activity Questionnaire-Short Form (NZPAQ-SF) was used to assess the level of RPA. The scale generates three categories according to the number of hours spent in RPA per week: relatively inactive, relatively active, and highly active. Gender-matched trained nurses performed anthropometric measurements (weight and height) and measured blood pressure (two readings) using an automatic blood pressure monitor (95% accuracy relative to a calibrated mercury sphygmomanometer). Results: Participants' mean age was 35.3±8.4 years, with almost equal gender distribution. About 13% of the participants were smokers, and 75% were overweight. Almost 10% reported doctor-diagnosed hypertension.
Among those who did not, the mean systolic blood pressure was 119.9±14.2 mmHg and the mean diastolic blood pressure was 80.9±7.3 mmHg. Moreover, 73.9% of participants were relatively physically inactive, and 18% were highly active. Mean systolic and diastolic blood pressure showed a significant inverse association with the level of RPA (mean blood pressure measures were 123.3/82.8 among the relatively inactive, 119.7/80.4 among the relatively active, and 116.6/79.6 among the highly active). Furthermore, RPA occupied 1.6% and 1.8% of working and weekend days, respectively, while sedentary behaviors (watching TV, using electronics for social media or entertainment, etc.) occupied 11.2% and 13.1%, respectively. Sedentary behaviors were significantly associated with high levels of systolic and diastolic blood pressure. Binary logistic regression revealed that physical inactivity (OR=3.13, 95% CI: 2.25-4.35) and sedentary behaviors (OR=2.25, 95% CI: 1.45-3.17) were independent risk factors for high systolic and diastolic blood pressure after adjustment for other covariates. Conclusions: Physical inactivity and a sedentary lifestyle were associated with a high risk of hypertension. Further research examining the independent role of RPA in improving blood pressure levels, as well as the cultural and occupational barriers to practicing RPA, is recommended. Policies promoting PA in the workplace should be enacted, which might help decrease the risk of hypertension among sedentary occupation workers.
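The odds ratios reported above come from a multivariable logistic model; the unadjusted analogue of such an estimate, with its Wald confidence interval, reduces to a 2×2-table computation. A minimal sketch with made-up counts (the study's covariate-adjusted ORs cannot be reproduced from the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)
```

For example, 30 hypertensive among 100 inactive versus 10 among 100 active participants gives an unadjusted OR of about 3.86; adjustment for covariates, as in the study, requires the full regression model.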

Keywords: physical activity, sedentary behaviors, hypertension, workplace

Procedia PDF Downloads 178
109 Assessing the Social Comfort of the Russian Population with Big Data

Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro

Abstract:

The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provide enormous opportunities for understanding human interactions at the scale of society, with rich spatial and temporal dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, which yields the weighting coefficients of the block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, and ‘infrastructure’. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which is higher than the rate of economic growth by 2.7 p.p.
The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and are mainly related to the blocks ‘security’, ‘political stability’, and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% were positive and mostly related to the blocks ‘ethical norms’, ‘education’, and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, the introduction of the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population by studying the involvement of the individual in society and by expanding the subjective aspect of the measurement of various indicators. An integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for practical implementation.
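Step 2 of the methodology (10-point rescaling, detrending, and deseasoning of each monthly query series) can be sketched as follows. This is an illustrative reconstruction, not the authors' script; the peak-elimination rule is omitted, and a 12-month seasonal profile is assumed:

```python
import numpy as np

def preprocess_query_series(y, period=12):
    """Rescale a monthly Google Trends series to a 10-point scale, remove a
    linear trend, then remove the mean seasonal profile (period in months)."""
    y = np.asarray(y, dtype=float)
    y10 = 10.0 * (y - y.min()) / (y.max() - y.min())      # 10-point scale
    t = np.arange(len(y10))
    slope, intercept = np.polyfit(t, y10, 1)
    detrended = y10 - (slope * t + intercept)             # remove linear trend
    season = np.array([detrended[m::period].mean() for m in range(period)])
    return detrended - season[t % period]                 # remove seasonality
```

The cleaned series for all 574 queries would then be aggregated into block indicators, whose weights come from the first modified principal component.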

Keywords: big data, Google trends, integral indicator, social comfort

Procedia PDF Downloads 200
108 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths). Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, and their application in practice.
To address these questions, a survey was conducted among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is given to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, the degree to which a simulation result influences decision making in a hazard assessment was tested. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 326
107 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions

Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak

Abstract:

Climate control represents an important issue with regard to the energy consumption of buildings and the associated expenses, both during installation and operation. The climate control of a building relies on several factors, among them location, orientation, architectural elements, and the sources of energy used. In order to study the thermal behaviour of a building setup, the present study proposes the use of the energy simulation program EnergyPlus. In recent years, energy simulation programs have become important tools for the evaluation of the thermal and energy performance of buildings and facilities. In addition, the need for new forms of passive conditioning in buildings is critical for energy saving. The use of phase change materials (PCMs) for heat storage applications has grown in importance due to their high efficiency. The climatic conditions of northern Chile, i.e., high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (Calama city), and a low number of cloudy days during the year, are appropriate for taking advantage of solar energy and using passive systems in buildings. Moreover, the extensive mining activities in northern Chile encourage the use of large numbers of containers to house workers during shifts. These containers are built with lightweight construction systems and require heating at night and cooling during the day, increasing HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5 × 2.5 × 2.5 m, located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción.
Lightweight envelopes, typically used in these building prototypes, were evaluated by comparing a container without PCM as the reference building against a container with PCM-enhanced envelopes as the test case. Both containers have a door and a window in the same wall, oriented in one of two directions: north or south. To capture the thermal response of the containers across seasons, the simulations covered a period of one year. For all four cities studied, the results show that higher energy savings are obtained when the wall holding the door and window faces north, owing to the higher incidence of solar radiation. The HVAC consumption and percentage energy savings for the north-facing configuration are summarised. The simulation results show that in Antofagasta 47% of the heating energy could be saved, while in Calama and Concepción the largest savings are in cooling, since the PCM eliminates almost all of the cooling demand. Based on the simulation results, four containers with the same structural characteristics used in the simulations have been constructed: containers with and without PCM, each with a door and window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama; all of them will be monitored for one year, and the simulation results will be validated against the experimental measurements in future work.
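The percentage-savings figures above follow from a straightforward comparison of annual HVAC consumption between the reference container and the PCM-enhanced one. A minimal sketch of that calculation, where the kWh values are illustrative placeholders and not the study's data:

```python
# Hedged sketch: percentage heating/cooling savings from annual HVAC
# consumption of a reference container vs. a PCM-enhanced container.
# The kWh figures below are illustrative placeholders, NOT study data.

def percent_saving(reference_kwh: float, pcm_kwh: float) -> float:
    """Energy saved by the PCM case, as a percentage of the reference."""
    return 100.0 * (reference_kwh - pcm_kwh) / reference_kwh

# Hypothetical annual heating consumption (kWh) for the two cases:
heating = {"reference": 1200.0, "pcm": 636.0}
saving = percent_saving(heating["reference"], heating["pcm"])
print(f"Heating energy saved: {saving:.0f}%")  # 47% for these numbers
```

The same comparison applies per city and per demand type (heating or cooling) once the simulated annual consumptions are available.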

Keywords: energy saving, lightweight construction, PCM, simulation

Procedia PDF Downloads 286
106 Global Supply Chain Tuning: Role of National Culture

Authors: Aleksandr S. Demin, Anastasiia V. Ivanova

Abstract:

Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a business is still led by people with their own sets of values and priorities. This article aims to incorporate the peculiarities of national culture into the characteristics of the supply chain, using the quantitative measures of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research is based on secondary cross-country comparison data collected by Prof. Hofstede and by the GLOBE project. These data are used to design different aspects of the supply chain at both the cross-functional and inter-organizational levels. The connection between a range of general principles (role assignment, customer service prioritization, coordination of supply chain partners) and principles of comparative management (acknowledging the national peculiarities of the country in which the company operates) is demonstrated through economic and mathematical models, mainly linear programming models. Findings: Combining the team management wheel concept, the business processes of the global supply chain, and national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first relies exclusively on the customer's interests in the place of operation, while the second also takes into account the position of the transnational corporation and its previous experience, in order to reconcile organizational and national cultures.
The effect of integration practices on the achievement of a specific supply chain goal in a specific location is assessed via the type of correlation (positive, negative, or none) and the values of the national culture indices. Research Limitations: The models developed are intended for transnational companies and business firms operating across several culturally distinct regions. Some of the inputs used to illustrate the methods are simulated, so the numerical results should be interpreted with caution. Practical Implications: The research can be of great interest to supply chain managers responsible for engineering global supply chains in a transnational corporation and for conducting business in the international arena. The methods, tools, and approaches suggested can also serve top managers searching for new sources of competitiveness, and may be useful to any staff members interested in national culture traits. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.
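As a rough illustration of the kind of optimization described (the abstract does not give the actual linear programming formulation), the following sketch assigns supply chain roles to candidates so that the total mismatch between each candidate's culture-index profile and a hypothetical per-role target is minimized. All names, indices, and targets are invented for illustration only:

```python
# Hedged illustration: assign supply chain roles to team members so that
# the total mismatch with hypothetical culture-index targets is minimal.
# All candidates, roles, and index values are invented, not from the paper.
from itertools import permutations

# Hypothetical Hofstede-style indices (0-100) per candidate:
candidates = {
    "A": {"power_distance": 80, "uncertainty_avoidance": 30},
    "B": {"power_distance": 40, "uncertainty_avoidance": 70},
    "C": {"power_distance": 60, "uncertainty_avoidance": 50},
}
# Hypothetical target profile favoured for each role in the host country:
roles = {
    "coordinator":    {"power_distance": 75, "uncertainty_avoidance": 35},
    "customer_lead":  {"power_distance": 45, "uncertainty_avoidance": 65},
    "logistics_lead": {"power_distance": 55, "uncertainty_avoidance": 55},
}

def mismatch(profile, target):
    """Sum of absolute index gaps between a candidate and a role target."""
    return sum(abs(profile[k] - target[k]) for k in target)

def best_assignment(candidates, roles):
    """Brute-force the role assignment with minimal total mismatch."""
    role_names = list(roles)
    best = None
    for order in permutations(candidates):
        cost = sum(mismatch(candidates[c], roles[r])
                   for c, r in zip(order, role_names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(role_names, order)))
    return best

cost, assignment = best_assignment(candidates, roles)
print(assignment, cost)
```

A production model of the kind the article describes would replace the brute-force search with a linear programming assignment formulation and add cost and service-level terms alongside the cultural-fit term.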

Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management

Procedia PDF Downloads 106