Search results for: discriminant function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5091

291 Pioneering Conservation of Aquatic Ecosystems under Australian Law

Authors: Gina M. Newton

Abstract:

Australia's Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., akin to ecosystems) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: (1) the contribution of invasive species to conservation status; (2) how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and (3) identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State levels remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing. This was mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least in the immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome from this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged – the recognition that while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.

Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species

Procedia PDF Downloads 133
290 Heavy Metals in the Water of Lakes in the 'Bory Tucholskie' National Park of Biosphere Reserve

Authors: Krzysztof Gwozdzinski, Janusz Mazur

Abstract:

Bory Tucholskie (Tucholskie Forest) is one of the largest pine forest complexes in Poland. It occupies approx. 3,000 square kilometers of outwash plain (sandur) in the Brda and Wda basins and the Tuchola and Charzykowskie Plains. In 2010 it became the Bory Tucholskie Biosphere Reserve by decision of UNESCO. The Bory Tucholskie National Park (BTNP) was designated in 1996. There is little data on the presence of heavy metals in the Park's lakes. The concentrations of heavy metals in the water of 19 lakes in the BTNP were examined. The lakes were divided into two groups: subglacial channel lakes of Struga Siedmiu Jezior (the Seven Lakes Stream) and other lakes. Heavy metals (transition metals) belong to the d-block of elements. Some of these metals play an important role in the function of living organisms as metalloproteins (enzymes, hemoproteins, vitamins, etc.); however, heavy metals are also typical anthropogenic pollutants. Water samples were collected at the deepest points of the lakes during spring and summer stagnation. The analysis of metals was performed with a Varian Spectra A300/400 atomic absorption spectrophotometer using an electrothermal atomizer (GTA 96) with a graphite cuvette. In the waters of the Seven Lakes Stream (Ostrowite, Zielone, Jelen, Belczak, Glowka, Plesno, Skrzynka, Mielnica), an increase in the concentrations of manganese and iron from the outflow to the inflow of Charzykowskie Lake was found, while the concentrations of copper (approx. 4 μg dm⁻³) and cadmium (< 0.5 μg dm⁻³) were similar in all lakes. The concentration of lead varied within 2.1-3.6 μg dm⁻³. The concentration of nickel was approx. 3-fold higher in Ostrowite Lake than in the other lakes of the Struga. In turn, the waters of the lakes Ostrowite, Jelen, and Belczak were rich in zinc. The lowest level of heavy metals was observed in Zielone Lake. In the second group of lakes, i.e., Krzywce Wielkie and Krzywce Male, the heavy metal concentrations were lower than in the waters of the Struga but higher than in the oligotrophic lakes, i.e., Nierybno, Gluche, Kociol, Gacno Wielkie, Gacno Male, Dlugie, Zabionek, and Sosnowek. The concentration of cadmium was below 0.5 μg dm⁻³ in all the studied lakes from this group. In the group of oligotrophic lakes, the highest concentrations of metals such as manganese, iron, zinc, and nickel were observed in Gacno Male and Gacno Wielkie. A high level of manganese was found in Sosnowek and Gacno Wielkie lakes. The lead level was also high in Nierybno Lake, and the nickel level in Gacno Wielkie Lake. Lower levels of heavy metals were found in the oligotrophic lakes Kociol, Dlugie, and Zabionek and in the α-mesotrophic lake Krzywce Wielkie. Generally, the level of heavy metals in the studied lakes situated in the Bory Tucholskie National Park was lower than in other lakes of the Bory Tucholskie Biosphere Reserve.

Keywords: Bory Tucholskie Biosphere Reserve, Bory Tucholskie National Park, heavy metals, lakes

Procedia PDF Downloads 123
289 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been demonstrated in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust, and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, a variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
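
As background for the parametrization named above, the following is a minimal Python sketch of the van Genuchten retention curve and the Mualem conductivity used in Richards-equation models; the parameter values are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) from the van Genuchten retention model.
    h: pressure head [m] (negative in the unsaturated zone), alpha: [1/m]."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * Se

def mualem_K(h, Ks, alpha, n):
    """Unsaturated hydraulic conductivity K(h) from the Mualem model."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative (assumed) parameters: theta_r, theta_s, alpha, n, Ks
h = np.linspace(-10.0, 0.0, 101)                 # pressure head [m]
theta = van_genuchten_theta(h, 0.05, 0.40, 3.6, 1.56)
K = mualem_K(h, 1e-4, 3.6, 1.56)                 # Ks in m/s
```

The predominant sensitivity to α, n, and the saturated conductivity reported above is visible in such a sketch: small changes in α or n reshape both curves over the entire pressure-head range.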

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 164
288 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids

Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh

Abstract:

The most well-known structures of lyotropic liquid crystalline systems are the two-dimensional hexagonal phase of cylindrical micelles, with a positive interfacial curvature, and the lamellar phase of flat bilayers, with zero interfacial curvature. In aqueous solutions of surfactants, concentration-dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentration, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature are still unexplored steps on the route of the phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important as it determines the complex rheological behavior of the system, which is one of the main interests of the soft matter industry. Sodium dodecyl sulfate (SDS) is an anionic surfactant and can be considered a unique system for tuning the electrostatics with cationic additives. In the present study, imidazolium-based ionic liquids (ILs) with different numbers of carbon atoms in their single hydrocarbon chain were used as additives in aqueous solutions of SDS. At a fixed concentration of total non-aqueous components (SDS and IL), the molar ratio of these components was changed, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is observed to change and, correspondingly, the structure of the hexagonal liquid crystalline phase is transformed into other phases. Polarizing optical microscopy of the SDS and imidazolium-based IL systems exhibited different textures of the liquid crystalline phases as a function of increasing IL concentration. Small angle synchrotron x-ray diffraction (SAXD) indicated that the hexagonal phase of direct cylindrical micelles transforms into a rectangular phase in the presence of a short-chain (two carbon atoms) IL, whereas the hexagonal phase is transformed into a lamellar phase in the presence of a long-chain (ten carbon atoms) IL. Interestingly, in the presence of a medium-chain (four carbon atoms) IL, the hexagonal phase is transformed into another hexagonal phase of direct cylindrical micelles through the lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small angle x-ray diffraction study revealed the lattice parameters of these phases to be similar to each other, their rheological behavior was distinctly different. These rheological studies have shed light on how these phases differ in their viscoelastic behavior. Finally, the packing parameters, calculated for these phases based on the geometry of the aggregates, explain the formation of the self-assembled aggregates.
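
As a minimal illustration of how such SAXD patterns are indexed, the sketch below checks the characteristic 1 : √3 : 2 Bragg peak sequence of a two-dimensional hexagonal phase and computes its lattice parameter from the first peak position; the peak values used are hypothetical, not the measured data.

```python
import numpy as np

def hexagonal_lattice_parameter(q10):
    """Lattice parameter a of a 2D hexagonal phase from the first
    Bragg peak position q10: a = 4*pi / (sqrt(3) * q10)."""
    return 4.0 * np.pi / (np.sqrt(3.0) * q10)

def is_hexagonal(q_peaks, tol=0.02):
    """Check whether peak positions follow the 1 : sqrt(3) : 2 : sqrt(7)
    sequence expected for a 2D hexagonal lattice."""
    ratios = np.asarray(q_peaks) / q_peaks[0]
    expected = np.sqrt(np.array([1.0, 3.0, 4.0, 7.0]))[: len(ratios)]
    return np.allclose(ratios, expected, rtol=tol)

# Hypothetical peak positions in 1/Angstrom (not the measured data)
q = [0.154, 0.267, 0.308]
print(is_hexagonal(q))                      # True for a hexagonal phase
print(hexagonal_lattice_parameter(q[0]))    # a in Angstrom, ~47 A
```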

Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction

Procedia PDF Downloads 140
287 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that such energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J.C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy as the sum of two energies: the first term, due to symmetrical compression, does not change, while the second term corresponds to distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain; Maxwell repeatedly wrote that it is not stress but strain. Furthermore, he noted that the nature of the energy causing the distortion was unknown to him. An article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture. The authors of these theories and their associates are still trying to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown region of electromagnetic radiation. The properties of photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions and are caused by the transition of electrons in an atom. Photons are released during all processes in the universe, including by plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays, yet they are not invasive. This apparent contradiction arises because the chaotic motion of protons is accompanied by the chaotic radiation of photons in time and space. Such photons are not coherent. The energy of a solitary photon is insufficient to break the bond between atoms, one of the stages of which is ionization. Photographs registered the deformation of a rail by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of a structure is the ratio of the energy dissipation rate to the energy accumulation rate, not the strength, which is not a physical parameter since it cannot be measured or calculated. The possibility of continuous control of this ratio is due to the spontaneous emission of photons by metastable atoms. The article presents example calculations of the energy of destruction, and photographs attributed to the action of photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 92
286 Therapeutic Potential of GSTM2-2 C-Terminal Domain and Its Mutants, F157A and Y160A on the Treatment of Cardiac Arrhythmias: Effect on Ca2+ Transients in Neonatal Ventricular Cardiomyocytes

Authors: R. P. Hewawasam, A. F. Dulhunty

Abstract:

The ryanodine receptor (RyR) is an intracellular ion channel that releases Ca2+ from the sarcoplasmic reticulum and is essential for excitation-contraction coupling and contraction in striated muscle. Human muscle-specific glutathione transferase M2-2 (GSTM2-2) is a highly specific inhibitor of cardiac ryanodine receptor (RyR2) activity. Single channel-lipid bilayer studies and Ca2+ release assays performed using the C-terminal half of GSTM2-2 and its mutants F157A and Y160A confirmed the ability of the C-terminal domain of GSTM2-2 to specifically inhibit cardiac ryanodine receptor activity. The objective of the present study is to determine the effect of the C-terminal domain of GSTM2-2 (GSTM2-2C) and the mutants F157A and Y160A on the Ca2+ transients of neonatal ventricular cardiomyocytes. Primary cardiomyocytes were cultured from neonatal rats. They were treated with GSTM2-2C and the two mutants F157A and Y160A at 15 µM and incubated for 2 hours. The cells were then loaded with Fluo-4 AM, a fluorescent Ca2+ indicator, and the field-stimulated (1 Hz, 3 V, 2 ms) cells were excited using the 488 nm argon laser. Contractility of the cells was measured, and the Ca2+ transients in the stained cells were imaged using a Leica SP5 confocal microscope. Peak amplitude of the Ca2+ transient, rise time, and decay time from the peak were measured for each transient. In contrast to GSTM2-2C, which significantly reduced the % shortening (42.8%) in the field-stimulated cells, F157A and Y160A failed to reduce the % shortening. Analysis revealed that the average amplitude of the Ca2+ transient was significantly reduced (P<0.001) in cells treated with the wild-type GSTM2-2C compared to that of untreated cells. Cells treated with the mutants F157A and Y160A did not change the Ca2+ transient significantly compared to the control. A significant increase in the rise time (P<0.001) and a significant reduction in the decay time (P<0.001) were observed in cardiomyocytes treated with GSTM2-2C compared to the control, but not with F157A and Y160A. These results are consistent with the observation that GSTM2-2C reduced Ca2+ release from the cardiac SR significantly, whereas the mutants F157A and Y160A did not show any effect compared to the control. GSTM2-2C has an isoform-specific effect on cardiac ryanodine receptor activity, and it inhibits RyR2 channel activity only during diastole. Selective inhibition of RyR2 by GSTM2-2C has significant clinical potential in the treatment of cardiac arrhythmias and heart failure. Since the GSTM2-2 C-terminal construct has no GST enzyme activity, its introduction into the cardiomyocyte would not exert unwanted side effects arising from enzymatic action. The present study further confirms that GSTM2-2C is capable of decreasing Ca2+ release from the cardiac SR during diastole. These results raise the future possibility of using GSTM2-2C as a template for therapeutics that can depress RyR2 function when the channel is hyperactive in cardiac arrhythmias and heart failure.
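
For the transient metrics named above (peak amplitude, rise time, decay time from the peak), the following is a minimal sketch of how they could be extracted from a baseline-subtracted fluorescence trace; the 10-90% rise and 50% decay conventions are assumptions for illustration, not necessarily the definitions used in the study.

```python
import numpy as np

def transient_metrics(t, f):
    """Peak amplitude, 10-90% rise time, and 50% decay time of a single
    Ca2+ transient. t: time [s], f: baseline-subtracted fluorescence."""
    i_peak = int(np.argmax(f))
    amp = f[i_peak]
    # rise: first crossings of 10% and 90% of the peak before the peak
    rise = f[: i_peak + 1]
    t10 = t[np.argmax(rise >= 0.1 * amp)]
    t90 = t[np.argmax(rise >= 0.9 * amp)]
    # decay: first time after the peak the signal falls to 50% of the peak
    decay = f[i_peak:]
    t50 = t[i_peak + np.argmax(decay <= 0.5 * amp)]
    return amp, t90 - t10, t50 - t[i_peak]
```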

Keywords: arrhythmia, cardiac muscle, cardiac ryanodine receptor, GSTM2-2

Procedia PDF Downloads 284
285 The Different Effects of Mindfulness-Based Relapse Prevention Group Therapy on QEEG Measures in Various Severity Substance Use Disorder Involuntary Clients

Authors: Yu-Chi Liao, Nai-Wen Guo, Chun-Hung Lee, Yung-Chin Lu, Cheng-Hung Ko

Abstract:

Objective: The incidence of behavioral addictions, especially substance use disorders (SUDs), is gradually being taken more seriously, along with the various physical health problems involved. Mindfulness-based relapse prevention (MBRP) has emerged in recent years as a treatment option for promoting long-term health behavior change. MBRP is a structured protocol that integrates formal meditation practices with the cognitive-behavioral approach of relapse prevention treatment by teaching participants not to engage in reappraisal or savoring techniques. However, considering SUDs as a complex brain disease, questionnaires and symptom evaluation are not sufficient to evaluate the effect of MBRP. Neurophysiological biomarkers such as the quantitative electroencephalogram (QEEG) may represent the curative effects more accurately. This study attempted to find neurophysiological indicators of MBRP in involuntary clients with varying SUD severity. Participants and Methods: Thirteen participants (all males) completed 8 weeks of mindfulness-based treatment provided by trained, licensed clinical psychologists. The behavioral data were from the Severity of Dependence Scale (SDS) and the Negative Mood Regulation Scale (NMR) before and after MBRP treatment. The QEEG data were recorded simultaneously with executive attention tasks, the Comprehensive Nonverbal Attention Test (CNAT). Two-way repeated-measures (treatment * severity) ANOVA and independent t-tests were used for statistical analysis. Results: The thirteen participants were regrouped into high substance dependence (HS) and low substance dependence (LS) groups by the SDS cut-off. The HS group showed a higher SDS total score and a lower gamma wave in the Go/No-Go task of the CNAT at pretest. Both groups showed a main effect of a lower frontal theta/beta ratio (TBR) during the simple reaction time task of the CNAT. A main effect also showed that delay errors on the CNAT were lower after MBRP. There was no other difference in the CNAT between groups. However, after MBRP, the HS group showed greater progress than the LS group in improving SDS and NMR scores. As a neurophysiological index, the frontal TBR of the HS group during the Go/No-Go task of the CNAT decreased relative to that of the LS group. The LS group, in turn, showed a significant reduction of the gamma wave on the Go/No-Go task of the CNAT. Conclusion: The QEEG data support that MBRP can restore the prefrontal function of involuntary addicted clients and lower their errors in executive attention tasks. However, the improvement from MBRP for addicts with high addiction severity is significantly greater than for those with low severity, on both the QEEG indicators and negative emotion regulation. Future directions include investigating the reasons for the differences in efficacy across addiction severities.
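
The frontal TBR reported above is a simple spectral power ratio. The sketch below computes it from a single frontal EEG channel with Welch power spectra; the band edges (theta 4-7 Hz, beta 13-30 Hz) are common conventions assumed here, not parameters taken from the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

def theta_beta_ratio(x, fs):
    """Theta (4-7 Hz) over beta (13-30 Hz) power for one frontal channel."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    return band_power(freqs, psd, 4, 7) / band_power(freqs, psd, 13, 30)

# Hypothetical usage: 60 s of a frontal channel sampled at 256 Hz
fs = 256
x = np.random.randn(60 * fs)   # placeholder for real EEG data
print(theta_beta_ratio(x, fs))
```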

Keywords: mindfulness, involuntary clients, QEEG, emotion regulation

Procedia PDF Downloads 147
284 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact

Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze

Abstract:

Using fabrics to reinforce composites considerably improves their mechanical properties, including resistance to impact loads and the energy absorption of the composite. Warp-knitted spacer fabrics (WKSF) consist of two layers of warp-knitted fabric connected by pile yarns. These connections create a space between the layers, filled by the pile yarns, and give the fabric a three-dimensional shape. Today, because of their unique properties, spacer fabrics are widely used in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF has much better properties than PU foam in moisture transfer and compressive behavior, with lower heat resistance. It seems that warp-knitted spacer fabric reinforced PU foam (WKSFRF) can yield a composite with better energy absorption than the foam alone, enhanced moldability, and improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally. The contribution of each structural parameter of the WKSF to the absorption of impact energy is also investigated. For this purpose, WKSF with different structures were produced: two different thicknesses, small and large mesh sizes, and meshes either facing each other or not. Six types of composite samples with different structural parameters were then fabricated. Physical properties such as weight per unit area and fiber volume fraction were measured for three samples of each type of composite. A low-velocity impact with an initial energy of 5 J was carried out on three samples of each type. The output of the low-velocity impact test is an acceleration-time (A-T) curve containing many deviating points; to obtain usable results, these points were removed using the FILTFILT function of MATLAB R2018a. Using Newton's laws, a force-displacement (F-D) curve was then derived from the A-T curve; the amount of energy absorbed equals the area under the F-D curve (a sketch of this processing pipeline is given below). The maximum energy absorption was found to be 2.858 J, for the samples reinforced with fabric with large meshes, high thickness, and meshes not facing each other. An index called energy absorption efficiency was defined as the absorbed energy of a composite divided by its fiber volume fraction. Using this index, the best EAE among the samples is 21.6, which occurs in the sample with large meshes, high thickness, and meshes facing each other. The EAE of this sample is also 15.6% better than the average EAE of the other composite samples. Generally, the energy absorption increased on average by 21.2% with increasing thickness, by 9.5% with increasing mesh size from small to large, and by 47.3% by changing the position of the meshes from facing to non-facing.
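
The following is a minimal Python sketch of the pipeline just described, using SciPy's filtfilt as the zero-phase-filtering analogue of MATLAB's FILTFILT; the filter order, cutoff, and sample values are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def absorbed_energy(a, m, fs, cutoff=1000.0):
    """Absorbed energy from an impact acceleration-time record.
    a: acceleration [m/s^2], m: impactor mass [kg], fs: sampling rate [Hz].
    Zero-phase low-pass filtering removes the deviating points, then
    F = m*a (Newton's second law) and displacement follows from double
    integration of acceleration (initial velocity omitted for brevity)."""
    b, a_coef = butter(4, cutoff / (fs / 2))     # 4th-order low-pass
    a_f = filtfilt(b, a_coef, a)                 # zero-phase filtering
    F = m * a_f                                  # force-time
    v = np.cumsum(a_f) / fs                      # velocity by integration
    d = np.cumsum(v) / fs                        # displacement-time
    return np.trapz(F, d)                        # area under the F-D curve

# EAE index = absorbed energy / fiber volume fraction (values illustrative)
E_abs, V_f = 2.6, 0.13
print("EAE:", E_abs / V_f)
```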

Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric

Procedia PDF Downloads 171
283 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition

Authors: Tamir Michal

Abstract:

The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the petitioner's entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouin to establish permanent urban settlements, an interest which justified giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouin to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy protective structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, arguendo, no selective discrimination was involved in the State's decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the theory of recognition. The article analyses the issue with the tools of recognition theory, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude of others. Consequently, even if the Court's decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouin citizens with equal rights in Israeli society. As the Court held, 'Beyond their protective function, the Migunit [protective structures] may make a moral and psychological contribution that should not be undervalued'. This is a contribution that the Bedouin did not receive in the Abu Afash verdict. The basic thesis is that the Court's verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find the channel for cognitive dignity, thereby advancing the Bedouins' ability to perceive themselves as equal human beings in Israeli society.

Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development

Procedia PDF Downloads 353
282 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective technique. It has a high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the method of total internal reflection allows one to study biological fluids at the level of single molecules, which also makes it possible to increase the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids. To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light-scattering signal was registered by a PIN diode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and fast Fourier transforms, in the regime of Brownian motion and under the action of the field, were calculated to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method for the examination of liquid solutions, including biological fluids.
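
As an illustration of the signal-processing step named above (autocorrelation functions and FFT of the photodetector output), here is a minimal sketch using the Wiener-Khinchin relation; the function names and the one-sided spectrum convention are our own, not taken from the authors' setup.

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation of a photodetector trace via FFT
    (Wiener-Khinchin theorem), zero-padded to avoid circular wrap."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    X = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(np.abs(X) ** 2)[:n]
    return acf / acf[0]

def power_spectrum(x, fs):
    """One-sided power spectrum; the Doppler peak position relates to the
    drift velocity of the scatterers in the applied field."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, np.abs(np.fft.rfft(x)) ** 2
```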

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 216
281 Monitoring Future Climate Changes Pattern over Major Cities in Ghana Using Coupled Model Intercomparison Project Phase 5, Support Vector Machine, and Random Forest Modeling

Authors: Stephen Dankwa, Zheng Wenfeng, Xiaolu Li

Abstract:

Climate change has recently been gaining the attention of many countries across the world. Climate change, also known as global warming, which refers to the increase in average surface temperature, has been a concern of the Environmental Protection Agency of Ghana. Ghana has recently become vulnerable to the effects of climate change as a result of the dependence of the majority of the population on agriculture. The clearing of trees to grow crops and the burning of charcoal in the country have been contributing factors to the present rise in temperature, as a result of the release of carbon dioxide and greenhouse gases into the air. Recently, petroleum stations across the cities have caught fire due to these climate changes, which has left Ghana poorly positioned to withstand such climate events. The significance of this research paper is therefore to project what the rise in average surface temperature will look like at the end of the mid-21st century if agriculture and deforestation are allowed to continue for some time in the country. This study uses the Coupled Model Intercomparison Project phase 5 (CMIP5) experiment RCP 8.5 model output data to monitor future climate changes from 2041-2050, at the end of the mid-21st century, over the ten (10) major cities (Accra, Bolgatanga, Cape Coast, Koforidua, Kumasi, Sekondi-Takoradi, Sunyani, Ho, Tamale, Wa) in Ghana. In the models, Support Vector Machine and Random Forest, the cities were modeled as a function of heat wave metrics (minimum temperature, maximum temperature, mean temperature, heat wave duration, and number of heat waves), which provided more than 50% accuracy in predicting and monitoring the pattern of surface air temperature (a sketch of this modeling setup is given below). The findings were that the near-surface air temperature will rise by 1°C-2°C (degrees Celsius) over the coastal cities (Accra, Cape Coast, Sekondi-Takoradi). The temperature over Kumasi, Ho, and Sunyani will rise by 1°C by the end of 2050. In Koforidua, it will rise by 1°C-2°C. The temperature will rise in Bolgatanga, Tamale, and Wa by 0.5°C by 2050. This indicates how the coastal and southern parts of the country are becoming hotter relative to the north, even though the northern part remains the hottest. During heat waves from 2041-2050, Bolgatanga, Tamale, and Wa will experience the highest mean daily air temperatures, between 34°C-36°C. Kumasi, Koforidua, and Sunyani will experience about 34°C. The coastal cities (Accra, Cape Coast, Sekondi-Takoradi) will experience below 32°C. Even though the coastal cities will experience the lowest mean temperatures, they will have the highest number of heat waves, about 62. The majority of the heat waves will last between 2 and 10 days, with a maximum of 30 days. The surface temperature will continue to rise through the end of the mid-21st century (2041-2050) over the major cities in Ghana, and this should be brought to the attention of the Environmental Protection Agency of Ghana in order to mitigate the problem.
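
For concreteness, the sketch below shows a typical way to fit the two models named above on heat wave metrics with scikit-learn; the data here is randomly generated as a stand-in for the CMIP5 RCP 8.5 output, so only the workflow, not the accuracy, is meaningful.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Placeholder data: rows = city-year samples, columns = heat wave metrics
# (min temp, max temp, mean temp, heat wave duration, number of heat waves)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # stand-in for CMIP5 RCP8.5 output
y = 30 + X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVR(kernel="rbf").fit(X_tr, y_tr)
print(rf.score(X_te, y_te), svm.score(X_te, y_te))   # R^2 on held-out data
```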

Keywords: climate changes, CMIP5, Ghana, heat waves, random forest, SVM

Procedia PDF Downloads 201
280 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs

Authors: John Farr Rothrock

Abstract:

Although migraine has traditionally been assumed to convey a relative decrease in libido, sexual activity, and sexual performance, recent data have suggested that the female migraine population is far from homogeneous in this regard. We sought to determine the levels of libido, sexual activity, and sexual performance in the female migraine patient population, both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients, ages 25-55, initially presenting to a university-based headache clinic and having a >1-year history of migraine were asked to complete anonymously a survey assessing their sexual histories, generally and as they related to their headache disorder, and the 19-item Female Sexual Function Index (FSFI). To serve as two separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population, matched for age, marital status, educational background, and socioeconomic status, completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion for 'sexually active' (i.e., heterosexual intercourse at a frequency of > once per month in each of the preceding 6 months). In all groups, younger age (p<.005), higher educational level attained (p<.05), and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92). Patients with low-frequency episodic migraine (LFEM) and mid-frequency episodic migraine (MFEM) reported a higher FSFI score, higher monthly frequency of intercourse, higher likelihood of intercourse resulting in orgasm, and higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92=9%), a higher proportion of patients in the LFEM (12/49=25%), MFEM (14/67=21%), and high-frequency episodic migraine (HFEM: 6/14=43%) subgroups reported utilizing intercourse - and orgasm specifically - as a means of potentially terminating a migraine attack. Between the clinic and non-clinic groups, there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse, and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM, and HFEM appear to utilize intercourse/orgasm as a means of potentially terminating an acute migraine attack.

Keywords: migraine, female, libido, sexual activity, phenotype

Procedia PDF Downloads 77
279 Physicochemical-Mechanical, Thermal and Rheological Properties Analysis of Pili Tree (Canarium Ovatum) Resin as Aircraft Integral Fuel Tank Sealant

Authors: Mark Kennedy, E. Bantugon, Noruane A. Daileg

Abstract:

Leaks arising from aircraft fuel tanks are a protracted problem for aircraft manufacturers, operators, and maintenance crews. They principally arise from stress, structural defects, or degraded sealants as the aircraft ages. A leak can be ignited by different sources, which can result in catastrophic flight consequences, and represents a major drain on both time and budget. In order to mitigate and eliminate this kind of problem, the researcher produced an experimental sealant whose base material is a natural tree resin, Pili tree resin. Aside from producing an experimental sealant, the main objective of this research is to analyze its physical, chemical, mechanical, thermal, and rheological properties, which are beneficial and effective for specific aircraft parts, particularly the integral fuel tank. The experimental method of research was utilized in this study since it is a product invention. This study comprises two parts: the Optimization Process and the Characterization Process. In the Optimization Process, the experimental sealant was subjected to the Flammability Test, an important test and consideration according to 14 Code of Federal Regulations Part 25, Appendix N - Fuel Tank Flammability Exposure and Reliability Analysis, in order to arrive at the most suitable formulation. In the subsequent Characterization Process, the formulated experimental sealant underwent thirty-eight (38) different standard tests, including the Organoleptic Test, Instrumental Color Measurement Test, Smoothness of Appearance Test, Miscibility Test, Boiling Point Test, Flash Point Test, Curing Time, Adhesive Test, Toxicity Test, Shore A Hardness Test, Compressive Strength, Shear Strength, Static Bending Strength, Tensile Strength, Peel Strength Test, Knife Test, Adhesion by Tape Test, Leakage Test, Drip Test, Thermogravimetry-Differential Thermal Analysis (TG-DTA), Differential Scanning Calorimetry, Calorific Value, Viscosity Test, Creep Test, and Anti-Sag Resistance Test, to determine and analyze the five (5) material properties of the sealant. The numerical values from the mentioned tests were determined through product application, testing, and calculation. These values were then used to calculate the efficiency of the experimental sealant, and this efficiency served as the means of comparison between the experimental and commercial sealants. Based on the results of the different standard tests conducted, the experimental sealant exceeded all the data results of the commercial sealant. This result shows that the physicochemical-mechanical, thermal, and rheological properties of the experimental sealant are far more effective as an aircraft integral fuel tank sealant alternative in comparison to the commercial sealant. Therefore, the Pili tree possesses a new role and function: a source of ingredients in sealant production.

Keywords: Aircraft Integral Fuel Tank, Physicochemical-mechanical, Pili Tree Resin, Properties, Rheological, Sealant, Thermal

Procedia PDF Downloads 298
278 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected averages as possible. Deviations from the expected averages are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partition problem, and exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The current subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in that graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and a corresponding domination condition is defined (a simplified sketch is given below). The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows relaxing some soft rules, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. Here, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has good performance.
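
To make the non-additivity concrete, the sketch below shows label components and a deliberately conservative dominance test that remains valid under a quadratic end-of-path penalty (two labels with identical resource totals incur the same penalty on any common extension). This is our own simplified stand-in; the paper defines a sharper domination condition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    node: int
    cost: float        # accumulated reduced cost (additive part only)
    fly_hours: float   # resource 1
    overnights: int    # resource 2

def quadratic_penalty(label, avg_fly, avg_on, w1=1.0, w2=1.0):
    """Non-additive part, applied when a roster is complete: quadratic
    deviation of the crew member's totals from the expected averages."""
    return (w1 * (label.fly_hours - avg_fly) ** 2
            + w2 * (label.overnights - avg_on) ** 2)

def dominates(l1, l2):
    """Conservative dominance test usable with the quadratic penalty:
    l1 dominates l2 only if it is no worse in additive cost AND has the
    same resource totals, so every extension of l2 is matched exactly."""
    return (l1.node == l2.node
            and l1.cost <= l2.cost
            and l1.fly_hours == l2.fly_hours
            and l1.overnights == l2.overnights)
```

Requiring equal resource totals suffices for correctness because the quadratic penalty depends only on those totals; a sharper condition, such as the one defined in the paper, prunes more labels and is what makes the approach practical at scale.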

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 147
277 Patterns and Predictors of Intended Service Use among Frail Older Adults in Urban China

Authors: Yuanyuan Fu

Abstract:

Background and Purpose: Along with social and economic change, the traditional home-care function for old people has gradually weakened in contemporary China. Acknowledging this situation, and in order to better meet old people's needs for formal services and improve the quality of later life, this study seeks to identify patterns of intended service use among frail old people living in the community and to examine the determinants that explain heterogeneous variations in their intended service use patterns. Additionally, this study tests the relationship between cultural values and intended service use patterns and the mediating role of enabling factors between the two. Methods: Participants were recruited from Haidian District, Beijing, China in 2015. A multi-stage sampling method was adopted to select sub-districts, communities, and people aged 70 years or older. After screening, 577 old people with limitations in daily life were successfully interviewed. After data cleaning, 550 samples were included in the analysis. This study establishes a conceptual framework based on the Andersen model (including predisposing factors, enabling factors, and need factors) and further develops it by adding cultural value factors (attitudes towards filial piety and attitudes towards social face). Using latent class analysis (LCA), this study classifies the overall patterns of old people's formal service utilization. Fourteen types of formal services were taken into account, including housework, voluntary support, transportation, home-delivered meals, home-delivery medical care, elderly canteens, and day-care/respite care. Structural equation modeling (SEM) was used to examine the direct effect of cultural values on service use patterns and the mediating effect of the enabling factors. Results: The LCA identified a hierarchical structure of service use patterns: multiple intended service use (N=69, 13%), selective intended service use (N=129, 23%), and light intended service use (N=352, 64%). Through SEM, after controlling for predisposing and need factors, the results showed a significant direct effect of cultural values on old people's intended service use patterns. Enabling factors had a partial mediating effect on the relationship between cultural values and the patterns. Conclusions and Implications: Differentiation of formal services may be important for meeting frail old people's service needs and for distributing program resources by identifying target populations for intervention, informing specific interventions to better support frail old people. Additionally, cultural values had a unique direct effect on the intended service use patterns of frail old people in China, enriching our theoretical understanding of the sources of cultural values and their impacts. The findings also highlight the mediating effects of enabling factors on the relationship between cultural value factors and intended service use patterns. This study suggests that researchers and service providers should pay more attention to the important role of cultural value factors in shaping intended service use patterns and be more sensitive to the mediating effect of enabling factors when discussing the relationship between cultural values and the patterns.

Keywords: frail old people, intended service use pattern, culture value, enabling factors, contemporary China, latent class analysis

Procedia PDF Downloads 226
276 Sugar-Induced Stabilization Effect of Protein Structure

Authors: Mitsuhiro Hirai, Satoshi Ajito, Nobutaka Shimizu, Noriyuki Igarashi, Hiroki Iwase, Shinichi Takata

Abstract:

Sugars and polyols are known to be bioprotectants, preventing phenomena such as protein denaturation and enzyme deactivation, and are widely used as nontoxic additives in various industrial and medical products. The mechanism of their protective action has been explained by specific binding between biological components and additives, by changes in solvent viscosity, and by changes in surface tension and free energy upon transfer of those components into additive solutions. On the other hand, some organisms with tolerance to extreme environments produce stress proteins and/or accumulate sugars in their cells, a phenomenon called cryptobiosis. In particular, trehalose has been drawing attention in relation to cryptobiosis under external stresses such as high or low temperature, drying, osmotic pressure, and so on. The cryptobiotic function of trehalose has been explained by the restriction of intra- and/or inter-molecular movement through vitrification, or by the replacement of water molecules by trehalose. Previous results suggest that the structure of, and interaction between, sugar and water is a key determinant for understanding cryptobiosis. Recently, we have shown direct evidence that protein hydration (solvation) and structural stability against chemical and thermal denaturation significantly depend on the sugar species and on glycerol. Sugar and glycerol molecules tend to be preferentially or weakly excluded from the protein surface and preserve the native protein hydration shell. Due to this protective action on the protein hydration shell, the protein structure is stabilized against chemical (guanidinium chloride) and thermal denaturation. The protective action depends on the sugar species. To understand the above trend and the differences in detail, it is essential to clarify the characteristics of solutions containing those additives. In this study, using a wide-angle X-ray scattering technique covering a wide spatial region (~3-120 Å), we have clarified the structures of sugar solutions with concentrations from 5% w/w to 65% w/w. The sugars measured in the present study were monosaccharides (glucose, fructose, mannose) and disaccharides (sucrose, trehalose, maltose). Owing to scattering data with a wide spatial resolution, we succeeded in obtaining information on the internal structure of individual sugar molecules and on the correlations between them. Every sugar gradually shortened the average inter-molecular distance as the concentration increased. The inter-molecular interaction between sugar molecules was essentially exclusive for every sugar, which appeared as the presence of a repulsive correlation hole. This trend was seen more weakly for trehalose than for the other sugars. The inter-molecular distance and the spread of the individual molecules clearly depended on the sugar species. We will discuss the relation between the characteristics of the sugar solutions and their protective action on biological materials.

Keywords: hydration, protein, sugar, X-ray scattering

Procedia PDF Downloads 156
275 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging Findings or Procedure Details: AI aids in chest radiology as follows. It detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion. It differentiates tension pneumothorax from nontension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis, and pneumoperitoneum. It automatically localizes vertebral segments, labels ribs, and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and the progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD). It can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature and quantifies air-trapping. It offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation. It provides percentages of tissue within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is expected that the continuing progress of computerized systems and improvements in software algorithms will render AI the radiologist's second pair of hands.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 72
274 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home

Authors: Laura M. F. Bertens

Abstract:

The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show; each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to get a deeper understanding of the ways in which the home functions and feels like a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc. are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim have used the history and memories of the house as a starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family 'resets' the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and the ways in which art can represent these.

Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace

Procedia PDF Downloads 159
273 Elastoplastic Modified Stillinger-Weber-Potential-Based Discretized Virtual Internal Bond and Its Application to Dynamic Fracture Propagation

Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako

Abstract:

The failure of material usually involves elastoplastic deformation and fracturing. Continuum mechanics can effectively deal with plastic deformation by using a yield function and the flow rule. At the same time, it has some limitations in dealing with the fracture problem, since it is a theory based on the continuous-field hypothesis. The lattice model can simulate the fracture problem very well, but it is inadequate for dealing with plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers material to comprise bond cells. Each bond cell may have any geometry with a finite number of bonds. The strain energy of a bond cell can be characterized by a two-body or a multi-body potential. The two-body potential leads to a fixed Poisson ratio, while the multi-body potential can overcome this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts. One part is the two-body potential that describes the interatomic interactions between particles. The other is the three-body potential that represents the bond angle interactions between particles. Because the SW interaction can represent both the bond stretch and the bond angle contribution, the SW-potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, plasticity is considered in the two-body part of the SW potential. Before the bond reaches the yielding point, it is elastic; once the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value, and irreversible deformation occurs upon unloading. When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macro fracture energy. By this means, the fracture energy is conserved, so that the cell-size sensitivity problem is relieved to a great extent. In addition, plasticity and fracture are unified at the bond level. To make the DVIB able to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle: the bond angle can bear a moment as long as the bond angle increment remains smaller than a critical value. By this method, the SW-DVIB can simulate the plastic deformation and fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and the plastic deformation of a tunnel. It has been shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
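
The bond-level constitutive law described above can be made concrete in a few lines. The following is a minimal Python sketch of the bilinear elastoplastic response of the two-body part (elastic up to the yielding point, softened stiffness beyond it, zero force past the failure bond length); all stiffness values and thresholds are illustrative placeholders, not calibrated parameters from this work.

```python
import numpy as np

def bond_force(stretch, k_elastic=1.0, k_soft=0.3,
               yield_stretch=0.01, fail_stretch=0.05):
    """Piecewise-linear bond force for the two-body part of a bond cell.

    Elastic with stiffness k_elastic up to yield_stretch; softened to
    k_soft beyond it (plasticity); zero force once the stretch reaches
    the failure value, which in the paper is derived from cell size and
    macro fracture energy. All values here are illustrative.
    """
    s = abs(stretch)
    if s >= fail_stretch:
        return 0.0                      # bond has failed
    if s <= yield_stretch:
        return k_elastic * stretch      # elastic branch
    # plastic branch: continue from the yield force with reduced stiffness
    f_yield = k_elastic * yield_stretch
    return np.sign(stretch) * (f_yield + k_soft * (s - yield_stretch))

for s in (0.005, 0.02, 0.06):
    print(f"stretch={s:.3f}  force={bond_force(s):.5f}")
```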

Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified Stillinger-Weber potential

Procedia PDF Downloads 100
272 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, due to the relatively simple recording of the electrocardiogram (ECG) signal, this signal is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers seeking the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited recording time of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals gave better performance. Today, research aims at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions and has driven research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, and that some of this information is hidden from the viewpoint of physicians because of limited time and accuracy, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
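
To illustrate the kind of pipeline described (R-R intervals from detected R peaks, linear and nonlinear HRV features, a trained classifier), here is a minimal Python sketch on synthetic data; it assumes R-peak times are already available (e.g., from a Pan-Tompkins implementation), and the chosen features (SDNN, RMSSD, Poincaré SD1/SD2) and SVM settings are illustrative stand-ins for those used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hrv_features(r_peak_times_s):
    """Linear and nonlinear HRV features from R-peak times (seconds)."""
    rr = np.diff(r_peak_times_s)                 # R-R intervals
    sdnn = rr.std()                              # time-domain (linear)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    # Poincare (return-map) descriptors, a common nonlinear HRV measure
    sd1 = np.sqrt(0.5) * np.diff(rr).std()
    sd2 = np.sqrt(max(2 * rr.std() ** 2 - sd1 ** 2, 0.0))
    return [rr.mean(), sdnn, rmssd, sd1, sd2]

# Synthetic stand-in data: 200 recordings, labels 0 = normal, 1 = abnormal
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        jitter = 0.03 if label == 0 else 0.12    # abnormal = more variability
        rr = rng.normal(0.8, jitter, size=120).clip(0.3, 2.0)
        X.append(hrv_features(np.cumsum(rr)))
        y.append(label)
X, y = np.array(X), np.array(y)

svm = SVC(kernel="rbf", gamma="scale")
print("CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```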

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 264
271 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences

Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur

Abstract:

In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions to the objective value of objects on the part of sellers. Under loss attention, this characteristic should only emerge under certain boundary conditions. In Study 1, a published dataset was reanalyzed, in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under conditions of time constraints. A published dataset was reanalyzed, in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time. Post-hoc tests revealed main effects of perspective in the 5s and 10s deliberation-time conditions, but not in the 15s condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times, to test whether the effect decays with repeated presentation. The results showed that the difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
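
The EV-sensitivity measure used in these studies is straightforward to compute; the sketch below does so with scipy.stats.spearmanr. The lotteries and the price responses are invented for illustration and are not data from the reanalyzed studies.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical lotteries: (payoff, probability) -> expected value
lotteries = [(10, 0.9), (40, 0.5), (100, 0.25), (200, 0.15)]
expected_values = np.array([v * p for v, p in lotteries])

def ev_sensitivity(prices):
    """Spearman rank correlation between one participant's pricing
    decisions and the lotteries' expected values."""
    rho, _ = spearmanr(prices, expected_values)
    return rho

seller_prices = [9, 22, 27, 24]   # illustrative responses
buyer_prices = [5, 18, 12, 16]
print("seller EV sensitivity:", ev_sensitivity(seller_prices))
print("buyer EV sensitivity: ", ev_sensitivity(buyer_prices))
```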

Keywords: decision making, endowment effect, pricing, loss aversion, loss attention

Procedia PDF Downloads 347
270 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study

Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi

Abstract:

Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., a body's own endocannabinoid-like compound playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective in OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, with the left knee injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days, at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14, and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index. On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF), and matrix metalloproteinases 1, 3, and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function as early as day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.

Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine

Procedia PDF Downloads 115
269 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication

Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons

Abstract:

The recent increase in the application of Additive Manufacturing (AM) of products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in the 3D printing area. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and built AM hybrid system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally-sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymer (CoPA), and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop-watching system. Spectroscopic analysis of these 3D-printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system, to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability, and degradation behavior, in order to find the optimum annealing conditions post printing. Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending, and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA, the resistance values of the tracks remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our developed AM printer has the ability to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional saleable electronic products from a small-format modular platform, making 3D printing better, faster, and stronger.

Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture

Procedia PDF Downloads 168
268 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in capturing non-linear relationships and dealing with large and uncertain amounts of data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which has also been found to be an appropriate tool for predicting the different pavement performance indices as functions of different factors. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by the National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are the damages related to alligator cracking, swelling, rutting, and potholes.
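
A model of the kind described (33 percentage-area inputs from 11 distress types at three severity levels, one PCI output) can be sketched in a few lines with scikit-learn's MLPRegressor, as below; the architecture, hyperparameters, and the deduct-style synthetic target are illustrative assumptions, not the authors' trained network or their data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in data: 33 features = 11 distress types x 3 severities,
# each a percentage of total pavement area; target PCI in [0, 100].
n_samples = 5000
X = rng.uniform(0, 20, size=(n_samples, 33))
severity_weights = np.tile([1.0, 2.0, 4.0], 11)   # higher severity hurts more
deduct = X @ severity_weights
y = np.clip(100 - 100 * deduct / deduct.max(), 0, 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), activation="relu",
                     max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```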

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 138
267 Oil-Price Volatility and Economic Prosperity in Nigeria: Empirical Evidence

Authors: Yohanna Panshak

Abstract:

The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of many discourses among researchers and policy makers and has generated a lot of controversy over the years. This has generated a series of research efforts towards understanding the remote causes of this phenomenon: its nature, its determinants, and how it can be targeted and mitigated. While some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Research works of scholars such as Akpan (2009), Aliyu (2009), and Olomola (2006) argue that oil volatility can determine economic growth, or has the potential of doing so. On the contrary, Darby (1982), Cerralo (2005), and others share the opinion that it can slow down growth. The former argument rests on the understanding that for net oil-exporting economies, a price upbeat directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries, which experience price rises, increased input costs, reduced non-oil demand, low investment, falls in tax revenues, and ultimately an increase in the budget deficit, which further reduces welfare. Therefore, assessing the precise impact of oil price volatility on any economy is a function of whether it is an oil-exporting or an oil-importing nation. Research on oil price volatility and its effect on the growth of the Nigerian economy is evolving and marching towards resolving Nigeria's macroeconomic instability, as oil revenue still remains the mainstay and driver of socio-economic engineering. Recently, a major importer of Nigeria's oil, the United States, made a historic breakthrough towards a more efficient energy source for its economy, with the capacity to serve a significant part of the world. This undoubtedly suggests a threat to the exchange earnings of the country, and the need to understand fluctuations in its major export commodity is critical. This paper leans on the renaissance growth theory, with greater focus on the theoretical work of Lee (1998), a leading proponent of this school, who draws a clear distinction between oil price changes and oil price volatility. Based on the above background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4. A Vector Auto Regression (VAR) econometric approach shall be used. The stationarity properties of the series shall be tested using the Augmented Dickey-Fuller and Phillips-Perron unit root tests. Relevant diagnostic tests for heteroscedasticity, serial correlation, and normality shall also be carried out. Policy recommendations shall be offered based on the empirical findings, which it is believed will assist policy makers not only in Nigeria but worldwide.
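
As a sketch of the stated workflow (unit root testing followed by VAR estimation), the following Python snippet uses statsmodels on synthetic placeholder series standing in for oil-price volatility and government expenditure growth; variable names, lag order, and data are illustrative only, not the study's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 116  # quarterly observations, 1986:1 to 2014:4

# Placeholder series, already stationary by construction here
data = pd.DataFrame({
    "oil_volatility": rng.normal(0, 1, n),
    "gov_expenditure_growth": rng.normal(0.5, 1, n),
})

# Augmented Dickey-Fuller test on each series (H0: unit root)
for col in data:
    stat, pvalue, *_ = adfuller(data[col])
    print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")

# Fit a VAR, with the lag order chosen by an information criterion
model = VAR(data)
results = model.fit(maxlags=4, ic="aic")
print(results.summary())
```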

Keywords: oil-price, volatility, prosperity, budget, expenditure

Procedia PDF Downloads 273
266 Cockpit Integration and Piloted Assessment of an Upset Detection and Recovery System

Authors: Hafid Smaili, Wilfred Rouwhorst, Paul Frost

Abstract:

The trend of recent accident and incident cases worldwide shows that state-of-the-art automation and operations, for current and future demanding operational environments, do not provide the desired level of operational safety under crew peak-workload conditions, specifically in complex situations such as loss-of-control in-flight (LOC-I). Today, the short-term focus is on preparing crews to recognise and handle LOC-I situations through upset recovery training. This paper describes the cockpit integration aspects and piloted assessment of both a manually assisted and an automatic upset detection and recovery system that has been developed and demonstrated within the European Advanced Cockpit for Reduction Of StreSs and workload (ACROSS) programme. The proposed system is a function that continuously monitors the aircraft, intervenes when it enters an upset, and either provides manually pilot-assisted guidance or takes over full control of the aircraft to recover from the upset. In order to mitigate the high physical and psychological impact of aircraft upset events, the system provides new cockpit functionalities to support the pilot in recovering from any upset, both manually assisted and automatically. A piloted simulator assessment was conducted in October-November 2015 with ten pilots in a representative large civil fly-by-wire transport aircraft, evaluating the tested upset detection and recovery system configurations in terms of their ability to reduce pilot workload, increase situational awareness, and support safe interaction with the manually assisted or automated modes. The piloted simulator evaluation showed that the functionalities of the system are able to support pilots during an upset. The experiment also showed that pilots are willing to rely on the guidance provided by the system during an upset; for this, it is important for pilots to see and understand what the aircraft is doing and trying to do, especially in automatic modes. Comparing the manually assisted and the automatic recovery modes, the pilots' opinion was that an automatic recovery reduces the workload, allowing them to perform a proper screening of the primary flight display. The results further show that the manually assisted recoveries, with recovery guidance cues on the cockpit primary flight display, reduced workload for severe upsets compared to today's situation. The level of situation awareness was improved for automatic upset recoveries in which the pilot could monitor what the system was trying to accomplish, compared to automatic recovery modes without any guidance. An improvement in situation awareness was also noticeable with the manually assisted upset recovery functionalities as compared to the current non-assisted recovery procedures. This study shows that automatic upset detection and recovery functionalities are likely to positively impact operational safety by means of reduced workload, improved situation awareness, and crew stress reduction. It is thus believed that future developments for upset recovery guidance and loss-of-control prevention should focus on automatic recovery solutions.

Keywords: aircraft accidents, automatic flight control, loss-of-control, upset recovery

Procedia PDF Downloads 210
265 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?

Authors: Karim Hrrath, Khaled Essalah, Christophe Morell, Henry Chermette, Salima Boughdiri

Abstract:

Among the carbon-carbon bond formation methods, allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used one. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of the pressing demand for an efficient and straightforward method of synthesizing the desired product, a thorough analysis of the various aspects of the reaction process is an important task. The product afforded by the reaction of active methylene compounds with BH alcohols depends largely on the experimental conditions, notably on the catalyst properties. All experiments report that catalysis is needed for this reaction type because of the poor ability of the alcohol hydroxyl group to act as a leaving group. Among the catalysts, several transition-metal-based systems have been used, such as palladium in the presence of acid or base, and these have been considered reliable methods. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X = Cl, Br, I, (OTf)3), InCl3, Yb(OTf)3, FeCl3, p-TsOH, and H-montmorillonite have been employed to activate the C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report appeared recently of a smooth process in which 4-dimethylaminopyridine (DMAP) catalyzes the allylation reaction of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (denoted P1), corresponding to direct allylation instead of conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing CO group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end-product P1 in the catalytic reaction of a cyclic BH alcohol with active methylene compounds. For that purpose, we have carried out computations on a set of active methylene compounds, varying R1 and R2, toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid-base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index, and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: the first part gives some computational details and introduces the reactivity indices used; Section 3 is dedicated to the discussion of the prediction of the selectivity and regioselectivity; the paper ends with some concluding remarks. In this work, we have shown, through the DFT method at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with the cyclic BH alcohol is governed by orbital control. Hence the end-product, denoted P1, is generated by direct allylation.
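
The global conceptual-DFT indices named above can be obtained from the total energies of the N, N-1, and N+1 electron systems by finite differences. The sketch below does this with PySCF at the UHF/6-31G level on water, purely as a placeholder; the actual study used B3LYP/6-311++G(d,p) on the reactants, and reliable anion energies would in practice require a basis with diffuse functions.

```python
from pyscf import gto, scf

def total_energy(charge, spin):
    """UHF total energy of a placeholder molecule (water) with the given
    charge and number of unpaired electrons."""
    mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
                basis="6-31g", charge=charge, spin=spin, verbose=0)
    return scf.UHF(mol).run().e_tot

E_N = total_energy(0, 0)        # neutral, closed shell
E_cation = total_energy(1, 1)   # N-1 electrons, one unpaired spin
E_anion = total_energy(-1, 1)   # N+1 electrons

I = E_cation - E_N              # vertical ionization energy
A = E_N - E_anion               # vertical electron affinity

mu = -(I + A) / 2               # chemical potential
eta = I - A                     # hardness (finite-difference definition)
omega = mu**2 / (2 * eta)       # electrophilicity index

print(f"I={I:.4f} Ha  A={A:.4f} Ha  mu={mu:.4f}  eta={eta:.4f}  omega={omega:.4f}")
```

Atom-resolved Fukui functions follow the same finite-difference logic, applied to atomic populations of the N, N-1, and N+1 electron densities rather than to total energies.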

Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state

Procedia PDF Downloads 307
264 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as the pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low-powered actuator. This drives the blade tip's oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low-powered hot-spot cooling in an electronic device, but it is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies of multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power-demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data-comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated from a single blade, by 17.7% at P = 8A. Flow-rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch yields 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to those generated from a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, the counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area in a channel-flow application. Quantitative values are found to deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.
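
Since the blade is driven at its first natural frequency, that frequency can be estimated from Euler-Bernoulli beam theory for a uniform cantilever, as in the minimal Python sketch below; the blade dimensions and material constants are illustrative guesses, not the parameters of the validated numerical model.

```python
import math

def cantilever_f1(E, rho, L, b, h):
    """First natural frequency (Hz) of a uniform rectangular cantilever,
    from Euler-Bernoulli beam theory:
        f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4))
    """
    lam1 = 1.8751              # first root of the cantilever frequency equation
    I = b * h**3 / 12.0        # second moment of area of the cross-section
    A = b * h                  # cross-sectional area
    return (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

# Illustrative polymer blade: 60 mm long, 12 mm wide, 0.2 mm thick
print(f"f1 = {cantilever_f1(E=4e9, rho=1390.0, L=0.06, b=0.012, h=2e-4):.1f} Hz")
```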

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 221
263 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system which would fit secondary railway lines, with a straightforward implementation and a low sensitivity to rail-wheel impedance variations. We call this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be internationally implemented, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the components used in Germany called counting axles ('compteur d'essieux' in French). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies assigned to the track sections forms a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails. This is the reason why we developed a very powerful algorithm to reject noise and jamming, to obtain an SNR compatible with the precision required for the track circuit and with the SIL 4 safety level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train space localization precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS system, the track circuit is naturally protected against radio-type jammers, and after the calibration operation the track circuit is autonomous; ii) a mathematical topology adapted to train space localization, following the train through linear time filtering of the received signal; track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already reaches Level 3 of the ERTMS system, and it will be much cheaper to implement and operate. Traffic regulation is based on variable-length track sections: as the traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
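
To make the orthogonal-frequency principle concrete, the Python sketch below assigns each track section a continuous tone at a distinct harmonic of a base frequency (such tones are mutually orthogonal over a window spanning whole periods) and reads section occupancy from the in-phase/quadrature correlation of the noisy received signal against each reference tone. All frequencies, the window length, the noise level, and the decision threshold are illustrative assumptions, not TRAC's actual parameters.

```python
import numpy as np

fs = 10_000.0                 # sample rate (Hz)
T = 0.5                       # observation window (s); whole periods of all tones
t = np.arange(0, T, 1 / fs)

# Each track section emits a continuous tone; harmonics of 20 Hz are
# mutually orthogonal over the 0.5 s window (illustrative values).
section_freqs = {"S1": 100.0, "S2": 120.0, "S3": 140.0}

# Received signal: section S2 active, buried in traction-current noise
rng = np.random.default_rng(7)
received = np.sin(2 * np.pi * 120.0 * t) + rng.normal(0, 2.0, t.size)

def tone_amplitude(sig, f):
    """Project the signal onto the section's reference tone, using both
    in-phase and quadrature components so an unknown phase does not matter."""
    c = np.mean(sig * np.cos(2 * np.pi * f * t))
    s = np.mean(sig * np.sin(2 * np.pi * f * t))
    return 2 * np.hypot(c, s)   # ~amplitude of that tone in the signal

for name, f in section_freqs.items():
    amp = tone_amplitude(received, f)
    print(f"{name} ({f:.0f} Hz): amplitude~{amp:.2f} ->",
          "detected" if amp > 0.5 else "clear")
```

The failure probability mentioned in the abstract then follows from the statistics of these correlation amplitudes under noise, which is what ties the separation of track sections to the SNR.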

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 333
262 Iron Oxide Reduction Using Solar Concentration and Carbon-Free Reducers

Authors: Bastien Sanglard, Simon Cayez, Guillaume Viau, Thomas Blon, Julian Carrey, Sébastien Lachaize

Abstract:

The need to develop clean production processes is a key challenge for any industry. The steel and iron industries are particularly concerned, since they emit 6.8% of global anthropogenic greenhouse gas emissions. One key step of the process is the high-temperature reduction of iron ore using coke, leading to large amounts of CO2 emissions. One route to decrease these impacts is to get rid of fossil fuels by changing both the heat source and the reducer. The present work aims at investigating experimentally the possibility of using concentrated solar energy and carbon-free reducing agents. Two sets of experiments were carried out. First, in situ X-ray diffraction on pure and industrial hematite powders was performed to study the phase evolution as a function of temperature during reduction under hydrogen and ammonia. Secondly, experiments were performed on industrial iron ore pellets, which were reduced by NH3 or H2 in a 'solar furnace' composed of a controllable 1600 W xenon lamp, to simulate and control concentrated solar irradiation, a glass reactor, and a diaphragm to control the light flux. Temperature and pressure were recorded during each experiment via thermocouples and pressure sensors. The percentage of iron oxide converted to iron (hereafter called the 'reduction ratio') was found through Rietveld refinement. The power of the light source and the reduction time were varied. Results obtained in the diffractometer reaction chamber show that iron begins to form at 300°C with pure Fe2O3 powder and at 400°C with industrial iron ore, when maintained at these temperatures for 60 minutes and 80 minutes, respectively. Magnetite and wuestite are detected in both powders during the reduction under hydrogen; under ammonia, iron nitride is also detected at temperatures between 400°C and 600°C. All the iron oxide was converted to iron after a reaction of 60 min at 500°C with the pure powder, whereas a conversion ratio of 96% was reached with the industrial powder after a reaction of 240 min at 600°C under hydrogen. Under ammonia, full conversion was also reached after 240 min of reduction at 600°C. For the experiments in the solar furnace with iron ore pellets, the lamp power and the shutter opening were varied. An 83.2% conversion ratio was obtained with a light power of 67 W/cm2 without turning over the pellets. Under the same conditions, turning over the pellets in the middle of the experiment made it possible to reach a conversion ratio of 86.4%. A reduction ratio of 95% was reached with an exposure of 16 min by turning over the pellets at half time with a flux of 169 W/cm2. Similar or slightly better results were obtained under an ammonia reducing atmosphere: under the same flux, the highest reduction yield of 97.3% was obtained under ammonia after 28 minutes of exposure. The chemical reaction itself, including the solar heat source, does not produce any greenhouse gases, so solar metallurgy represents a serious way to reduce the greenhouse gas emissions of the metallurgy industry. Nevertheless, the ecological impact of the reducers must be investigated, which will be done in future work.

Keywords: solar concentration, metallurgy, ammonia, hydrogen, sustainability

Procedia PDF Downloads 138