Search results for: Extended arithmetic precision.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2174

644 Location Uncertainty – A Probabilistic Solution for Automatic Train Control

Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland

Abstract:

New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to dynamically control the speed and hence performance. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed a balise or equivalent to correct the uncertainty element of the train location. Typically, a CBTC train is allowed to miss only one balise on the track, after which the Automatic Train Protection (ATP) system applies the emergency brake to halt the service. This is because the location uncertainty, which grows within the train control system, cannot tolerate missing more than one balise. Balises contribute a significant amount towards wayside maintenance, and studies have shown that balises on the track also form a constraint for future track layout changes and changes in speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurement at the highest precision. Similar techniques are already in use in aviation, satellite, submarine and other navigation systems. Developing a model for the speed control and the use of a Kalman filter is a key element in this research. This paper will summarize the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable removal of all trackside location-correction balises, leading to a large reduction in maintenance and more flexibility in future track design.
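As a rough illustration of the kind of filtering the abstract refers to, the sketch below implements a minimal one-dimensional Kalman filter that fuses noisy position and speed readings into a combined estimate; the constant-velocity state model, noise values and toy sensor data are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter: state x = [position, speed].
# Process/measurement noise values below are illustrative assumptions only.
def kalman_step(x, P, z, dt, q=0.5, r_pos=5.0, r_vel=0.2):
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])            # constant-velocity transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])   # process noise
    H = np.eye(2)                         # we measure position and speed directly
    R = np.diag([r_pos, r_vel])           # measurement noise

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q

    # Update with measurement z = [measured_position, measured_speed]
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: fuse noisy position/speed readings along a straight track.
rng = np.random.default_rng(0)
x, P = np.array([0.0, 20.0]), np.eye(2) * 10.0
for k in range(1, 11):
    true_pos, true_vel = 20.0 * k, 20.0
    z = np.array([true_pos + rng.normal(0, 5.0), true_vel + rng.normal(0, 0.2)])
    x, P = kalman_step(x, P, z, dt=1.0)
print("estimated position and speed:", x)
```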

Keywords: ERTMS, CBTC, ATP, ATO

Procedia PDF Downloads 406
643 Artificial Intelligence Impact on Strategic Stability

Authors: Darius Jakimavicius

Abstract:

Artificial intelligence is the subject of intense debate in the international arena, identified both as a technological breakthrough and as a component of the strategic stability effect. Both the kinetic and non-kinetic development of AI and its application in the national strategies of the great powers may trigger a change in the security situation. Artificial intelligence is generally faster, more capable and more efficient than humans, and there is a temptation to transfer decision-making and control responsibilities to artificial intelligence. Artificial intelligence, which, once activated, can select and act on targets without further intervention by a human operator, blurs the boundary between human or robot (machine) warfare, or perhaps human and robot together. Artificial intelligence acts as a force multiplier that speeds up decision-making and reaction times on the battlefield. The role of humans is increasingly moving away from direct decision-making and away from command and control processes involving the use of force. It is worth noting that the autonomy and precision of AI systems make the process of strategic stability more complex. Deterrence theory is currently in a phase of development in which deterrence is undergoing further strain and crisis due to the complexity of the evolving models enabled by artificial intelligence. Based on the concept of strategic stability and deterrence theory, it is appropriate to develop further research on the development and impact of AI in order to assess AI from both a scientific and technical perspective: to capture a new niche in the scientific literature and academic terminology, to clarify the conditions for deterrence, and to identify the potential uses, impacts and possibly quantities of AI. The research problem is the impact of artificial intelligence developed by great powers on strategic stability. This thesis seeks to assess the impact of AI on strategic stability and deterrence principles, with human exclusion from the decision-making and control loop as a key axis. The interaction between AI and human actions and interests can determine fundamental changes in great powers' defense and deterrence, and the development and application of AI-based great powers strategies can lead to a change in strategic stability.

Keywords: artificial intelligence, strategic stability, deterrence theory, decision-making loop

Procedia PDF Downloads 33
642 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced from the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it exhibits a reduced emission of toxic chemicals. In the case of e-cigarettes, lower heating powers can be considered as powers below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Due to its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a latest fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distribution of the aerosol jets is examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, application of PDA to e-cigarette aerosol measurement is rarely reported. In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical centerline streamwise mean velocity decay of the aerosol jet along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. Particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
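For readers unfamiliar with the size statistics quoted above, the following sketch shows one standard way to compute a count median diameter and geometric standard deviation from binned particle counts; the bin diameters and counts are made-up illustrative data, and the paper's own EGP distribution is not reproduced here.

```python
import numpy as np

def count_median_and_gsd(diameters_um, counts):
    """Count median diameter (interpolated from the cumulative count) and
    geometric standard deviation of a binned size distribution."""
    d = np.asarray(diameters_um, dtype=float)   # bin mid-point diameters (um)
    n = np.asarray(counts, dtype=float)         # particle counts per bin
    cum = np.cumsum(n) / n.sum()
    cmd = np.interp(0.5, cum, d)                # 50th percentile diameter
    ln_cmd = np.average(np.log(d), weights=n)   # count-weighted geometric mean
    gsd = np.exp(np.sqrt(np.average((np.log(d) - ln_cmd) ** 2, weights=n)))
    return cmd, gsd

# Illustrative bins only (not measured data from this study).
diam = [0.3, 0.45, 0.6, 0.75, 0.9, 1.1]
cnts = [120, 540, 980, 760, 310, 90]
cmd, gsd = count_median_and_gsd(diam, cnts)
print(f"CMD ~ {cmd:.2f} um, GSD ~ {gsd:.2f}")
```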

Keywords: E-cigarette aerosol, laser doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 35
641 Low-Voltage and Low-Power Bulk-Driven Continuous-Time Current-Mode Differentiator Filters

Authors: Ravi Kiran Jaladi, Ezz I. El-Masry

Abstract:

Emerging technologies such as ultra-wide band wireless access technology that operate at ultra-low power present several challenges due to their inherent design that limits the use of voltage-mode filters. Therefore, continuous-time current-mode (CTCM) filters have become very popular in recent times due to the fact that they have a wider dynamic range, improved linearity, and extended bandwidth compared to their voltage-mode counterparts. The goal of this research is to develop analog filters which are suitable for the current scaling CMOS technologies. The bulk-driven MOSFET is one of the most popular low-power design techniques for the existing challenges, while other techniques have obvious shortcomings. In this work, a CTCM gate-driven (GD) differentiator has been presented with a frequency range from DC to 100 MHz which operates at a very low supply voltage of 0.7 volts. A novel CTCM bulk-driven (BD) differentiator has been designed for the first time, which consumes several times less power than the GD differentiator. These GD and BD differentiators have been simulated using CADENCE with TSMC 65 nm technology for all the bilinear and biquadratic band-pass frequency responses. These basic building blocks can be used to implement higher-order filters. A 6th-order cascade CTCM Chebyshev band-pass filter has been designed using the GD and BD techniques. In conclusion, low-power GD and BD 6th-order Chebyshev stagger-tuned band-pass filters were simulated, and all the parameters obtained from the resulting realizations are analyzed and compared. Monte Carlo analysis is performed for both 6th-order filters, and the results of the sensitivity analysis are presented.
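While the paper's filters are transistor-level current-mode circuits, the target response can be sketched at the behavioral level; the snippet below uses SciPy to design a 6th-order Chebyshev type-I band-pass prototype as a cascade of second-order (biquad) sections, with the pass-band edges, ripple and sampling rate chosen purely for illustration rather than taken from the paper.

```python
import numpy as np
from scipy import signal

# 6th-order Chebyshev-I band-pass: order 3 doubles to 6 for btype='bandpass'.
# Pass band (10-40 MHz) and 1 dB ripple are illustrative choices, not the paper's.
fs = 400e6                       # sampling rate for the digital prototype (Hz)
sos = signal.cheby1(N=3, rp=1.0, Wn=[10e6, 40e6], btype="bandpass",
                    output="sos", fs=fs)   # cascaded biquad (SOS) form

print("number of biquad sections:", sos.shape[0])   # 3 sections -> 6th order

# Inspect the magnitude response of the cascade.
w, h = signal.sosfreqz(sos, worN=2048, fs=fs)
peak = np.max(20 * np.log10(np.abs(h) + 1e-12))
print(f"peak gain in pass band: {peak:.2f} dB")
```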

Keywords: bulk-driven (BD), continuous-time current-mode filters (CTCM), gate-driven (GD)

Procedia PDF Downloads 254
640 A Multi-Family Offline SPE LC-MS/MS Analytical Method for Anionic, Cationic and Non-ionic Surfactants in Surface Water

Authors: Laure Wiest, Barbara Giroud, Azziz Assoumani, Francois Lestremau, Emmanuelle Vulliet

Abstract:

Due to their production at high tonnages and their extensive use, surfactants are among the contaminants determined at the highest concentrations in wastewater. However, analytical methods and data regarding their occurrence in river water are scarce and concern only a few families, mainly anionic surfactants. The objective of this study was to develop an analytical method to extract and analyze a wide variety of surfactants in a minimum of steps, with a sensitivity compatible with the detection of ultra-traces in surface waters. Twenty-seven substances from 12 families of surfactants (anionic, cationic and non-ionic) were selected for method optimization. Different retention mechanisms for the extraction by solid phase extraction (SPE) were tested and compared in order to improve their detection by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). The best results were finally obtained with a C18-grafted silica LC column and a polymer cartridge with hydrophilic-lipophilic balance (HLB), and the method developed allows the extraction of the three types of surfactants with satisfactory recoveries. The final analytical method comprises only one extraction and two LC injections. It was validated and applied to the quantification of surfactants in 36 river samples. The method's limits of quantification (LQ), intra- and inter-day precision and accuracy were evaluated, and good performances were obtained for the 27 substances. As these compounds have many areas of application, contamination of instrument and method blanks was observed and considered in the determination of the LQ. Nevertheless, with LQ between 15 and 485 ng/L and accuracy of over 80%, this method is suitable for monitoring surfactants in surface waters. Application to French river samples revealed the presence of anionic, cationic and non-ionic surfactants with median concentrations ranging from 24 ng/L for octylphenol ethoxylates (OPEO) to 4.6 µg/L for linear alkylbenzene sulfonates (LAS). The analytical method developed in this work will therefore be useful for future monitoring of surfactants in waters. Moreover, this method, which shows good performance for anionic, non-ionic and cationic surfactants, may be easily adapted to other surfactants.

Keywords: anionic surfactant, cationic surfactant, LC-MS/MS, non-ionic surfactant, SPE, surface water

Procedia PDF Downloads 135
639 Understanding the Excited State Dynamics of a Phase Transformable Photo-Active Metal-Organic Framework MIP 177 through Time-Resolved Infrared Spectroscopy

Authors: Aneek Kuila, Yaron Paz

Abstract:

MIP 177 LT and HT are two phase-transformable metal-organic frameworks consisting of a Ti12O15 oxocluster and a tetracarboxylate ligand that exhibit robust chemical stability and improved photoactivity. The LT-to-HT transition only changes the dimensionality from 0D to 1D, without any change in the overall chemical structure. In terms of chemical stability and photoactivity, MIP 177 LT is found to perform better than MIP 177 HT. Step-scan Fourier transform absorption-difference time-resolved spectroscopy has been used to collect mid-IR time-resolved infrared spectra of the transient electronic excited states of the nano-porous metal-organic frameworks MIP 177 LT and HT with 2.5 ns time resolution. Analyzing the time-resolved vibrational data after 355 nm laser excitation reveals temporal changes of ν(O-Ti-O) of the Ti-O metal cluster and ν(-COO) of the ligand, indicating that these moieties are the ultimate acceptors of the excited charges, which are localized over those regions on the nanosecond timescale. A direct negative correlation between the differential absorbances (ΔAbsorbance) of these bands reveals a charge-transfer relation between the two moieties. A longer-lived transient signal, up to 180 ns for MIP 177 LT compared to 100 ns for MIP 177 HT, shows the extended lifetime of the reactive charges over the surface, which contributes to their effectiveness. An ultrafast change from bidentate to monodentate bridging in the -COO-Ti-O ligand-metal coordination environment was observed after the photoexcitation of MIP 177 LT, which persists for seconds after photoexcitation is halted. This phenomenon is unique to MIP 177 LT and is not observed with HT. This in-situ change in coordination denticity during photoexcitation was not observed previously and can rationalize the ability of MIP 177 LT to accumulate electrons during continuous photoexcitation, leading to superior photocatalytic activity.

Keywords: time-resolved FTIR, metal-organic framework, denticity, photocatalysis

Procedia PDF Downloads 48
638 Before Decision: Career Motivation of Teacher Candidates

Authors: Pál Iván Szontagh

Abstract:

We suppose that today, the motivation for the career of a pedagogue (including its existential, organizational and infrastructural conditions) differs from the level of commitment to the profession of an educator (which can be experienced informally, or outside of the public education system). In our research, we made efforts to address the widest possible range of student elementary teachers and to interpret their responses using different filters. In the first phase of our study, we analyzed first-year kindergarten teacher students' career motivation and commitment to the profession, and in the second phase, that of final-year kindergarten teacher candidates. In the third phase, we conducted surveys to explore students' motivation for the profession and the career path of a pedagogue in four countries of the Carpathian Basin (Hungary, Slovakia, Romania and Serbia). The surveys were conducted on 17 campuses of 11 Hungarian teacher training colleges and universities. Finally, we extended the survey to practicing graduates preparing for their on-the-job rating examination. As a summary, we searched for significant differences between the professional and career motivations of the three respondent groups (kindergarten teacher students, elementary teacher students and practicing teachers), i.e. the motivation factors that change the most with education and/or with time spent on the job. Based on our results, in all breakdowns, regardless of age group, training institute or, in part, geographical location and nationality, it is shown that the lack of social and financial esteem of the profession poses serious risks for the recruitment and retention of teachers.

Keywords: career motivation, career socialization, professional motivation, teacher training

Procedia PDF Downloads 128
637 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
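As a framework-level sketch of the detection idea described above (not the authors' actual architecture), the PyTorch code below shows how a reconstruction-error threshold could flag suspect inputs given an already-trained autoencoder; the model layout, threshold value and random batch are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Illustrative stand-in for the paper's autoencoder (layout assumed)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def flag_adversarial(model, images, threshold=0.014):
    """Flag images whose per-image reconstruction MSE exceeds a threshold
    (0.014 echoes the MSE quoted in the abstract; treat it as illustrative)."""
    recon = model(images)
    mse = ((recon - images) ** 2).mean(dim=(1, 2, 3))   # per-image MSE
    return mse, mse > threshold

model = TinyAutoencoder().eval()          # in practice: load trained weights
batch = torch.rand(4, 3, 256, 256)        # placeholder input batch
mse, suspect = flag_adversarial(model, batch)
print(mse.tolist(), suspect.tolist())
```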

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 101
636 Beak Size and Asynchronous Hatch in Broiler Chicks

Authors: Mariana Thimotheo, Gabriel Carvalho Ripamonte, Marina De Almeida Nogueira, Silvia Camila Da Costa Aguiar, Marcelo Henrique Santana Ulian, Euclides Braga Malheiros, Isabel Cristina Boleli

Abstract:

The beak plays a fundamental role in the hatching process of chicks, since it is used for internal and external pipping. The present study examined whether the size of the beak influences the birth period of broiler chicks within the hatching window. The beak size (length, height and width) of one hundred twenty-nine newly hatched chicks from light eggs (56.22-61.05 g) and one hundred twenty-six chicks from heavy eggs (64.95-70.90 g), produced by 38- and 45-week-old broiler breeders (Cobb 500®), respectively, was analyzed. Egg incubation occurred at 37.5°C and 60% RH, with egg turning every hour. Length, height and width of the beaks were measured using a digital caliper (Zaas precision digital caliper 6", 0.01 mm) and the data expressed in millimeters. The beak length corresponded to the distance between the tip of the beak and the rictus. The height of the beak was measured in the region of the culmen and its width in the region of the nostrils. Data were analyzed following a 3x2 factorial experimental design, with three birth periods within the hatching window (early: 471.78 to 485.42 h, intermediate: 485.43 to 512.27 h, and late: 512.28 to 528.72 h) and two egg weights (light and heavy). There was a significant interaction between birth period and egg weight for beak height (P < 0.05), which was higher in the intermediate chicks from heavy eggs than in the other chicks from the same egg weight and the chicks from light eggs (P < 0.05), which did not differ among themselves (P > 0.05). The beak length was influenced only by birth period and decreased through the hatching window (early < intermediate < late) (P < 0.05). The width of the beaks was influenced by both main factors, birth period and egg weight (P < 0.05). Early and intermediate chicks had similar beak widths, but greater than late chicks, and chicks from heavy eggs presented greater beak width than chicks from light eggs (P < 0.05). In sum, the results show that chicks with longer beaks hatch first and that beak length is an important variable for hatching period determination, mainly for light eggs.
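A minimal sketch of the 3x2 factorial analysis described above, using statsmodels on made-up data; the column names and values are placeholders, not the study's measurements.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder data: beak height (mm) by birth period and egg weight class.
df = pd.DataFrame({
    "height": [7.1, 7.3, 7.6, 7.4, 7.2, 7.0, 7.2, 7.5, 7.8, 7.6, 7.1, 7.0],
    "period": ["early", "early", "intermediate", "intermediate", "late", "late"] * 2,
    "egg":    ["light"] * 6 + ["heavy"] * 6,
})

# 3x2 factorial: main effects of period and egg weight plus their interaction.
model = ols("height ~ C(period) * C(egg)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # F and p-values for period, egg, and period:egg
```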

Keywords: beak dimensions, egg weight, hatching period, hatching window

Procedia PDF Downloads 160
635 LAMOS - Layered Amorphous Metal Oxide Gas Sensors: New Interfaces for Gas Sensing Applications

Authors: Valentina Paolucci, Jessica De Santis, Vittorio Ricci, Giacomo Giorgi, Carlo Cantalini

Abstract:

Despite their potential in gas sensing applications, the major drawback of 2D exfoliated metal dichalcogenides (MDs) is that they suffer from spontaneous oxidation in air, showing poor chemical stability under dry/wet conditions even at room temperature, which limits their practical exploitation. The aim of this work is to validate a synthesis strategy allowing microstructural and electrical stabilization of the oxides that inevitably form on the surface of 2D dichalcogenides. Taking advantage of the spontaneous oxidation of MDs in air, we report on liquid-phase exfoliated 2D-SnSe2 flakes annealed in static air at a temperature below the crystallization temperature of the native a-SnO2 oxide. This process yields a new class of 2D Layered Amorphous Metal Oxide Sensors (LAMOS), specifically few-layered amorphous a-SnO2, showing excellent gas sensing properties. Sensing tests were carried out at a low operating temperature (i.e., 100°C) by exposing a-SnO2 to both oxidizing and reducing gases (i.e., NO2, H2S and H2) and different relative humidities ranging from 40% to 80% RH. The formation of stable nanosheets of amorphous a-SnO2 guarantees excellent reproducibility and stability of the response over one year. These results open interesting new research perspectives, considering the opportunity to synthesize homogeneous amorphous textures with no grain boundaries, no grains, no crystalline planes with different orientations, etc., following gas sensing mechanisms that likely differ from those of traditional crystalline metal oxide sensors. Moreover, the controlled annealing process could likely be extended to a large variety of Transition Metal Dichalcogenides (TMDs) and Metal Chalcogenides (MCs), where sulfur, selenium, or tellurium atoms can be easily displaced by oxygen atoms (ΔG < 0), enabling the synthesis of a new family of amorphous interfaces.

Keywords: layered 2D materials, exfoliation, lamos, amorphous metal oxide sensors

Procedia PDF Downloads 112
634 Advanced Structural Analysis of Energy Storage Materials

Authors: Disha Gupta

Abstract:

The aim of this research is to apply X-ray and e-beam characterization techniques to lithium-ion battery materials for the improvement of battery performance. The key characterization techniques employed are synchrotron X-ray absorption spectroscopy (XAS) combined with X-ray diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM), to obtain a more holistic approach to understanding material properties. This research effort provides additional battery characterization knowledge that promotes the development of new cathode, anode, electrolyte and separator materials for batteries, hence leading to better and more efficient battery performance. Both ex-situ and in-situ synchrotron experiments were performed on LiFePO₄, one of the most common cathode materials, from different commercial sources, and their structural analysis was conducted using the Athena/Artemis software. This analysis technique was then further extended to study other cathode materials such as LiMnₓFe₁₋ₓPO₄ and even some sulphate systems such as Li₂Mn(SO₄)₂ and Li₂Co₀.₅Mn₀.₅(SO₄)₂. XAS data were collected at the Fe and P K-edges for LiFePO₄, and at the Fe, Mn and P K-edges for LiMnₓFe₁₋ₓPO₄, to conduct an exhaustive study of the structure. For the sulphate system Li₂Mn(SO₄)₂, XAS data were collected at both the Mn and S K-edges. Finite Difference Method for Near Edge Structure (FDMNES) simulations were also conducted for various iron, manganese and phosphate model compounds and compared with the experimental XANES data to understand mainly the pre-edge structural information of the absorbing atoms. The Fe K-edge XAS results showed a charge compensation occurring on the Fe atom for all the differently synthesized LiFePO₄ materials as well as the LiMnₓFe₁₋ₓPO₄ systems. However, the Mn K-edge showed a difference in results as the Mn concentration changed in the materials. For the sulphate-based system Li₂Mn(SO₄)₂, however, no change in the Mn K-edge was observed, even though electrochemical studies showed Mn redox reactions.

Keywords: li-ion batteries, electrochemistry, X-ray absorption spectroscopy, XRD

Procedia PDF Downloads 144
633 Comparison Study of Capital Protection Risk Management Strategies: Constant Proportion Portfolio Insurance versus Volatility Target Based Investment Strategy with a Guarantee

Authors: Olga Biedova, Victoria Steblovskaya, Kai Wallbaum

Abstract:

In the current capital market environment, investors constantly face the challenge of finding a successful and stable investment mechanism. Highly volatile equity markets and extremely low bond returns bring about the demand for sophisticated yet reliable risk management strategies. Investors are looking for risk management solutions to efficiently protect their investments. This study compares a classic Constant Proportion Portfolio Insurance (CPPI) strategy to a Volatility Target portfolio insurance (VTPI) strategy. VTPI is an extension of the well-known Option Based Portfolio Insurance (OBPI) to the case where an embedded option is linked not to a pure risky asset, such as the S&P 500, but to a Volatility Target (VolTarget) portfolio. The VolTarget strategy is a recently emerged rule-based dynamic asset allocation mechanism in which the portfolio's volatility is kept under control. As a result, a typical VTPI strategy allows higher participation rates in the market due to reduced embedded option prices. In addition, controlled volatility levels eliminate the volatility spread in option pricing, one of the frequently cited reasons why the OBPI strategy falls behind CPPI. The strategies are compared within the framework of stochastic dominance theory based on numerical simulations, rather than on the restrictive assumption of Black-Scholes-type dynamics of the underlying asset. An extended comparative quantitative analysis of the performance of the above investment strategies in various market scenarios and within a range of input parameter values is presented.
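As background for readers unfamiliar with CPPI, a minimal simulation of the classic strategy is sketched below; the multiplier, floor, and return model are generic textbook-style assumptions and do not reproduce the authors' VolTarget comparison or their stochastic dominance analysis.

```python
import numpy as np

def simulate_cppi(returns, v0=100.0, floor_frac=0.8, multiplier=4.0, rf=0.0):
    """Classic CPPI: risky exposure = multiplier * (portfolio value - floor),
    rebalanced every period; the remainder earns the risk-free rate."""
    value, floor = v0, v0 * floor_frac
    path = [value]
    for r in returns:
        cushion = max(value - floor, 0.0)
        exposure = min(multiplier * cushion, value)   # no leverage in this sketch
        value = exposure * (1.0 + r) + (value - exposure) * (1.0 + rf)
        floor *= (1.0 + rf)                           # floor accrues at the risk-free rate
        path.append(value)
    return np.array(path)

# Illustrative weekly risky-asset returns (normally distributed placeholders).
rng = np.random.default_rng(1)
weekly_returns = rng.normal(0.0015, 0.025, size=52)
path = simulate_cppi(weekly_returns)
print(f"final value: {path[-1]:.2f}, minimum value along the path: {path.min():.2f}")
```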

Keywords: CPPI, portfolio insurance, stochastic dominance, volatility target

Procedia PDF Downloads 160
632 Field Evaluation of Pile Behavior in Sandy Soil Underlain by Clay

Authors: R. Bakr, M. Elmeligy, A. Ibrahim

Abstract:

When building loads are relatively small, challenges often face the foundation design, especially when unfavorable soil conditions exist. These may take the form of soft soil in the upper layers while sandy soil or firm cohesive soil exists in the deeper layers. In such cases, the design becomes infeasible if the piles are extended to the deeper layers, especially when sandy layers exist at shallower depths underlain by stiff clayey soil. In this research, models of piles terminated in sand underlain by clay soils are numerically simulated using different modelling theories. The finite element software Plaxis 3D Foundation was used to evaluate the pile behavior under different loading scenarios. The standard static load test according to ASTM D-1143 was simulated and compared with the real-life loading scenario. The results showed that the pile behavior obtained from the current static load test does not realistically represent that obtained from real-life loading. Attempts were made to capture the proper numerical loading scenario that simulates the pile behavior in real-life loading, including the long-term effect. A modified method based on these research findings is proposed for static pile loading tests. Field loading tests were carried out to validate the new method. Results obtained from both numerical and field tests using the modified method show that it is more accurate in predicting the pile behavior in sand underlain by clay than the current standard static load test.

Keywords: numerical simulation, static load test, pile behavior, sand underlain with clay, creep

Procedia PDF Downloads 315
631 Thermodynamic Analysis of Transcritical HTHP Cycles Using Eco-Friendly Refrigerant and Low-Grade Waste Heat Recovery: A Theoretical Evaluation

Authors: Adam Y. Sulaiman, Donal F. Cotter, Ming J. Huang, Neil J. Hewitt

Abstract:

Decarbonization of the industrial sector in developed countries has become indispensable for addressing climate change. Industrial processes including drying, distillation, and injection molding require process heat exceeding 180°C, rendering the subcritical high-temperature heat pump (HTHP) technique unsuitable. A transcritical HTHP utilizing ecologically friendly working fluids is a highly recommended system that combines high energy efficiency, an extended operational range, and decarbonization of the industrial sector. This paper delves into the possibility and feasibility of leveraging the HTHP system to provide up to 200°C of heat using R1233zd(E) as a working fluid. Using a steady-state model, various transcritical HTHP cycle configurations are theoretically compared, analyzed, and evaluated in this study. The heat transfer characteristics of the evaporator and gas cooler are investigated, as well as the cycle's energy, exergetic, and environmental performance. Using the LMTD method, the gas cooler's heat transfer coefficient, overall length, and heat transfer area were calculated. The findings indicate that the heat sink pressure level, as well as the waste heat temperature provided to the evaporator, have a significant impact on overall cycle performance. The investigation revealed the potential challenges and barriers, including the length of the gas cooler and the lubrication of the compression process. The basic transcritical HTHP cycle with an additional IHX was demonstrated to be the most efficient cycle across a range of heat source temperatures from 70 to 90°C, based on theoretical energetic and exergetic performance.
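As a small numeric illustration of the LMTD-based sizing mentioned above, the sketch below computes the log-mean temperature difference and a required heat-transfer area for a counter-flow gas cooler; the temperatures, duty, and overall heat-transfer coefficient are assumed example values, not results from the paper.

```python
import math

def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-flow heat exchanger."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Example values only (assumed, not from the study):
q = 250e3        # gas cooler duty, W
u = 900.0        # overall heat transfer coefficient, W/(m^2 K)
lmtd = lmtd_counterflow(t_hot_in=220.0, t_hot_out=95.0,
                        t_cold_in=80.0, t_cold_out=200.0)
area = q / (u * lmtd)           # Q = U * A * LMTD  ->  A = Q / (U * LMTD)
print(f"LMTD = {lmtd:.1f} K, required area = {area:.2f} m^2")
```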

Keywords: high-temperature heat pump, transcritical cycle, refrigerants, gas cooler, energy, exergy

Procedia PDF Downloads 154
630 Characteristics of Meiofaunal Communities in Intertidal Habitats Along Albanian Adriatic Sea Coast

Authors: Fundime Miri, Emanuela Sulaj

Abstract:

Benthic ecosystems constitute important ecological habitats, providing fundamental services for spawning, foraging, and sheltering aquatic organisms. Benthic faunal communities are characterized by a large biological diversity, supported by a great physical variety of benthic habitats. Until recently, the study of meiobenthic communities in Albania has been neglected, thus excluding an important component of the benthos. The present study aims to describe the distribution patterns of meiofaunal communities, with a further focus on nematode genus-based diversity, in different intertidal habitats along the Albanian Adriatic Sea coast. The investigation area extends from Shkodra to Vlora District, including six sandy sampling sites on beaches and in areas near river estuaries. Sediment samples were collected manually in the low intertidal zone using a cylindrical corer with an internal diameter of 5 cm. The richness at the meiofaunal major-taxon level did not show any significant change between sampling sites, in contrast to significant changes in nematode diversity at the genus level, with distinct nematode assemblages per sampling site and the presence of exclusive genera. All meiofaunal communities under study were dominated by nematodes. Further assessment of the functional diversity of the nematode assemblages also revealed changes in trophic groups and life strategies, owing to the diverse feeding behaviors and c-p values of the nematode genera. This study emphasizes the need for lower-level taxonomic identification of meiofaunal organisms and for extending ecological assessments to trophic diversity and life strategies in order to understand the functional consequences.

Keywords: benthos, meiofauna, nematode genus-based diversity, functional diversity, intertidal, albanian adriatic coast

Procedia PDF Downloads 141
629 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing the risks related to climate conditions in the agricultural sector is presented. It determines the yearly optimum premium to be paid by a producer in order to reach his required turnover. The model is based on both climatic stability and the 'soft' responses of usually grown species to average climate variations at the same place and inside a safety ball which can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit from the expert table drawn up with professionals, an optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The model also requires accurate pricing of the commodity at year N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables calibration of the state-variable parameters and allows the pricing of the commodity. Application to beet sugar underlines the pricer's precision. Indeed, the accuracy between the computed result and the real world is 99.5%. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer by insurance companies to effectively protect the harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing state parameters and calibrating their associated coefficients.
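A minimal sketch of the linear-regression step described above, fitting yearly production against average sunlight, rainfall, and temperature with ordinary least squares; the data values are invented placeholders and the 3-factor commodity pricing model is not reproduced here.

```python
import numpy as np

# Placeholder yearly records: [sunlight (h/day), rainfall (mm), temperature (C)].
X = np.array([
    [5.1, 620.0, 11.2],
    [4.8, 710.0, 10.9],
    [5.5, 580.0, 11.8],
    [5.0, 650.0, 11.0],
    [5.3, 600.0, 11.5],
])
y = np.array([78.0, 72.0, 84.0, 76.0, 81.0])   # production (t/ha), invented

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

print("intercept and coefficients:", np.round(coef, 3))
print("predicted vs actual:", np.round(pred, 1), y)
```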

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 365
628 The Role of Cultural Expectations in Emotion Regulation among Nepali Adolescents

Authors: Martha Berg, Megan Ramaiya, Andi Schmidt, Susanna Sharma, Brandon Kohrt

Abstract:

Nepali adolescents report tension and negative emotion due to perceived expectations of both academic and social achievement. These societal goals, which are internalized through early-life socialization, drive the development of self-regulatory processes such as emotion regulation. Emotion dysregulation is linked with adverse psychological outcomes such as depression, self-harm, and suicide, which are public health concerns for organizations working with Nepali adolescents. This study examined the relation among socialization, internalized cultural goals, and emotion regulation to inform interventions for reducing depression and suicide in this population. Participants included 102 students in grades 7 through 9 in a post-earthquake school setting in rural Kathmandu valley. All participants completed a tablet-based battery of quantitative measures, comprising transculturally adapted assessments of emotion regulation, depression, and self-harm/suicide ideation and behavior. Qualitative measures included two focus groups and semi-structured interviews with 22 students and 3 parents. A notable proportion of the sample reported depression symptoms in the past 2 weeks (68%), lifetime self-harm ideation (28%), and lifetime suicide attempts (13%). Students who lived with their nuclear family reported lower levels of difficulty than those who lived with more distant relatives (z=2.16, p=.03), which suggests a link between family environment and adolescent emotion regulation, potentially mediated by socialization and internalization of cultural goals. These findings call for further research into the aspects of nuclear versus extended family environments that shape the development of emotion regulation.

Keywords: adolescent mental health, emotion regulation, Nepal, socialization

Procedia PDF Downloads 262
627 Personalization of Context Information Retrieval Model via User Search Behaviours for Ranking Document Relevance

Authors: Kehinde Agbele, Longe Olumide, Daniel Ekong, Dele Seluwa, Akintoye Onamade

Abstract:

One major problem of most existing information retrieval systems (IRS) is that they provide uniform access and retrieval results to individual users, based solely on the query terms the user issues to the system. When using an IRS, users often present search queries made of ad-hoc keywords. It is then up to the IRS to obtain a precise representation of the user's information need and the context of that information. In effect, the volume and range of Internet documents are growing exponentially, which makes it difficult for a user to obtain information that precisely matches their interest. Diverse combination techniques are used to achieve this goal. This is due, firstly, to the fact that users often do not present queries to an IRS that optimally represent the information they want and, secondly, to the fact that the measure of a document's relevance is highly subjective across users. In this paper, we address the problem by investigating the optimization of an IRS to individual information needs in order of relevance. The paper addresses the development of algorithms that optimize the ranking of documents retrieved from an IRS, using a two-fold approach in order to retrieve domain-specific documents. The first element is the design of the context of information: the context of a query determines the relevance of retrieved information using personalization and context-awareness. Thus, executing the same query in diverse contexts often leads to diverse result rankings based on user preferences. Secondly, the relevant context aspects should be incorporated in a way that supports the knowledge domain representing users' interests. In this paper, the use of evolutionary algorithms is incorporated to improve the effectiveness of the IRS. A context-based information retrieval system that learns individual needs from user-provided relevance feedback is developed, and its retrieval effectiveness is evaluated using precision and recall metrics. The results demonstrate how attributes from user interaction behavior can be used to improve IR effectiveness.
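For concreteness, the evaluation metrics mentioned above can be computed as in the short sketch below, given the set of documents a system retrieved for a query and the set judged relevant; the document identifiers are placeholders.

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / retrieved; recall = relevant retrieved / relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Placeholder document ids for one query.
retrieved_docs = ["d1", "d3", "d4", "d7", "d9"]
relevant_docs = ["d1", "d2", "d4", "d9", "d10", "d12"]
p, r = precision_recall(retrieved_docs, relevant_docs)
print(f"precision = {p:.2f}, recall = {r:.2f}")   # 0.60, 0.50
```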

Keywords: context, document relevance, information retrieval, personalization, user search behaviors

Procedia PDF Downloads 451
626 Isolation and Characterisation of Novel Environmental Bacteriophages Which Target the Escherichia coli LamB Outer Membrane Protein

Authors: Ziyue Zeng

Abstract:

Bacteriophages are viruses which infect bacteria specifically. Over the past decades, phage λ has been extensively studied, especially its interaction with the Escherichia coli LamB (EcLamB) protein receptor. Nonetheless, despite the enormous numbers and near-ubiquity of environmental phages, aside from phage λ, there is a paucity of information on other phages which target EcLamB as a receptor. In this study, to answer the question of whether there are other EcLamB-targeting phages in the natural environment, a simple and convenient method was developed and used for isolating environmental phages which target a particular surface structure of a particular bacterium; in this case, the EcLamB outer membrane protein. From the enrichments with the engineered bacterial hosts, a collection of EcLamB-targeting phages (ΦZZ phages) were easily isolated. Intriguingly, unlike phage λ, an obligate EcLamB-dependent phage in the Siphoviridae family, the newly isolated ΦZZ phages alternatively recognised EcLamB or E. coli OmpC (EcOmpC) as a receptor when infecting E. coli. Furthermore, ΦZZ phages were suggested to represent new species in the Tequatrovirus genus in the Myoviridae family, based on phage morphology and genomic sequences. Most phages are thought to have a narrow host range due to their exquisite specificity in receptor recognition. With the ability to optionally recognise two receptors, ΦZZ phages were considered relatively promiscuous. Via the heterologous expression of EcLamB on the bacterial cell surface, the host range of ΦZZ phages was further extended to three different enterobacterial genera. Besides, an interesting selection of evolved phage mutants with a broader host range was isolated, and the key mutations involved in their evolution to adapt to new hosts were investigated by genomic analysis. Finally, and importantly, two ΦZZ phages were found to be putative generalised transducers, which could be exploited as tools for DNA manipulations.

Keywords: environmental microbiology, phage, microbe-host interactions, microbial ecology

Procedia PDF Downloads 88
625 Evaluation of Pesticide Residues in Honey from Cocoa and Forest Ecosystems in Ghana

Authors: Richard G. Boakye, Dara A Stanley, Mathavan Vickneswaran, Blanaid White

Abstract:

The cultivation of cocoa (Theobroma cacao), an important cash crop that contributes immensely towards the economic growth of several West African countries, depends almost entirely on pesticide application owing to the plant's vulnerability to pest and disease attacks. However, the extent to which pesticides applied for cocoa cultivation impact bees and bee products has rarely received attention in research. Through this study, the effects of pesticides applied for cocoa cultivation on honey in Ghana were examined by evaluating honey samples from cocoa and forest ecosystems in Ghana. An analysis of five honey samples from each land use type confirmed pesticide contaminants from these land use types at measured concentrations of acetamiprid (0.051 mg/kg), imidacloprid (0.004-0.02 mg/kg), thiamethoxam (0.013-0.017 mg/kg), indoxacarb (0.004-0.045 mg/kg) and sulfoxaflor (0.004-0.026 mg/kg). None of the observed pesticide concentrations exceeded EU maximum residue levels, indicating no compromise of the honey quality for human consumption. However, from the results, it could be inferred that toxic effects on bees cannot be ruled out, because the observed concentrations largely exceeded the threshold of 0.001 mg/kg at which sublethal effects on bees have previously been reported. One of the most remarkable results to emerge from this study is the detection of imidacloprid in all honey samples analyzed, with sulfoxaflor and thiamethoxam also detected in 93% and 73% of the honey samples, respectively. This suggests the probable prevalence of pesticide use in the landscape. However, the conclusions reached in this study should be interpreted within the scope of pesticide applications within the Bia West District and not necessarily extended to other cocoa-producing districts in Ghana. Future studies should therefore include multiple cocoa-growing districts and other non-cocoa farming landscapes. Such an approach can give a broader outlook on pesticide residues in honey produced in Ghana.

Keywords: honey, cocoa, pesticides, bees, land use, landscape, residues, Ghana

Procedia PDF Downloads 69
624 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation

Authors: Anshar Ajatasatru

Abstract:

The decline in coal prices over the past years has significantly increased the awareness of effective mining operations. A viable step must be undertaken to become more cost-competitive while striving for best mining practice, especially at the Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented when it is controlled by a contract rate on a unit basis of USD ($) per bcm. The method emerges from the idea of daily dozer push activity that continually shifts the overburden until the final target design set by mine planning. Volume is then calculated each time overburden is removed within a determined distance, using the cut-and-fill method with data from a high-precision GNSS system fitted to the dozer as guidance to ensure the optimum result of overburden removal. The accumulated daily-to-weekly dozer push volume is found to be 95 bcm which, multiplied by an average sell rate of $0.95, gives a monthly revenue of $90.25. Furthermore, the payment mechanism is based on push distance and push grade. The push distance interval determines rates that vary from $0.90 to $2.69 per bcm and are influenced by the push slope grade, from -25% to +25%. The payable rates for the dozer push operation follow currency adjustment and are added to the monthly overburden volume claim; therefore, the sell rate of overburden volume per bcm may fluctuate depending on the real-time exchange rate of the Jakarta Interbank Spot Dollar Rate (JISDOR). The results indicate that dozer push measurement can be one of the surface mining alternatives, since it enables refinement of the method of work, operating cost and productivity, apart from exposing the risk of poor rented-equipment performance. In addition, a contract-rate payment mechanism based on dozer push operation scheduling will ultimately deliver clients almost a 45% cost reduction in the form of low and consistent costs.
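A small sketch of how the volume-times-rate payment described above could be computed, with a hypothetical rate lookup by push distance and slope grade; the rate bands, uphill adjustment and exchange rate are invented for illustration, and only the overall $0.90-$2.69 per bcm range and the ±25% grade limit follow the abstract.

```python
def push_rate(distance_m, grade_pct):
    """Hypothetical rate lookup (USD/bcm) by push distance and slope grade.
    The bands below are invented; only the 0.90-2.69 range mirrors the abstract."""
    if not -25.0 <= grade_pct <= 25.0:
        raise ValueError("grade outside the -25% to +25% contract range")
    base = 0.90 if distance_m <= 50 else 1.60 if distance_m <= 100 else 2.69
    uphill_adjust = 1.10 if grade_pct > 10 else 1.0    # invented adjustment
    return round(base * uphill_adjust, 2)

def payment(volume_bcm, distance_m, grade_pct, usd_to_idr=15500.0):
    """Payable amount for a dozer push claim, in USD and IDR (exchange rate assumed)."""
    usd = volume_bcm * push_rate(distance_m, grade_pct)
    return usd, usd * usd_to_idr

usd, idr = payment(volume_bcm=95.0, distance_m=60, grade_pct=5.0)
print(f"claim: USD {usd:.2f} (~IDR {idr:,.0f})")
```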

Keywords: contract rate, cut-fill method, dozer push, overburden volume

Procedia PDF Downloads 302
623 Storage System Validation Study for Raw Cocoa Beans Using Minitab® 17 and R (R-3.3.1)

Authors: Anthony Oppong Kyekyeku, Sussana Antwi-Boasiako, Emmanuel De-Graft Johnson Owusu Ansah

Abstract:

In this observational study, the performance of a known conventional storage system was tested and evaluated for fitness for its intended purpose. The system has a scope extended to the storage of dry cocoa beans. System sensitivity, reproducibility and uncertainties are not known in detail. This study discusses the system performance in the context of existing literature on factors that influence the quality of cocoa beans during storage. Controlled conditions were defined precisely for the system to give a reliable baseline within specific established procedures. Minitab® 17 and the R statistical software (R-3.3.1) were used for the statistical analyses. The approach to the storage system testing was to observe and compare, through laboratory test methods, the quality of the cocoa bean samples before and after storage. The samples were kept in Kilner jars and the temperature of the storage environment controlled and monitored over a period of 408 days. Standard test methods used in the international cocoa trade, such as the cut test analysis, moisture determination with an Aqua-Boy KAM III model and bean count determination, were used for quality assessment. The data analysis assumed the entire population as a sample in order to establish a reliable baseline for the data collected. The study found a statistically significant mean value at the 95% confidence interval (CI) for the performance data analysed before and after storage for all variables observed. Correlational graphs showed a strong positive correlation for all variables investigated, with the exception of All Other Defects (AOD). The weak relationship between the before and after data for AOD had an explained variability of 51.8%, with the unexplained variability attributable to the uncontrolled condition of hidden infestation before storage. The current study concluded with a high performance criterion for the storage system.

Keywords: benchmarking performance data, cocoa beans, hidden infestation, storage system validation

Procedia PDF Downloads 168
622 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies in sports outcome prediction possess considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision-making in sports betting markets.
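A bare-bones sketch of the modelling step described above: an XGBoost regressor trained to predict point spread from engineered game features. The synthetic data, feature meanings and hyperparameters are placeholders rather than the study's actual pipeline.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in features (e.g., rolling offensive/defensive ratings, rest days).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
true_w = np.array([4.0, -3.0, 2.0, 0.5, -1.0, 0.0])
y = X @ true_w + rng.normal(scale=3.0, size=500)    # synthetic point spreads

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05,
                     subsample=0.8, objective="reg:squarederror")
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"MAE on held-out games: {mean_absolute_error(y_te, pred):.2f} points")
```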

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 87
621 Ratings of Hand Activity and Force Levels in Identical Hand-Intensive Work Tasks in Women and Men

Authors: Gunilla Dahlgren, Per Liv, Fredrik Öhberg, Lisbeth Slunga Järvholm, Mikael Forsman, Börje Rehn

Abstract:

Background: The accuracy of risk assessment tools in hand-repetitive work is important. This can support precision in the risk management process and a sustainable working life for women and men equally. Musculoskeletal disorders (MSDs) of the hand, wrist, and forearm are common in the working population. Women report a higher prevalence of MSDs in these regions. Objective: The objective of this study was to compare whether women and men who performed identical hand-intensive work tasks were rated equally using the Hand Activity Threshold Limit Value® (HA-TLV), both when self-rated and when observer-rated. Method: Fifty-six workers from eight companies participated, covering various intensities of hand-repetitive work tasks. In total, 18 unique identical hand-intensive work tasks were executed by 28 pairs of a woman and a man. Hand activity and force levels were assessed. Each worker executed the work task for 15 minutes, which was also video-recorded. Data were collected from workers who self-rated directly after the execution of the work task. Experienced observers also rated the same work tasks from the videos. For comparing means between women and men, paired samples t-tests were used. Results: The main results showed that there was no difference in self-ratings of hand activity level and force between women and men who executed the same work task. Further, there was no difference between observer ratings of hand activity level. However, the observer force ratings of women and men differed significantly (p=0.01). Conclusion: Hand activity and force levels are rated equally for women and men when self-rated, and also by observers for hand activity. However, it should be noted that observers rate force higher for women and lower for men. This indicates the need to compare force ratings with technical measurements.

Keywords: gender, equity, sex differences, repetitive strain injury, cumulative trauma disorders, upper extremity, exposure assessment, workload, health risk assessment, observation, psychophysics

Procedia PDF Downloads 116
620 A Case of Mantle Cell Lymphoma Presenting With GI Symptoms and Noted to Have Extranodal Involvement of the Stomach and Colon on Presentation

Authors: Saba Amreen Syeda, Summaiah Asim, Syeda Hafsa, Essam Quraishi

Abstract:

Mantle cell lymphoma (MCL) is a relatively uncommon type of lymphoma that comprises approximately 7 percent of non-Hodgkin lymphomas (NHL). Classic MCL presents mostly in lymph nodes and occasionally in extranodal sites. About 26% of MCL presents primarily in the gastrointestinal tract. While both the upper and the lower GI tract can be involved, it is rare for MCL to present with concurrent upper and lower GI involvement. We present the case of a 51-year-old Asian Indian male who presented to our clinic with complaints of chronic diarrhea of one year's duration, progressively worsening over the past three months. The patient also reported black stools as well as bright red blood per rectum, and severe fatigue on minimal exertion. On physical exam, the patient was noted to have matted lymphadenopathy in the neck. The patient was anemic, with a hemoglobin of 8 g/dL. Esophagogastroduodenoscopy and colonoscopy were performed. EGD showed a large 4 cm ulcer in the gastric antrum with thick, heaped-up edges. There was bleeding on contact. Colonoscopy showed a large 35 mm multilobulated polyp in the ascending colon, which was biopsied. The patient was also noted to have nodular proctitis in the mid rectum. This was localized and extended over about 5 cm. This area was biopsied as well. Biopsies from the stomach, colon and rectum all returned with findings of mantle cell lymphoma on pathology. Lymphoid cells in the biopsy stained strongly positive for CD20, cyclin D1, and CD5. Staining for CD3 and CD10 was absent, and the IHC stain for CD23 was negative. Biopsies of the neck lymphadenopathy were obtained and were also positive for MCL. The patient was referred to oncology for staging and treatment.

Keywords: mantle cell lymphoma, GI bleed, diarrhea, gastric ulcer, colon polyp

Procedia PDF Downloads 142
619 Intermediate-Term Impact of Taiwan High-Speed Rail (HSR) and Land Use on Spatial Patterns of HSR Travel

Authors: Tsai Yu-hsin, Chung Yi-Hsin

Abstract:

The deployment of an HSR system, by raising inter-city/-region accessibility, is likely to promote spatial interaction between places in the HSR territory and beyond. Inter-city/-region travel via HSR can be affected by, among other factors, the land use, transportation, and location of the HSR station at both the trip origin and destination ends. However, relatively little insight has been gained into these impacts and into the spatial patterns of HSR travel. As phase one of a series of HSR-related research, the purposes of this study are threefold: to analyze the general spatial patterns of HSR trips, such as the spatial distribution of trip origins and destinations; to analyze whether specific land use, transportation characteristics, and trip characteristics affect HSR trips in terms of the use of HSR and the distribution of trip origins and destinations; and to analyze the socio-economic characteristics of HSR travelers. With the Taiwan HSR starting operation in 2007, this study emphasizes the intermediate-term impact of HSR, which is made possible by the population and housing census, the industry and commercial census data, and a station-area intercept survey conducted in the summer of 2014. The analysis will be conducted at the city, inter-city, and inter-region spatial levels, as necessary and required. The analysis tools include descriptive statistics and multivariate analysis with the assistance of SPSS, HLM, and ArcGIS. On the one hand, the findings can provide policy implications for associated land use and transportation plans and for the site selection of HSR stations. On the other hand, the findings on travel are expected to provide insights that can help explain how land use and real estate values could be affected by HSR in the following phases of this series of research.

Keywords: high speed rail, land use, travel, spatial pattern

Procedia PDF Downloads 453
618 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available, enabling the creation of a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove irrelevant components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we proposed a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require longer execution time, as the word embedding algorithm adds complexity to the overall system.
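
To make the pipeline concrete, the hedged sketch below shows one plausible form of the final classification stage: tokenized function text mapped through an embedding layer and a BiLSTM that outputs a vulnerable/safe decision. It is illustrative only and not the authors' implementation; the vocabulary size, token IDs, and dimensions are hypothetical, and the embeddings are learned from scratch rather than loaded from GloVe/fastText.

# Illustrative sketch: embedding + BiLSTM classifier for tokenized C/C++ functions.
import torch
import torch.nn as nn

class BiLSTMVulnClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        # In a GloVe/fastText pipeline these weights would be pre-trained;
        # here they are learned from scratch for simplicity.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)                  # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)                  # hidden: (2, batch, hidden_dim)
        features = torch.cat([hidden[0], hidden[1]], dim=1)   # concatenate both directions
        return self.classifier(features)                      # (batch, num_classes)

# Hypothetical usage: a batch of two token-ID sequences of length 6.
model = BiLSTMVulnClassifier(vocab_size=5000)
batch = torch.randint(1, 5000, (2, 6))
logits = model(batch)
print(logits.argmax(dim=1))   # predicted labels: 0 = safe, 1 = vulnerable (by convention here)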

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 78
617 Tamukkana, Ancient Achaemenid City near the Persian Gulf

Authors: Ghulamhossein Nezami

Abstract:

Civilizations based in Iran, especially in the south, have always recognized the overall importance of the Persian Gulf and, for various reasons, have paid close attention to it. The first of these was the pre-Aryan state of Ilam, with the coastal province of Sharihum and the city of Lian (now the port of Bushehr), which valued the region for trade, defense, and religion. After the Achaemenids established themselves across the entire Iranian plateau, with their center in Persia, they created several communication routes from Parseh to the shores of the Persian Gulf, ending in present-day Bushehr province. A road along the coastal plain extended this coastal area southward to the ports of Ausinze (according to Ptolemy, the port of Siraf before the Sassanids), Epstane, and Hormozia in the present-day Strait of Hormuz. Meanwhile, new historical documents testify to the extraordinary importance of the ancient city of Tamukkana in the Achaemenid period, especially under Darius I, owing to its strategic position linking the coastal areas with the political-administrative center of the Achaemenids. New archaeological evidence, research, and excavations show that both the famous Achaemenid kings and the courtiers paid special attention to Tamukkana. The discovery in this region of a tomb and three Achaemenid palaces, dating from before the reign of Cyrus to that of Xerxes, demonstrates the extraordinary strategic, defensive, and commercial importance of the region to the Achaemenids. Thus, the city of Tamukkana in the Dashtestan region of present-day Bushehr province became an important Achaemenid center on the Persian Gulf coast, serving as the political-economic center of gravity of the Achaemenids and the regulator of communication networks along the coast. This shows that the Achaemenids attached importance to their economic goals and to the oversight of their vast territory via the Persian Gulf. Methods: literature sources and field study.

Keywords: Achaemenids, Bushehr, Persian Gulf, Tamukkana

Procedia PDF Downloads 183
616 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain possible early signals about events that are occurring or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals indicating that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, constituting more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated that will help in the process of classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the analysis modules of PIPE, such as Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
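
As a minimal illustration of the Petri net formalism applied to evacuation (a hedged sketch, not the authors' PIPE models), the following fragment represents rooms and a corridor as places holding tokens (people) and movements as transitions; the place names, initial marking, and firing policy are hypothetical.

# Illustrative sketch: token-firing semantics of a tiny evacuation Petri net.
# Marking: number of tokens (people) per place.
marking = {"room_A": 12, "room_B": 8, "corridor": 0, "exit": 0}

# Transitions: (inputs, outputs), each a dict of place -> tokens consumed/produced.
transitions = {
    "leave_A": ({"room_A": 1}, {"corridor": 1}),
    "leave_B": ({"room_B": 1}, {"corridor": 1}),
    "reach_exit": ({"corridor": 1}, {"exit": 1}),
}

def enabled(name):
    # A transition is enabled when every input place holds enough tokens.
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    # Firing consumes tokens from input places and produces tokens in output places.
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

# Fire enabled transitions until the net is dead (everyone has reached the exit).
steps = 0
while any(enabled(t) for t in transitions):
    for t in transitions:
        if enabled(t):
            fire(t)
    steps += 1

print(steps, marking)   # number of steps and the final marking, e.g. all 20 tokens in "exit"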

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 193
615 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as 3, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals, with 100% confirmation. In addition, it has been successfully applied to the Kepler Object of Interest (KOI) data and has identified a few new Earth-sized, ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
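
For illustration, the hedged sketch below shows the basic folding idea that underlies period-search methods such as Fast Folding: phase-folding a light curve at a trial period and binning it so that repeated shallow transits stack above the noise. It is a plain NumPy illustration with synthetic values, not the authors' GPU implementation.

# Illustrative sketch only: phase-fold and bin a synthetic light curve at a trial period.
import numpy as np

rng = np.random.default_rng(0)
time = np.arange(0.0, 1400.0, 0.0204)                 # ~4 years of ~29.4-minute cadence, in days
flux = 1.0 + 5e-4 * rng.standard_normal(time.size)    # flat light curve with Gaussian noise

true_period, depth, duration = 365.0, 1e-3, 0.3       # a long-period, shallow transit (days, relative flux, days)
flux[(time % true_period) < duration] -= depth        # inject roughly 4 transits over the baseline

def fold_and_bin(time, flux, period, n_bins):
    # Fold the light curve at a trial period and average the flux in each phase bin.
    phase = (time % period) / period                   # phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return np.array([flux[bins == b].mean() for b in range(n_bins)])

# Choose the bin width to roughly match the transit duration so the dip is not diluted.
n_bins = int(true_period / duration)
profile = fold_and_bin(time, flux, true_period, n_bins)
print("deepest bin dip:", 1.0 - profile.min())         # close to the injected depth at the correct period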

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 211