Search results for: Nonlinear Channel equalization
418 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media
Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding
Abstract:
A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e., it depends only on the bulk properties of the disordered granular sample, the sound wave velocity of the granular sample and hence its bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since the coherent wavefront is system configuration independent, ensemble averaging has been used for improving the signal quality of the coherent pulse and removing the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation
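The following Python sketch illustrates the kind of simulation described in this abstract: a one-dimensional chain of spheres with Gaussian polydispersity interacting through a Hertzian contact law, excited by an initial velocity pulse. It is a minimal illustration, not the authors' code, and every parameter value is an assumption chosen only to make the example run.

```python
import numpy as np

# Minimal sketch of a polydisperse Hertzian granular chain (velocity-Verlet integration).
rng = np.random.default_rng(0)
N, dt, steps = 64, 1e-6, 20000
E_star = 1e9                      # assumed effective contact stiffness parameter (Pa)
rho = 2500.0                      # assumed particle density (kg/m^3)
mean_r, std_r = 1e-3, 0.1e-3      # mean radius and disorder (standard deviation)

r = rng.normal(mean_r, std_r, N).clip(0.5 * mean_r)   # polydisperse radii
m = rho * 4.0 / 3.0 * np.pi * r**3
x = np.cumsum(2 * r) - r                               # just-touching chain positions
v = np.zeros(N)
v[0] = 0.01                                            # initial pulse amplitude (non-linearity)

def forces(x):
    f = np.zeros(N)
    for i in range(N - 1):
        delta = (r[i] + r[i + 1]) - (x[i + 1] - x[i])  # contact overlap
        if delta > 0:
            r_eff = r[i] * r[i + 1] / (r[i] + r[i + 1])
            fc = (4.0 / 3.0) * E_star * np.sqrt(r_eff) * delta**1.5   # Hertz law
            f[i] -= fc
            f[i + 1] += fc
    return f

a = forces(x) / m
for _ in range(steps):            # velocity-Verlet time stepping
    x += v * dt + 0.5 * a * dt**2
    a_new = forces(x) / m
    v += 0.5 * (a + a_new) * dt
    a = a_new
# Averaging particle velocities over many radius realizations (ensembles) would isolate
# the coherent pulse from the configuration-dependent coda, as described in the abstract.
```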
Procedia PDF Downloads 204
417 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information in the environment for self-positioning and mapping. It is widely used in computer vision, robotics and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in actual situations, the constant velocity motion model is often difficult to satisfy, which may lead to a large deviation between the obtained initial pose and the real value, and in turn to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decoupled the pose transformation matrix and calculated the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the changes in angular velocity and translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sets of sequences on the TUM dataset. The results showed that our proposed method gives a more accurate initial pose estimation, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46%, respectively, on the two test sequences.
Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
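A minimal sketch of the constant-acceleration prediction idea is given below: the frame-to-frame change in rotation (as a rotation vector) and in translation is itself assumed to change by a constant amount, so the next initial pose is extrapolated from the three previous poses. This is an assumed illustration of the idea, not the authors' ORB-SLAM3 code, and the toy poses are invented.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_pose(R_list, t_list):
    """Predict the pose of frame k from the poses of frames k-3, k-2, k-1 (oldest first)."""
    R2, R1, R0 = R_list            # oldest ... newest rotations
    t2, t1, t0 = t_list            # oldest ... newest translations

    dR1 = (R0 * R1.inv()).as_rotvec()     # latest rotational "velocity" (rotation vector)
    dR2 = (R1 * R2.inv()).as_rotvec()     # previous rotational "velocity"
    dR_pred = 2 * dR1 - dR2               # constant change assumption -> extrapolated step

    dt1 = t0 - t1                          # latest translational velocity
    dt2 = t1 - t2
    dt_pred = 2 * dt1 - dt2                # constant translational acceleration

    R_pred = R.from_rotvec(dR_pred) * R0
    t_pred = t0 + dt_pred
    return R_pred, t_pred

# toy usage with made-up poses
Rs = [R.from_rotvec([0, 0, 0.00]), R.from_rotvec([0, 0, 0.02]), R.from_rotvec([0, 0, 0.05])]
ts = [np.zeros(3), np.array([0.10, 0, 0]), np.array([0.22, 0, 0])]
R_k, t_k = predict_pose(Rs, ts)
print(R_k.as_rotvec(), t_k)
```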
Procedia PDF Downloads 94
416 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid
Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong
Abstract:
Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia renewable energy sources into the grid and their volatility. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric Vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. The time is not adequate to communicate with each individual EV in emergency cases, and thus, an aggregate model is necessary for a quick response to prevent large frequency deviations and the occurrence of any blackout. In this work, an aggregate of EVs is modelled as a big virtual battery in each area, considering various aspects of uncertainty such as the number of connected EVs and their initial State of Charge (SOC) as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs could help the generator lose its kinetic energy in a short time after a contingency. Earlier estimation of possible contributions of EVs can help the supervisory control level to transmit a prompt control signal to the subsystems such as the aggregator agents and the grid. Thus, the percentage of EV contribution to EFR will be characterized in future work as the goal of this study.
Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function
Procedia PDF Downloads 100
415 Experimental Validation of Computational Fluid Dynamics Used for Pharyngeal Flow Patterns during Obstructive Sleep Apnea
Authors: Pragathi Gurumurthy, Christina Hagen, Patricia Ulloa, Martin A. Koch, Thorsten M. Buzug
Abstract:
Obstructive sleep apnea (OSA) is a sleep disorder where the patient suffers a disturbed airflow during sleep due to partial or complete occlusion of the pharyngeal airway. Recently, numerical simulations have been used to better understand the mechanism of pharyngeal collapse. However, to gain confidence in the solutions so obtained, an experimental validation is required. Therefore, in this study an experimental validation of computational fluid dynamics (CFD) used for the study of human pharyngeal flow patterns during OSA is performed. A stationary incompressible Navier-Stokes equation solved using the finite element method was used to numerically study the flow patterns in a computed tomography-based human pharynx model. The inlet flow rate was set to 250 ml/s and such that a flat profile was maintained at the inlet. The outlet pressure was set to 0 Pa. The experimental technique used for the validation of CFD of fluid flow patterns is phase contrast-MRI (PC-MRI). Using the same computed tomography data of the human pharynx as in the simulations, a phantom for the experiment was 3 D printed. Glycerol (55.27% weight) in water was used as a test fluid at 25°C. Inflow conditions similar to the CFD study were simulated using an MRI compatible flow pump (CardioFlow-5000MR, Shelley Medical Imaging Technologies). The entire experiment was done on a 3 T MR system (Ingenia, Philips) with 108 channel body coil using an RF-spoiled, gradient echo sequence. A comparison of the axial velocity obtained in the pharynx from the numerical simulations and PC-MRI shows good agreement. The region of jet impingement and recirculation also coincide, therefore validating the numerical simulations. Hence, the experimental validation proves the reliability and correctness of the numerical simulations.Keywords: computational fluid dynamics, experimental validation, phase contrast-MRI, obstructive sleep apnea
Procedia PDF Downloads 311
414 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have been facing a significant challenge when predicting rail infrastructure maintenance work for a long period of time. Generally, maintenance monitoring and prediction is conducted manually. With the restrictions in economy, the rail transport authorities are in pursuit of improved modern methods, which can provide precise prediction of rail maintenance time and location. The expectation from such a method is to develop models to minimize the human error that is strongly related to manual prediction. Such models will help them in understanding how the track degradation occurs overtime under the change in different conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time that rail tracks fail in order to minimize the maintenance cost/time and secure the vehicles. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes and the data collection is done both electronically and manually, it is possible to have some errors. Sometimes these errors make it impossible to use them in prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks. Accurate models increase the track safety and decrease the cost of maintenance in long term. In this research, a short review of rail track degradation prediction models has been discussed before estimating rail track degradation for the curve sections of Melbourne tram track system using Adaptive Network-based Fuzzy Inference System (ANFIS) model.Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 335
413 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
The work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which opens up the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community. This is mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter had yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires larger acquisition times to produce well-resolved images. As a consequence, the measurement of moving objects remains out of the reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase-shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Depending on the reduction in the number of needed frames, the acquisition time reduces by a significant factor. This technique opens the door to the implementation of the scheme for moving objects. In particular, the application of this scheme in imaging live specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
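The sketch below shows the core arithmetic of quasi-parallel phase shifting in the form assumed here: the four phase steps are interleaved by the spatial light modulator in a 2x2 chessboard-like cell, so a single camera frame carries all four steps, and the object phase follows from one exposure via the standard four-step formula. The object phase and all sizes are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

H, W = 256, 256
yy, xx = np.mgrid[0:H, 0:W]
step_index = (yy % 2) * 2 + (xx % 2)          # which of the 4 phase steps each SLM pixel shows
slm_pattern = step_index * np.pi / 2          # structured phase image displayed on the SLM

# simulate one interference frame of an assumed smooth object phase
phi_true = 2 * np.pi * ((xx - W / 2) ** 2 + (yy - H / 2) ** 2) / (W * H)
frame = 1.0 + 0.8 * np.cos(phi_true + slm_pattern)

# de-interleave the single frame into the four phase-stepped sub-images
I0 = frame[0::2, 0::2]      # step 0
I1 = frame[0::2, 1::2]      # step pi/2
I2 = frame[1::2, 0::2]      # step pi
I3 = frame[1::2, 1::2]      # step 3*pi/2

# standard four-step phase-shifting formula, now obtained from a single acquisition
phi_rec = np.arctan2(I3 - I1, I0 - I2)
```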
Procedia PDF Downloads 114
412 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis
Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro
Abstract:
If risk management of the assets owned by companies, risk assessment of real estate portfolio, and risk identification of the entire region are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. In this research, the Sagami Trough earthquake tsunami that could have a significant effect on the Japanese capital region is focused on, and a method is proposed for simultaneous damage assessment using copulas that can take into consideration the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at two sites were simulated by using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depths (five cases) for each of 10 sources of the Sagami Trough. For each source, the frequency distributions of the tsunami inundation depth were evaluated by using the response surface method. Then, Monte-Carlo simulation was conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites for all sources of the Sagami Trough. These are marginal distributions. Kendall’s tau for the tsunami inundation simulation at two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated. Then, the simultaneous distributions of the damage rate were evaluated using the marginal distributions and the copulas. For the correlation of the tsunami inundation depth at the two sites, the expected value hardly changed compared with the case of no correlation, but the damage rate of the ninety-ninth percentile value was approximately 2%, and the maximum value was approximately 6% when using the Gumbel copula.Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis
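A rough sketch of the copula step is given below, using a Gaussian copula parameterized from the reported Kendall's tau of 0.83 (the paper also evaluates t, Clayton and Gumbel copulas). The marginal depth distributions and the fragility curve here are assumptions standing in for the tsunami-simulation-derived marginals and damage functions; the numbers do not reproduce the study's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
tau = 0.83
rho = np.sin(np.pi * tau / 2)                      # Gaussian-copula parameter from Kendall's tau

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)                              # copula sample: uniform margins, tau-dependence

# assumed lognormal marginal inundation-depth distributions at the two sites (metres)
depth1 = stats.lognorm.ppf(u[:, 0], s=0.6, scale=1.5)
depth2 = stats.lognorm.ppf(u[:, 1], s=0.6, scale=1.2)

def damage_rate(depth):
    # assumed lognormal fragility curve mapping depth to a building damage rate
    return stats.norm.cdf((np.log(np.maximum(depth, 1e-6)) - np.log(2.0)) / 0.7)

d1, d2 = damage_rate(depth1), damage_rate(depth2)
portfolio = 0.5 * (d1 + d2)                        # simultaneous (two-building portfolio) damage rate
print(portfolio.mean(), np.percentile(portfolio, 99), portfolio.max())
```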
Procedia PDF Downloads 143
411 A Laboratory Study into the Effects of Surface Waves on Freestyle Swimming
Authors: Scott Draper, Nat Benjanuvatra, Grant Landers, Terry Griffiths, Justin Geldard
Abstract:
Open water swimming has been an Olympic sport since 2008 and is growing in popularity world-wide as a low impact form of exercise. Unlike pool swimming, open water swimmers experience a range of different environmental conditions, including surface waves, variable water temperature, aquatic life, and ocean currents. This presentation will describe experimental research to investigate how freestyle swimming behaviour and performance is influenced by surface waves. A group of 12 swimmers were instructed to swim freestyle in the 54 m long wave flume located at The University of Western Australia’s Coastal and Offshore Engineering Laboratory. A variety of different regular waves were simulated, varying in height (up to 0.3 m), period (1.25 – 4s), and direction (with or against the swimmer). Swimmer’s velocity and acceleration, respectively, were determined from video recording and inertial sensors attached to five different parts of the swimmer’s body. The results illustrate how the swimmers stroke rate and the wave encounter frequency influence their forward speed and how particular wave conditions can benefit or hinder performance. Comparisons to simplified mathematical models provide insight into several aspects of performance, including: (i) how much faster swimmers can travel when swimming with as opposed to against the waves, and (ii) why swimmers of lesser ability are expected to be affected proportionally more by waves than elite swimmers. These findings have implications across the spectrum from elite to ‘weekend’ swimmers, including how they are coached and their ability to win (or just successfully complete) iconic open water events such as the Rottnest Channel Swim held annually in Western Australia.Keywords: open water, surface waves, wave height/length, wave flume, stroke rate
Procedia PDF Downloads 112
410 Study of the Hydrochemical Composition of Canal, Collector-Drainage and Ground Waters of Kura-Araz Plain and Modeling by GIS Method
Authors: Gurbanova Lamiya
Abstract:
The Republic of Azerbaijan is considered a region with limited water resources, as up to 70% of surface water is formed outside the country's borders, and most of its territory is an arid (dry) climate zone. It lies at the downstream end of transboundary flows and has the weakest natural water resources in the South Caucasus. It is essential to correctly assess the quality of natural, collector-drainage and groundwater of the area and their suitability for irrigation in order to properly carry out land reclamation measures, maintain the normal water-salt regime, and prevent repeated salinization. Through the 141-km-long main Mil-Mugan collector, groundwater, household waste, and floodwaters generated during floods and landslides are discharged into the Caspian Sea. The hydrochemical composition of the samples taken from the Sabir irrigation canal passing through the center of the Kura-Araz plain, the Main Mil-Mugan Collector, and the groundwater of the region, which we chose as our research object, was studied, and the obtained results were compared by period. A model is proposed that allows for a complete visualization of the primary materials collected for the study area; the established digital model provides all the capabilities needed for practical use. An extensive database was created with the ArcGIS 10.8 package, using publicly available Landsat satellite images as primary data in addition to ground surveys to build the model. The principles of construction of the geographic information system using modern GIS technology were developed, the boundary and initial conditions of the research area were evaluated, and forecasts and recommendations were given.
Keywords: irrigation channel, groundwater, collector, meliorative measures
Procedia PDF Downloads 72
409 Flood Mapping and Inundation on Weira River Watershed (in the Case of Hadiya Zone, Shashogo Woreda)
Authors: Alilu Getahun Sulito
Abstract:
Exceptional floods are now prevalent in many places in Ethiopia, resulting in a large number of human deaths and property destruction. The Lake Boyo watershed, in particular, has traditionally been vulnerable to flash floods. The goal of this research is to create flood and inundation maps for the Boyo Catchment. The integration of Geographic Information System (GIS) technology and the hydraulic model HEC-RAS was utilized to attain the objective. The peak discharge was determined using the Fuller empirical methodology for return periods of 5, 10, 15, and 25 years, and the results were 103.2 m3/s, 158 m3/s, 222 m3/s, and 252 m3/s, respectively. River geometry, boundary conditions, Manning's n values for the varying land cover, and peak discharge at the various return periods were all entered into HEC-RAS, and then an unsteady flow analysis was performed. The results of the unsteady flow analysis demonstrate that the water surface elevation in the longitudinal profile rises as the return period increases. The flood inundation maps clearly show that the greatest flood coverage on the right and left sides of the river was 15.418 km2 and 5.29 km2, respectively, for the 10-, 20-, 30-, and 50-year floods. High water depths typically occur along the main channel and progressively spread to the floodplains. The study also found that flood-prone areas were disproportionately concentrated on the river's right bank. As a result, combining GIS with hydraulic modelling to create a flood inundation map is a viable solution. The findings of this study can be used to protect the right bank of the Boyo River catchment near the Boyo Lake kebeles. Furthermore, it is critical to promote an early warning system in the kebeles so that people can be evacuated before a flood calamity happens.
Keywords: flood, Weira River, Boyo, GIS, HEC-GeoRAS, HEC-RAS, inundation mapping
Procedia PDF Downloads 47
408 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for visualization and segmentation of Electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy and cognition studies have been suggested in the literature and used for data classification. The proposed method, however, focuses mainly on a better presentation of the signal, which makes it a useful tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform-based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectrum power of different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second successively with different color codes. Segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
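A minimal sketch of the filtering and band-power segmentation pipeline described above is given below. It assumes a single synthetic channel, a 256 Hz sampling rate and a 50 Hz mains notch, and labels each one-second window by its dominant frequency band; it is not the authors' tool and omits the PCA and wavelet de-noising stages.

```python
import numpy as np
from scipy import signal

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy alpha-dominated channel

b, a = signal.butter(4, [0.1, 45], btype='bandpass', fs=fs)         # 0.1-45 Hz band-pass
x = signal.filtfilt(b, a, eeg)
bn, an = signal.iirnotch(50, 30, fs=fs)                             # mains notch
x = signal.filtfilt(bn, an, x)

bands = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30), 'gamma': (30, 45)}
win = fs                                   # one-second analysis window (the "time scale")
labels = []
for start in range(0, x.size - win + 1, win):
    seg = x[start:start + win]
    f, pxx = signal.welch(seg, fs=fs, nperseg=win)
    powers = {name: pxx[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in bands.items()}
    labels.append(max(powers, key=powers.get))     # segmentation rule: dominant band
print(labels[:10])    # in the real tool, one colour-coded label per second of signal
```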
Procedia PDF Downloads 397
407 Application of Continuum Damage Concept to Simulation of the Interaction between Hydraulic Fractures and Natural Fractures
Authors: Anny Zambrano, German Gonzalez, Yair Quintero
Abstract:
The continuum damage concept is used to study the interaction between hydraulic fractures and natural fractures. The objective is to represent the path of and relation between these two fracture types and to predict their complex behavior without the need to pre-define their direction, as occurs in other finite element applications, providing results more consistent with the physical behavior of the phenomenon. The approach uses finite element simulations through Abaqus software to model damage fracturing, i.e., the fracturing process by damage propagation in a rock. The phenomenon is modeled in two dimensions (2D), so that the fracture is represented by a line and the crack front by a point. It considers nonlinear constitutive behavior, finite strain, time-dependent deformation, complex boundary conditions, strain hardening and softening, and strain-based damage evolution in compression and tension. The complete governing equations are provided and the method is described in detail to permit readers to replicate all results. The model is compared to models that are published and available. Comparisons focus on five interactions between natural fractures (NF) and hydraulic fractures: fracture arrested at the NF, crossing the NF with or without offset, branching at intersecting NFs, branching at the end of the NF, and NF dilation due to shear slippage. The most significant new finding is that it is not necessary to use pre-defined propagation paths, and that the stress condition can be evaluated as a dominant factor in the process. This is important because it allows the generated complex hydraulic fractures to be modeled more realistically, and it can be a valuable tool to predict potential problems and different geometries of the fracture network during fracturing due to fluid injection.
Keywords: continuum damage, hydraulic fractures, natural fractures, complex fracture network, stiffness
Procedia PDF Downloads 343
406 Characteristics and Drivers of Greenhouse Gas (GHG) Emissions from China’s Manufacturing Industry: A Threshold Analysis
Abstract:
Only a handful of studies have used non-linear models to investigate the factors influencing greenhouse gas (GHG) emissions in China's manufacturing sectors, and the mechanism of correlation between economic development and GHG emissions has rarely been investigated quantitatively and systematically while considering the inherent differences among manufacturing sub-sectors. Considering these sectoral characteristics, the varying impacts of output on GHG emissions across manufacturing sub-sectors may be explained by different development modes in each sub-sector, such as investment scale, technology level and the level of international competition. In order to assess the environmental impact associated with any specific level of economic development and explore the factors that affect GHG emissions in China's manufacturing industry during the process of economic growth, this paper investigated the drivers of GHG emissions in China's manufacturing sectors at different stages of economic development, using the threshold Stochastic Impacts by Regression on Population, Affluence and Technology (STIRPAT) model. A data set from 28 manufacturing sectors covering an 18-year period was used. Results demonstrate that output per capita and investment scale contribute to increasing GHG emissions, while energy efficiency, R&D intensity and FDI mitigate GHG emissions. Results also verify the nonlinear effect of output per capita on emissions: (1) the Environmental Kuznets Curve (EKC) hypothesis is supported once the threshold point of RMB 31.19 million is surpassed; (2) the driving strength of output per capita on GHG emissions becomes stronger as the investment scale increases; (3) a threshold exists for energy efficiency, with a positive coefficient first and a negative coefficient later; (4) the coefficient of output per capita on GHG emissions decreases as R&D intensity increases; and (5) FDI shows a reduction in elasticity once its threshold is surpassed.
Keywords: China, GHG emissions, manufacturing industry, threshold STIRPAT model
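The sketch below illustrates, on synthetic data, the single-threshold regression idea behind a threshold STIRPAT specification: the coefficient on ln(output per capita) is allowed to switch when output per capita crosses a threshold, and the threshold is chosen by a grid search over candidate values that minimizes the residual sum of squares. It is an assumed, simplified illustration, not the paper's estimator, and the variables and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
ln_out = rng.normal(3.0, 1.0, n)                  # ln(output per capita), synthetic
controls = rng.normal(size=(n, 3))                # stand-ins for energy efficiency, R&D, FDI
gamma_true = 3.4
y = (1.0 + 0.8 * ln_out * (ln_out <= gamma_true) + 0.3 * ln_out * (ln_out > gamma_true)
     + controls @ np.array([-0.4, -0.2, -0.1]) + rng.normal(0, 0.3, n))   # ln(GHG), synthetic

def fit(gamma):
    X = np.column_stack([np.ones(n),
                         ln_out * (ln_out <= gamma),     # regime-1 coefficient
                         ln_out * (ln_out > gamma),      # regime-2 coefficient
                         controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return rss, beta

grid = np.quantile(ln_out, np.linspace(0.15, 0.85, 50))   # trim the tails, as is customary
fits = [fit(g) for g in grid]
best = int(np.argmin([rss for rss, _ in fits]))
print("estimated threshold:", round(grid[best], 2),
      "regime coefficients:", np.round(fits[best][1][1:3], 2))
```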
Procedia PDF Downloads 428
405 Seismic Vulnerability of Structures Designed in Accordance with the Allowable Stress Design and Load Resistant Factor Design Methods
Authors: Mohammadreza Vafaei, Amirali Moradi, Sophia C. Alih
Abstract:
The method selected for the design of structures not only can affect their seismic vulnerability but also can affect their construction cost. For the design of steel structures, two distinct methods have been introduced by existing codes, namely allowable stress design (ASD) and load resistant factor design (LRFD). This study investigates the effect of using the aforementioned design methods on the seismic vulnerability and construction cost of steel structures. Specifically, a 20-story building equipped with special moment resisting frame and an eccentrically braced system was selected for this study. The building was designed for three different intensities of peak ground acceleration including 0.2 g, 0.25 g, and 0.3 g using the ASD and LRFD methods. The required sizes of beams, columns, and braces were obtained using response spectrum analysis. Then, the designed frames were subjected to nine natural earthquake records which were scaled to the designed response spectrum. For each frame, the base shear, story shears, and inter-story drifts were calculated and then were compared. Results indicated that the LRFD method led to a more economical design for the frames. In addition, the LRFD method resulted in lower base shears and larger inter-story drifts when compared with the ASD method. It was concluded that the application of the LRFD method not only reduced the weights of structural elements but also provided a higher safety margin against seismic actions when compared with the ASD method.Keywords: allowable stress design, load resistant factor design, nonlinear time history analysis, seismic vulnerability, steel structures
Procedia PDF Downloads 269
404 Evaluation of Possible Application of Cold Energy in Liquefied Natural Gas Complexes
Authors: А. I. Dovgyalo, S. O. Nekrasova, D. V. Sarmin, A. A. Shimanov, D. A. Uglanov
Abstract:
Usually, liquefied natural gas (LNG) gasification is performed using atmospheric heat. In order to produce liquefied gas, a considerable amount of energy must be consumed (about 1 kW∙h per 1 kg of LNG). This study offers a number of solutions that allow the cold energy of LNG to be used. This paper evaluates the application of turbines installed behind the evaporator in an LNG complex; from their work, additional energy can be obtained and then converted into electricity. At an LNG consumption of G = 1000 kg/h, an expansion work capacity of about 10 kW can be reached. Here, an open Rankine cycle is realized, where a low-capacity cryo-pump (about 500 W) performs its normal function, providing the cycle pressure. The application of a Stirling engine within the LNG complex, also discussed, gives another possibility to realize the cold energy. Considering the fact that the efficiency coefficient of a Stirling engine reaches 50%, an LNG consumption of G = 1000 kg/h may yield a capacity of about 142 kW from such a thermal machine. The capacity of the pump required to compensate for pressure losses when LNG passes through the hydraulic channel will be 500 W. Apart from the above-mentioned converters, thermoelectric generating packages (TGP), which are now widely used, can be proposed. At present, the modern thermoelectric generator line provides electric capacity with a coefficient of efficiency of up to 15%. In the proposed complex, it is suggested to install the thermoelectric generator on the evaporator surface in such a way that the cold end is in contact with the evaporator's surface and the hot one with the atmosphere. At an LNG consumption of G = 1000 kg/h and the specified coefficient of efficiency, the capacity of the heat flow Qh will be about 32 kW. The derivable net electric power will be P = 4.2 kW, and the number of packages will amount to about 104 pieces. The calculations carried out demonstrate the promise of research in this field of propulsion plant development, as well as the potential for energy saving with the use of liquefied natural gas and other cryogenic technologies.
Keywords: cold energy, gasification, liquefied natural gas, electricity
Procedia PDF Downloads 273
403 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is a low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
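A compact sketch of the model comparison described above is given below, using scikit-learn. The predictors mirror those listed in the abstract, but the data here are synthetic placeholders standing in for the 1960-2021 Albanian series, and the scores do not reproduce the study's results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 62                                              # one row per year, 1960-2021
X = np.column_stack([
    rng.normal(12, 1.5, n),     # average temperature
    rng.normal(90, 25, n),      # average rainfall
    rng.normal(80, 20, n),      # fertilizer consumption
    rng.normal(400, 30, n),     # arable land
    rng.normal(140, 15, n),     # land under cereal production
    rng.normal(5, 1, n),        # nitrous oxide emissions
])
y = 3.0 - 0.2 * X[:, 0] + 0.01 * X[:, 2] + 0.005 * X[:, 4] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "multiple linear regression": LinearRegression(),
    "lasso": Lasso(alpha=0.01),
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:28s} R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```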
Procedia PDF Downloads 91
402 Bahrain Experience in Supporting Small and Medium Enterprises by the Utilization of E-Government
Authors: Najla Alhkalaf
Abstract:
The focus of this study is answering the following question: How do e-government services in Bahrain support the productivity of SMEs? This study examines the current E-government function in enhancing SME productivity in Bahrain through analysing the efficiency of e- government by viewing its facilitators and barriers from the perspective of different stakeholders. The study aims to identify and develop best practice guidelines with the end-goal of creating a standardised channel of communication between e-government and SMEs that fulfil the requirement of SME owners, and thus achieve the prime objective of e-government. E-government services for SMEs have been offered in Bahrain since 2005. However, the current services lack the required mechanism for SMEs to fully take advantage of these services because of lagging communication between service provider and end-user. E-government employees believe that a lack of awareness and trust are the main stumbling block, whereas the SME owners believe that there is a lack of sufficiency in the content and efficiency provided through e- services. A questionnaire has been created based on a pilot study that highlighted the main indicators of e-government efficiency and SMEs productivity as well as previous studies conducted on this subject. This allowed for quantitative data to be extracted. Also interviews were conducted with SME owners and government employees from both case studies, which formed the qualitative data for this study. The findings portray that both the service provider and service receiver largely agree on the existence of most of the technical and administrative barriers. However, the data reflects a level of dissatisfaction from the SME side, which contradicts with the perceived level of satisfaction from the government employees. Therefore, the data supports the argument that assures the existence of a communication gap between stakeholders. To this effect, this research would help build channels of communication between stakeholders, and then induces a plan unlocking the potential of e-government application. The conclusions of this study will help devise an optimised E-government strategy for Bahrain.Keywords: e-government, SME, e-services, G2B, government employees' perspective, entrepreneurs' perspective, enterprise
Procedia PDF Downloads 231
401 Fuzzy Climate Control System for Hydroponic Green Forage Production
Authors: Germán Díaz Flórez, Carlos Alberto Olvera Olvera, Domingo José Gómez Meléndez, Francisco Eneldo López Monteagudo
Abstract:
In recent decades, population growth has exerted great pressure on natural resources. Two of the scarcest and most difficult to obtain resources, arable land and water, are closely interrelated in satisfying the demand for food production. In Mexico, the agricultural sector accounts for more than 70% of water consumption. Therefore, maximizing the efficiency of current production systems is inescapable. It is essential to utilize techniques and tools that will enable significant savings of water, labor and fertilizer. In this study, we present a production module for hydroponic green forage (HGF), which is a viable alternative for the production of livestock feed in semi-arid and arid zones. In addition to the forage production module, the equipment has a climate and irrigation control system that operates on photovoltaic power. Climate control, irrigation and power management are based on fuzzy control techniques. Fuzzy control provides an accurate method for designing controllers for nonlinear dynamic physical phenomena such as temperature and humidity, as well as others such as lighting level, aeration and irrigation, using heuristic information. This work first describes the production of hydroponic green forage, suitable weather conditions and fertigation, and subsequently presents the design of the production module and the design of the controller. A simulation of the behavior of the production module and the results of actual operation of the equipment are presented, demonstrating the easy design, flexibility, robustness and low cost that this equipment represents for the primary sector.
Keywords: fuzzy, climate control system, hydroponic green forage, forage production module
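The sketch below shows, in plain numpy, one way a Mamdani-style fuzzy rule base like the one described above can be written: triangular membership functions on temperature and humidity, a small set of rules, and a centroid-style defuzzification that yields a ventilation duty cycle. The universes, rules and set points are assumptions for illustration only, not the module's actual firmware.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def ventilation_duty(temp_c, rh_pct):
    # fuzzify inputs (assumed universes for hydroponic green forage production)
    t_low, t_ok, t_high = tri(temp_c, 5, 15, 22), tri(temp_c, 18, 23, 28), tri(temp_c, 25, 35, 45)
    h_dry, h_wet = tri(rh_pct, 20, 40, 65), tri(rh_pct, 60, 85, 100)

    # rule base: firing strength = min of the antecedents, output = singleton duty cycle
    rules = [
        (min(t_high, h_wet), 0.9),   # hot and humid -> strong ventilation
        (min(t_ok, h_dry), 0.3),     # comfortable but dry -> gentle air exchange
        (t_low, 0.05),               # cold -> keep vents nearly closed
    ]
    # centroid-style defuzzification over the singleton outputs
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den

print(ventilation_duty(31.0, 88.0))   # hot, humid afternoon -> high duty cycle
print(ventilation_duty(20.0, 45.0))   # mild conditions -> low duty cycle
```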
Procedia PDF Downloads 397
400 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options
Authors: Wajih Abbassi, Zouhaier Ben Khelifa
Abstract:
The present research points towards the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100 and the Russell 2000, that were traded during the year 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from the cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution exhibits three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to exhibit both overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used the technique of nonlinear curve-fitting through the trust-region-reflective algorithm on a cross-section of options to estimate the structural parameters of the Kou jump-diffusion model.
Keywords: jump-diffusion process, Kou model, leptokurtic feature, trust-region-reflective algorithm, US index options
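The sketch below illustrates the two ingredients named in the abstract under stated assumptions: Monte Carlo simulation of terminal index levels under the Kou double-exponential jump-diffusion model, and a trust-region-reflective least-squares fit (scipy's method='trf') of the structural parameters to a few assumed call quotes. It is not the paper's estimator: a real calibration would use Kou's (semi-)closed-form pricing formula instead of noisy Monte Carlo and thousands of market prices rather than three invented ones.

```python
import numpy as np
from scipy.optimize import least_squares

def kou_terminal(S0, r, T, sigma, lam, p, eta1, eta2, n_paths=20_000, seed=0):
    """Terminal prices S_T under the Kou (2002) double-exponential jump-diffusion model."""
    rng = np.random.default_rng(seed)                 # common random numbers across calls
    W = rng.standard_normal(n_paths)
    N = rng.poisson(lam * T, n_paths)                 # number of jumps on each path
    idx = np.repeat(np.arange(n_paths), N)
    up = rng.random(N.sum()) < p                      # upward jump with probability p
    jumps = np.where(up, rng.exponential(1 / eta1, N.sum()),
                         -rng.exponential(1 / eta2, N.sum()))
    J = np.bincount(idx, weights=jumps, minlength=n_paths)           # summed log-jumps per path
    kappa = p * eta1 / (eta1 - 1) + (1 - p) * eta2 / (eta2 + 1) - 1  # E[e^Y] - 1
    drift = (r - 0.5 * sigma ** 2 - lam * kappa) * T                 # risk-neutral compensation
    return S0 * np.exp(drift + sigma * np.sqrt(T) * W + J)

def mc_call(params, S0, r, T, K):
    sigma, lam, p, eta1, eta2 = params
    ST = kou_terminal(S0, r, T, sigma, lam, p, eta1, eta2)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

strikes = np.array([1400.0, 1500.0, 1600.0])          # illustrative strikes
market = np.array([160.0, 95.0, 48.0])                # assumed observed call prices
residuals = lambda q: np.array([mc_call(q, 1500.0, 0.03, 0.5, K) for K in strikes]) - market
fit = least_squares(residuals, x0=[0.2, 1.0, 0.4, 10.0, 5.0],
                    bounds=([0.05, 0.0, 0.0, 1.1, 0.1], [1.0, 20.0, 1.0, 50.0, 50.0]),
                    method='trf')
print(np.round(fit.x, 3))                             # sigma, lambda, p, eta1, eta2
```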
Procedia PDF Downloads 429
399 A User Interface for Easiest Way Image Encryption with Chaos
Authors: D. López-Mancilla, J. M. Roblero-Villa
Abstract:
Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 90's as a new approach for signal encoding that differs from the conventional methods that use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need to secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Encryption based on chaos offers a new and efficient way to achieve fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and very easy way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector can be hidden within the chaotic vector. Once this is done, the result is reshaped back into an array with the original dimensions of the image. A statistical analysis of the encryption security of the images is performed, and an optimization stage is used to improve image encryption security while, at the same time, the image can be accurately recovered. The user interface uses the algorithms designed for the encryption of images, allowing the user to read an image from the hard drive or another external device. The user interface encrypts the image, allowing three modes of encryption. These modes are given by three different chaotic systems that the user can choose. Once the image is encrypted, it is possible to observe the security analysis and save the image on the hard disk. The main results of this study show that this simple method of encryption, using the optimization stage, allows an encryption security competitive with the complicated encryption methods used in other works. In addition, the user interface allows encrypting an image with chaos and transmitting it through any public communication channel, including the internet.
Keywords: image encryption, chaos, secure communications, user interface
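A minimal sketch of the basic idea is given below: the image is reshaped into a vector, a chaotic key stream of the same length is generated (here a logistic map, standing in for one of the several chaotic systems the interface lets the user choose), and the two are combined; running the same map with the same key recovers the image exactly. The map, seed and XOR combination are assumptions for illustration, not the paper's exact scheme or optimization stage.

```python
import numpy as np

def logistic_stream(n, x0=0.3141, r=3.99, burn=1000):
    """Generate n chaotic values from the logistic map, discarding an initial transient."""
    x, out = x0, np.empty(n)
    for _ in range(burn):
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, x0):
    flat = img.astype(np.uint8).reshape(-1)                  # image as an n-dimensional vector
    key = (logistic_stream(flat.size, x0) * 256).astype(np.uint8)
    cipher = np.bitwise_xor(flat, key)                       # hide the image vector in the chaotic vector
    return cipher.reshape(img.shape)                         # back to the original image dimensions

def decrypt(cipher, x0):
    return encrypt(cipher, x0)                               # XOR with the same stream restores the image

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # stand-in for an image read from disk
enc = encrypt(img, x0=0.3141)
assert np.array_equal(decrypt(enc, x0=0.3141), img)          # exact recovery with the right key
print(np.corrcoef(img.reshape(-1), enc.reshape(-1))[0, 1])   # near zero: simple statistical check
```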
Procedia PDF Downloads 489
398 Impact of Sovereign Debt Risk and Corrective Austerity Measures on Private Sector Borrowing Cost in Euro Zone
Authors: Syed Noaman Shah
Abstract:
The current paper evaluates the effect of external public debt risk on the borrowing cost of private non-financial firms in euro zone. Further, the study also treats the impact of austerity measures on syndicated-loan spreads of private firm followed by euro area member states to revive the economic growth in the region. To test these hypotheses, we follow multivariate ordinary least square estimation method to assess the effect of external public debt on the borrowing cost of private firms. By using foreign syndicated-loan issuance data of non-financial private firms from 2005 to 2011, we attempt to gauge how the private financing cost varies with high levels of sovereign external debt prevalent in the euro zone. Our results suggest significant effect of external public debt on the borrowing cost of private firm. In particular, an increase in external public debt by one standard deviation from its sample mean raises syndicated-loan spread by 89 bps. Furthermore, weak creditor rights protection prevalent in member states deepens this effect. However, we do not find any significant effect of domestic public debt on the private sector borrowing cost. In addition, the results show significant effect of austerity measures on private financing cost, both in normal and in crisis period in the euro zone. In particular, one standard deviation change in fiscal consolidation conditional mean reduces the syndicated-loan spread by 22 bps. In turn, it indicates strong presence of credibility channel due to austerity measures in euro area region.Keywords: corporate debt, fiscal consolidation, sovereign debt, syndicated-loan spread
Procedia PDF Downloads 412
397 Disclosure Extension of Oil and Gas Reserve Quantum
Authors: Ali Alsawayeh, Ibrahim Eldanfour
Abstract:
This paper examines the extent of disclosure of oil and gas reserve quantum in annual reports of international oil and gas exploration and production companies, particularly companies in untested international markets, such as Canada, the UK and the US, and seeks to determine the underlying factors that affect the level of disclosure on oil reserve quantum. The study is concerned with the usefulness of disclosure of oil and gas reserves quantum to investors and other users. Given the primacy of the annual report (10-k) as a source of supplemental reserves data about the company and as the channel through which companies disseminate information about their performance, the annual reports for one year (2009) were the central focus of the study. This comparative study seeks to establish whether differences exist between the sample companies, based on new disclosure requirements by the Securities and Exchange Commission (SEC) in respect of reserves classification and definition. The extent of disclosure of reserve is provided and compared among the selected companies. Statistical analysis is performed to determine whether any differences exist in the extent of disclosure of reserve under the determinant variables. This study shows that some factors would affect the extent of disclosure of reserve quantum in the above-mentioned countries, namely: company’s size, leverage and quality of auditor. Companies that provide reserves quantum in detail appear to display higher size. The findings also show that the level of leverage has affected companies’ reserves quantum disclosure. Indeed, companies that provide detailed reserves quantum disclosure tend to employ a ‘high-quality auditor’. In addition, the study found significant independent variable such as Profit Sharing Contracts (PSC). This factor could explain variations in the level of disclosure of oil reserve quantum between the contractor and host governments. The implementation of SEC oil and gas reporting requirements do not enhance companies’ valuation because the new rules are based only on past and present reserves information (proven reserves); hence, future valuation of oil and gas companies is missing for the market.Keywords: comparison, company characteristics, disclosure, reserve quantum, regulation
Procedia PDF Downloads 405
396 DOG1 Expression Is Common in Human Tumors: A Tissue Microarray Study on More than 15,000 Tissue Samples
Authors: Kristina Jansen, Maximilian Lennartz, Patrick Lebok, Guido Sauter, Ronald Simon, David Dum, Stefan Steurer
Abstract:
DOG1 (Discovered on GIST1) is a voltage-gated calcium-activated chloride and bicarbonate channel that is highly expressed in interstitial cells of Cajal and in gastrointestinal stromal tumors (GIST) derived from Cajal cells. To systematically determine in what tumor entities and normal tissue types DOG1 may be further expressed, a tissue microarray (TMA) containing 15,965 samples from 121 different tumor types and subtypes as well as 608 samples of 76 different normal tissue types were analyzed by immunohistochemistry. DOG1 immunostaining was found in 67 tumor types, including GIST (95.7%), esophageal squamous cell carcinoma (31.9%), pancreatic ductal adenocarcinoma (33.6%), adenocarcinoma of the Papilla Vateri (20%), squamous cell carcinoma of the vulva (15.8%) and the oral cavity (15.3%), mucinous ovarian cancer (15.3%), esophageal adenocarcinoma (12.5%), endometrioid endometrial cancer (12.1%), neuroendocrine carcinoma of the colon (11.1%) and diffuse gastric adenocarcinoma (11%). Low level-DOG1 immunostaining was seen in 17 additional tumor entities. DOG1 expression was unrelated to histopathological parameters of tumor aggressiveness and/or patient prognosis in cancers of the breast (n=1,002), urinary bladder (975), ovary (469), endometrium (173), stomach (233), and thyroid gland (512). High DOG1 expression was linked to estrogen receptor expression in breast cancer (p<0.0001) and the absence of HPV infection in squamous cell carcinomas (p=0.0008). In conclusion, our data identify several tumor entities that can show DOG1 expression levels at similar levels as in GIST. Although DOG1 is tightly linked to a diagnosis of GIST in spindle cell tumors, the differential diagnosis is much broader in DOG1 positive epithelioid neoplasms.Keywords: biomarker, DOG1, immunohistochemistry, tissue microarray
Procedia PDF Downloads 216
395 Scientific Expedition to Understand the Crucial Issues of Rapid Lake Expansion and Moraine Dam Instability Phenomena to Justify the Lake Lowering Effort of Imja Lake, Khumbu Region of Sagarmatha, Nepal
Authors: R. C. Tiwari, N. P. Bhandary, D. B. Thapa Chhetri, R. Yatabe
Abstract:
This research examines the various issues of lake expansion and stability of the moraine dam of Imja Lake. Imja Lake, considered one of the world's highest-altitude lakes (5010 m above m.s.l.) and located in the Khumbu, Sagarmatha region of Nepal (27.90° N and 86.90° E), has been reported as one of the fastest-growing glacier lakes in the Nepal Himalaya. The research explores the common phenomenon of lake expansion and the stability issues of the moraine dam in order to judge the necessity of lake lowering efforts, if any, in future for other glacier lakes in the Nepal Himalaya. For this, we have explored the root causes of rapid lake expansion along with the crucial factors responsible for the stability of the moraine mass. This research helps to understand the structure of the moraine dam and how the ice, water and moraine interactions affect the strength of the moraine dam. The nature of the permafrost layer and its effect on moraine dam stability is also studied here. The detailed geotechnical properties of the moraine mass of Imja Lake give a clear picture of the strength of the moraine material and its interactions. A stability analysis of the moraine dam considering the strong ground motion of the 7.8 Mw 2015 Barpak-Gorkha, Nepal earthquake and its major 7.3 Mw aftershock (Kodari, Sindhupalchowk-Dolakha border) has also been carried out here to understand the necessity of lake lowering efforts. A lake lowering effort was recently carried out by the Nepal Army by constructing an open channel and lowering the lake by 3 m, and it is believed that the entire region is now safe due to the continuous draining of lake water at this lowered level. However, this option does not seem adequate to offer a significant risk reduction to downstream communities, given the roughly 75 million cubic meters of water impounded with an average depth of 148.9 m.
Keywords: finite element method, glacier, moraine, stability
Procedia PDF Downloads 213
394 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the advance towards richer content and higher resolutions, its sheer volume poses serious obstacles to the objective of receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first end-to-end deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, repeating a single image instead of the duplicate images, recognizing and detecting minute changes using a generative adversarial network (GAN), and recording them with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than 50% in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
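The sketch below isolates the SVD step described above: each colour channel of a frame is factorized and only the k largest singular values are kept, shrinking the "utility matrix" stored for every retained (non-duplicate) frame. The frame is random and k is an arbitrary choice; this is a minimal illustration, not the authors' full CNN/GAN/LSTM pipeline.

```python
import numpy as np

def compress_channel(channel, k):
    """Keep only the k largest singular values (latent factors) of one colour channel."""
    U, s, Vt = np.linalg.svd(channel.astype(float), full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]            # factors actually stored

def decompress_channel(U, s, Vt):
    return np.clip(U @ np.diag(s) @ Vt, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8)   # stand-in for a video frame
k = 40
recon = np.empty_like(frame)
stored = 0
for c in range(3):                               # R, G, B channels handled independently
    U, s, Vt = compress_channel(frame[:, :, c], k)
    stored += U.size + s.size + Vt.size
    recon[:, :, c] = decompress_channel(U, s, Vt)

print(f"kept {stored / frame.size:.2%} of the original values, "
      f"mean abs. error {np.abs(recon.astype(int) - frame.astype(int)).mean():.1f}")
```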
Procedia PDF Downloads 187
393 Investigation of Free Vibrations of Opened Shells from Alloy D19: Assistance of the Associated Mass System
Authors: Oleg Ye Sysoyev, Artem Yu Dobryshkin, Nyein Sitt Naing
Abstract:
Cylindrical shells are widely used in the construction of buildings and structures, as well as in aircraft structures. Thin-walled shells made of aluminum alloys are an effective substitute for reinforced concrete and steel structures in construction. The correspondence of theoretical calculations with the actual behavior of aluminum alloy structures is essential to ensure their trouble-free operation. An experimental study was conducted in our university's "Building Constructions" laboratory to determine the effect of a system of attached masses on the natural oscillations of shallow cylindrical shells made of aluminum alloys, and the results were compared with theoretical calculations. The purpose of the experiment is to measure the free oscillations of an open, shallow cylindrical shell for various configurations of the attached masses. Oscillations of an open, shallow, thin-walled cylindrical shell, rectangular in plan, were measured using induction accelerometers. The theoretical calculation of the shell was carried out on the basis of the equations of motion of the theory of shallow shells, using the Bubnov-Galerkin method. A significant splitting of the flexural frequency spectrum is found, influenced not only by the systems of attached masses but also by the values of the wave formation parameters, which depend on the relative geometric dimensions of the shell. Agreement between analytical and experimental data is found using the example of an open shell of alloy D19, which allows us to speak of the high quality of the study. A qualitatively new analytical solution of the problem of determining the oscillation frequency of a shell carrying a system of attached masses is presented.
Keywords: open hollow shell, nonlinear oscillations, associated mass, frequency
Procedia PDF Downloads 295
392 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of wireless access network, the computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IOV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing total system cost. In view of the problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influences of multiple factors such as concurrent multiple computation tasks, system computing resources distribution, and network communication bandwidth. And, the mixed integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize computing offloading and resource allocation schemes, and improve system resource utilization, compared with other computing offloading policies.Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network
Procedia PDF Downloads 118
391 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem worldwide, interest in carbon dioxide (CO2) emissions, which comprise the major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the world's total CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance-based design (PBD) methodology based on nonlinear analysis has been developed extensively since the Northridge Earthquake in 1994 to assess and assure the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly or assure building performance exactly. Although CO2 emissions and the PBD approach are rising issues in the construction industry and structural engineering, there has been little or no research considering these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance level (collapse prevention under the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design results showed that minimized CO2 emissions and building cost were achieved while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
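The sketch below shows the core idea that NSGA-II builds on, under stated assumptions: extracting the non-dominated (Pareto) front of candidate designs scored on the two objectives, CO2 emissions and cost. The candidate designs are synthetic, and the full algorithm additionally uses crowding-distance selection, crossover, mutation, and the collapse-prevention performance check mentioned above; this is not the authors' implementation.

```python
import random

def dominates(a, b):
    """Design a dominates b if it is no worse in both objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    front = []
    for i, d in enumerate(designs):
        if not any(dominates(other, d) for j, other in enumerate(designs) if j != i):
            front.append(d)
    return front

random.seed(0)
# each candidate frame design scored as (CO2 emissions, cost); purely synthetic values
designs = [(random.uniform(300, 600), random.uniform(800, 1500)) for _ in range(50)]
for co2, cost in sorted(pareto_front(designs)):
    print(f"CO2 = {co2:6.1f}, cost = {cost:7.1f}")
# NSGA-II repeatedly ranks the population into such fronts and evolves it toward designs
# that trade off emissions against cost while satisfying the seismic performance constraint.
```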
Procedia PDF Downloads 406
390 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India
Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah
Abstract:
A Digital Elevation Model (DEM) is elevation data defined on a virtual grid over the ground. DEMs can be used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis and surveying. For morphometric analysis, the stream flow network plays a very important role. DEMs lack accuracy and cannot match field data as they should for accurate results of morphometric analysis. The present study focuses on comparing the Agree method and the conventional shortest path method for deriving morphometric parameters in the flat region of the Lower Tapi Basin, which is located in western India. For the present study, open-source SRTM data (Shuttle Radar Topography Mission, 1 arc-second resolution) and toposheets issued by the Survey of India (SOI) were used to determine the linear morphometric aspects such as stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow and constant of channel maintenance; the areal aspects such as drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio and shape factor; and the relief aspects such as relief ratio, gradient ratio and basin relief for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets. An Agree DEM was created by using the SRTM data and the stream network from the toposheets. The results obtained were used to demonstrate a comparison between the two methods in the flat areas.
Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method
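The sketch below illustrates, under stated assumptions, the two ingredients this comparison rests on: an AGREE-style reconditioning that lowers the DEM along the digitized stream network so that flow is forced into the mapped channels, followed by the common D8 steepest-descent rule for assigning a flow direction to every cell. It is a toy illustration on a synthetic surface, not the authors' GIS workflow, and the buffered/smoothed form of the real AGREE procedure is reduced to a fixed drop.

```python
import numpy as np

def agree_burn(dem, stream_mask, drop=10.0):
    """Lower DEM cells lying on the vector stream network by a fixed drop (metres)."""
    burned = dem.astype(float).copy()
    burned[stream_mask] -= drop
    return burned

def d8_directions(dem):
    """Return, for each interior cell, the index (0-7) of the steepest downslope neighbour."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    dist = np.array([np.hypot(dy, dx) for dy, dx in offsets])
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            drops = [(dem[i, j] - dem[i + dy, j + dx]) / d
                     for (dy, dx), d in zip(offsets, dist)]
            k = int(np.argmax(drops))
            if drops[k] > 0:                       # flat or pit cells keep -1
                direction[i, j] = k
    return direction

dem = np.add.outer(np.arange(20), np.arange(20)).astype(float)   # toy sloping surface
streams = np.zeros_like(dem, dtype=bool)
streams[:, 10] = True                                            # digitized stream column
flow = d8_directions(agree_burn(dem, streams))
print(flow[5, 8:13])                                             # cells near the burned channel
```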
Procedia PDF Downloads 239
389 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area
Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos
Abstract:
We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of an Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, and this enables the pressure distribution on every surface of each building to be followed in a temporal manner. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand the success of reduced models - including the 2D shallow water model and reduced 1D models - in predicting water heights, flow velocity and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside Oregon experiment.
Keywords: computational fluid dynamics, extreme events, loading, tsunami
Procedia PDF Downloads 115