Search results for: weather variation

547 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
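
As a rough illustration of how such BBPP control limits can be formed, the Python sketch below builds the posterior predictive distribution for the next monitoring sample from a conjugate Beta prior and derives both a central and an HPD-style interval. The prior, historical counts and sample size are illustrative assumptions, not the study's data or code.

```python
# Sketch: beta-binomial posterior predictive (BBPP) limits for a clinical indicator rate.
# Assumptions (not from the paper): Beta(1, 1) prior and illustrative historical counts.
import numpy as np
from scipy import stats

a0, b0 = 1.0, 1.0                          # prior Beta parameters (assumed)
x_hist, n_hist = 48, 1200                  # historical events and cases (illustrative)
a, b = a0 + x_hist, b0 + n_hist - x_hist   # posterior Beta parameters

m = 100                                    # size of the next monitoring sample
pred = stats.betabinom(m, a, b)            # posterior predictive for the next event count

# Central interval limits (99.73% coverage, mimicking 3-sigma limits)
alpha = 0.0027
lcl_central, ucl_central = pred.ppf(alpha / 2), pred.ppf(1 - alpha / 2)

# Highest posterior density (HPD) interval: take support points in order of
# decreasing predictive probability until 1 - alpha is covered.
k = np.arange(m + 1)
pmf = pred.pmf(k)
order = np.argsort(pmf)[::-1]
n_keep = int(np.searchsorted(np.cumsum(pmf[order]), 1 - alpha)) + 1
hpd_pts = np.sort(k[order[:n_keep]])
print(f"central: [{lcl_central:.0f}, {ucl_central:.0f}]  HPD: [{hpd_pts[0]}, {hpd_pts[-1]}]")
```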

Keywords: average run length (ARL), Bernoulli CUSUM (BC) chart, beta-binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 194
546 Hybrid Renewable Energy Systems for Electricity and Hydrogen Production in an Urban Environment

Authors: Same Noel Ngando, Yakub Abdulfatai Olatunji

Abstract:

Renewable energy micro-grids, such as those powered by solar or wind energy, are often intermittent in nature. This means that the amount of energy generated by these systems can vary depending on weather conditions or other factors, which can make it difficult to ensure a steady supply of power. To address this issue, energy storage systems have been developed to increase the reliability of renewable energy micro-grids. Battery systems have been the dominant energy storage technology for renewable energy micro-grids. Batteries can store large amounts of energy in a relatively small and compact package, making them easy to install and maintain in a micro-grid setting. Additionally, batteries can be quickly charged and discharged, allowing them to respond quickly to changes in energy demand. However, the process involved in recycling batteries is quite costly and difficult. An alternative energy storage system that is gaining popularity is hydrogen storage. Hydrogen is a versatile energy carrier that can be produced from renewable energy sources such as solar or wind. It can be stored in large quantities at low cost, making it suitable for long-distance mass storage. Unlike batteries, hydrogen does not degrade over time, so it can be stored for extended periods without the need for frequent maintenance or replacement, allowing it to be used as a backup power source when the micro-grid is not generating enough energy to meet demand. When hydrogen is needed, it can be converted back into electricity through a fuel cell. Energy consumption data were obtained from a particular residential area in Daegu, South Korea, and the data were processed and analyzed. From the analysis, the total energy demand was calculated, and different hybrid energy system configurations were designed using HOMER Pro (Hybrid Optimization for Multiple Energy Resources) and MATLAB software. A techno-economic and environmental comparison and life cycle assessment (LCA) of the different configurations using battery and hydrogen storage systems were carried out. The scenarios included PV-hydrogen-grid, PV-hydrogen-grid-wind, PV-hydrogen-grid-biomass, PV-hydrogen-wind, PV-hydrogen-biomass, biomass-hydrogen, wind-hydrogen, PV-battery-grid-wind, PV-battery-grid-biomass, PV-battery-wind, PV-battery-biomass, and biomass-battery. From the analysis, the least-cost system for the location was the PV-hydrogen-grid system, with a net present cost of about USD 9,529,161. Even though all scenarios were environmentally friendly, when the recycling cost and pollution involved in battery systems are taken into account, all systems with hydrogen as the storage medium produced better results. In conclusion, hydrogen is becoming a very prominent energy storage solution for renewable energy micro-grids. It is easier to store than electric power, so it is suitable for long-distance mass storage. Hydrogen storage systems have several advantages over battery systems, including flexibility, long-term stability, and low environmental impact. The cost of hydrogen storage is still relatively high, but it is expected to decrease as more hydrogen production and storage infrastructure is built. With the growing focus on renewable energy and the need to reduce greenhouse gas emissions, hydrogen is expected to play an increasingly important role in the energy storage landscape.
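
The ranking of configurations in such studies rests on a net-present-cost comparison. The sketch below shows the discounting step in minimal form, with purely illustrative cost figures and discount rate rather than the HOMER Pro inputs used by the authors.

```python
# Sketch: net present cost (NPC) comparison of two storage options for a micro-grid.
# All cost figures and the discount rate below are illustrative assumptions,
# not values from the study or from HOMER Pro.
def net_present_cost(capital, annual_om, annual_replacement, years=25, rate=0.06):
    """Discount yearly operating and replacement costs back to present value."""
    npc = capital
    for t in range(1, years + 1):
        npc += (annual_om + annual_replacement) / (1 + rate) ** t
    return npc

scenarios = {
    "PV-hydrogen-grid": net_present_cost(capital=6.0e6, annual_om=2.2e5, annual_replacement=6.0e4),
    "PV-battery-grid-wind": net_present_cost(capital=5.5e6, annual_om=2.0e5, annual_replacement=2.5e5),
}
for name, npc in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name:>22}: NPC = {npc:,.0f} USD")
print("least-cost configuration:", min(scenarios, key=scenarios.get))
```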

Keywords: renewable energy systems, microgrid, hydrogen production, energy storage systems

Procedia PDF Downloads 80
545 The Acquisition of Temporality in Italian Child Language: Case Study of Child Frog Story Narratives

Authors: Gabriella Notarianni Burk

Abstract:

The present study investigates the Aspect Hypothesis (AH) in Italian child language in the production of frog story narratives from the CHILDES database. The AH is based on the assumption that children initially encode aspectual and lexical distinctions rather than temporal relations. Children from a variety of first languages have been shown to mark past initially with achievements and accomplishments (telic predicates) and in later stages with states and activities (atelic predicates). Aspectual distinctions in Romance languages are obligatorily and overtly encoded in the inflectional morphology. In Italian the perfective viewpoint is realized by the passato prossimo, which expresses a temporal and aspectual meaning of pastness and perfectivity, whereas the imperfective viewpoint in the past tense is realized by the imperfetto. The aim of this study is to assess the role of lexical aspect in the acquisition of tense and aspect morphology and to understand if Italian children’s mapping of aspectual and temporal distinctions follows consistent developmental patterns across languages. The research methodology aligns with the cross-linguistic designs, tasks and coding procedures previously developed in the frog story literature. Results from two-factor ANOVA show that Italian children (age range: 4-6) exhibited a statistically significant distinction between foregrounded perfective and backgrounded imperfective marking. However, a closer examination of the sixty narratives reveals an idiosyncratic production pattern for Italian children, whereby the marking of imperfetto deviates from the tenets of AH and emerges as deictic tense to entail completed and bounded events in foreground clauses. Instances of ‘perfective’ uses of imperfetto were predominantly found in the four-year old narratives (25%). Furthermore, the analysis of the perfective marking suggests that morphological articulation and diatopic variation may influence the child production of formal linguistic devices in discourse.

Keywords: actionality, aspect, grounding, temporal reference

Procedia PDF Downloads 229
544 Topography Effects on Wind Turbines Wake Flow

Authors: H. Daaou Nedjari, O. Guerri, M. Saighi

Abstract:

A numerical study was conducted to optimize the positioning of wind turbines over complex terrain. A two-dimensional disc model was used to calculate the flow velocity deficit in wind farms for both flat and complex configurations. The wind turbine wake was assessed using a hybrid method that combines CFD (Computational Fluid Dynamics) with the actuator disc model. The wind turbine rotor was represented by a thrust force coupled with the Navier-Stokes equations, which were solved by an open-source computational code (Code_Saturne V3.0, developed by EDF). The simulations were conducted under atmospheric boundary layer conditions for a two-dimensional region located in the north of Algeria at latitude 36.74°N, longitude 02.97°E. The topography elevation values were collected along a longitudinal transect of 1 km downwind, and the wind turbine sited over this topography was simulated for different elevation variations. The main aim of this study is to determine the effect of topography on the behavior of wind farm wake flow. To this end, the wake model applied in complex terrain first needs to isolate the effects of topographic singularities on the vertical wind flow without the rotor disc. This step allows the identification of mixing scales and friction-force zones near the ground. Depending on the ground relief, the wind flow was disturbed by turbulence and showed significant speed variation. The singularities of the velocity field were therefore carefully collected, and the thrust coefficient Ct was calculated using the site-specific speed. In addition, to evaluate the effect of the terrain on the wake shape, the flow field was also simulated for different rotor hub heights. The distance between the ground and the turbine hub (Hhub) was tested in flat terrain for Hhub = 1.125D, Hhub = 1.5D and Hhub = 2D (D is the rotor diameter), considering a roughness value of z0 = 0.01 m. This study demonstrates that terrain topography induces a significant effect on wind turbine wakes compared to flat terrain.
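
To make the actuator-disc coupling concrete, the following sketch computes the rotor thrust and the volumetric momentum source that such hybrid models add to the axial momentum equation, together with the induction factor implied by 1D momentum theory. The rotor size, Ct and wind speed are assumed values, not the Code_Saturne configuration used in the study.

```python
# Sketch: actuator-disc thrust source term of the kind coupled to the RANS equations
# in hybrid CFD/actuator-disc wake models. Numbers are illustrative assumptions.
import numpy as np

rho = 1.225          # air density [kg/m^3]
D = 80.0             # rotor diameter [m]
A = np.pi * (D / 2) ** 2
Ct = 0.75            # thrust coefficient (assumed)
U_ref = 8.0          # upstream reference wind speed [m/s]

# Total thrust exerted by the disc on the flow (1D momentum theory)
T = 0.5 * rho * A * Ct * U_ref ** 2

# Distributed as a volumetric momentum sink over disc cells of streamwise thickness dx
dx = 2.0
f_x = -T / (A * dx)  # source term [N/m^3] added to the axial momentum equation

# Axial induction factor and far-wake speed implied by the same momentum balance
a = 0.5 * (1.0 - np.sqrt(1.0 - Ct))
print(f"thrust = {T/1e3:.1f} kN, source = {f_x:.2f} N/m^3, "
      f"induction a = {a:.3f}, far-wake speed = {U_ref*(1 - 2*a):.2f} m/s")
```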

Keywords: CFD, wind turbine wake, k-epsilon model, turbulence, complex topography

Procedia PDF Downloads 553
543 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics

Authors: Varun K, Pramod B. Balareddy

Abstract:

Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to reflector surface accuracy; hence, accurate estimation of wind forces becomes important, as it is the primary input for the design and analysis of the reflector system. In the present work, numerical simulation of the wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces, respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift and moment for the parabolic reflector. Pressure coefficients (Cp) over the front and rear faces of the reflector are extracted to study the net pressure variations, and these resultant pressure variations are compared with the published wind tunnel data for different angles of attack. It was observed from the CFD simulations that both the convex and concave faces of the reflector experience a band of pressure variations for positive and negative angles of attack, respectively, whereas in the published wind tunnel data the pressure variations over the convex surface are assumed to be uniform, and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, it was observed that the maximum pressure coefficients for α ranging from +30° to -90° and for α = +90° were lower, while for α ranging from +45° to +75° the maximum pressure coefficients were higher than the wind tunnel data. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. Variations in Cd, Cl and Cm over α = +90° to α = -90° were in close agreement with the experimental data.
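
The pressure-coefficient extraction described above amounts to normalizing surface pressures by the freestream dynamic pressure and integrating the front/rear difference. The sketch below shows that step with placeholder panel data rather than the study's CFD output.

```python
# Sketch: pressure coefficient and net panel loading on a reflector surface,
# assuming panel pressures and areas extracted from a CFD solution.
# The freestream conditions and panel values are placeholders, not study data.
import numpy as np

rho, U_inf, p_inf = 1.225, 30.0, 101325.0          # freestream conditions (assumed)
q_inf = 0.5 * rho * U_inf ** 2                     # dynamic pressure

p_front = np.array([101900.0, 101850.0, 101700.0]) # front-face panel pressures [Pa]
p_rear  = np.array([101200.0, 101250.0, 101300.0]) # rear-face panel pressures [Pa]
area    = np.array([2.0, 2.0, 2.0])                # panel areas [m^2]

cp_front = (p_front - p_inf) / q_inf
cp_rear  = (p_rear  - p_inf) / q_inf
cp_net   = cp_front - cp_rear                      # net loading per panel

# Normal-force coefficient from the net loading (panels assumed normal to the axis)
Cn = np.sum(cp_net * area) / area.sum()
print("net Cp per panel:", np.round(cp_net, 3), " Cn =", round(Cn, 3))
```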

Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient

Procedia PDF Downloads 240
542 Mapping the Digital Landscape: An Analysis of Party Differences between Conventional and Digital Policy Positions

Authors: Daniel Schwarz, Jan Fivaz, Alessia Neuroni

Abstract:

Although digitization is a buzzword in almost every election campaign, the political parties leave voters largely in the dark about their specific positions on digital issues. In the run-up to the 2019 elections in Switzerland, the ‘Digitization Monitor’ project (DMP) was launched in order to change this situation. Within the framework of the DMP, all 4,736 candidates were surveyed about their digital policy positions and values. The DMP is designed as a digital policy supplement to the existing ‘smartvote’ voting advice application. This enabled a direct comparison of the digital policy attitudes according to the DMP with the topics of the ‘smartvote’ questionnaire which are comprehensive in content but mainly related to conventional policy areas. This paper’s main research goal is to analyze and visualize possible differences between conventional and digital policy areas in terms of response patterns between and within political parties. The analysis is based on dimensionality reduction methods (multidimensional scaling and principal component analysis) for the visualization of inter-party differences, and on standard deviation as a measure of variation for the evaluation of intra-party unity. The results reveal that digital issues show a lower degree of inter-party polarization compared to conventional policy areas. Thus, the parties have more common ground in issues on digitization than in conventional policy areas. In contrast, the study reveals a mixed picture regarding intra-party unity. Homogeneous parties show a lower degree of unity in digitization issues whereas parties with heterogeneous positions in conventional areas have more united positions in digital areas. All things considered, the findings are encouraging as less polarized conditions apply to the debate on digital development compared to conventional politics. For the future, it would be desirable if in further countries similar projects to the DMP could emerge to broaden the basis for conclusions.
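
A minimal sketch of the analysis pipeline described above (dimensionality reduction for the inter-party map, standard deviation for intra-party unity) is given below. The answer matrix is randomly generated for illustration and is not the DMP/smartvote data.

```python
# Sketch: inter-party positioning via PCA and intra-party unity via the standard
# deviation of answers, on a made-up candidate-by-question matrix (values coded 1-4).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
parties = np.array(["A"] * 30 + ["B"] * 30)
answers = np.vstack([
    rng.normal(3.2, 0.4, size=(30, 20)),   # party A candidates (more united)
    rng.normal(1.8, 0.9, size=(30, 20)),   # party B candidates (less united)
])

coords = PCA(n_components=2).fit_transform(answers)   # two-dimensional inter-party map
for p in ("A", "B"):
    mask = parties == p
    centre = coords[mask].mean(axis=0)                # party position on the map
    unity = answers[mask].std(axis=0).mean()          # mean per-question SD (lower = more united)
    print(f"party {p}: map position {np.round(centre, 2)}, mean SD {unity:.2f}")
```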

Keywords: comparison of political issue dimensions, digital awareness of candidates, digital policy space, party positions on digital issues

Procedia PDF Downloads 167
541 Degradation of Emerging Pharmaceuticals by Gamma Irradiation Process

Authors: W. Jahouach-Rabai, J. Aribi, Z. Azzouz-Berriche, R. Lahsni, F. Hosni

Abstract:

Gamma irradiation applied to the removal of pharmaceutical contaminants from wastewater is an effective advanced oxidation process (AOP), considered an alternative to conventional water treatment technologies. For this purpose, the degradation efficiency of several detected contaminants under gamma irradiation was evaluated. Radiolysis of organic pollutants in aqueous solutions produces powerful reactive species, essentially the hydroxyl radical (·OH), able to destroy recalcitrant pollutants in water. The pharmaceuticals considered in this study are aqueous solutions of paracetamol, ibuprofen and diclofenac at different concentrations (0.1-1 mmol/L), which were treated with irradiation doses from 3 to 15 kGy. The catalytic oxidation of these compounds under gamma irradiation was investigated using hydrogen peroxide (H₂O₂) as a convenient oxidant. Optimization of the main parameters influencing the irradiation process, namely the irradiation dose, initial concentration and oxidant (H₂O₂) volume, was carried out with the aim of achieving high degradation efficiency of the considered pharmaceuticals. Significant changes attributable to these parameters appeared in the degradation efficiency, chemical oxygen demand (COD) removal and concentration of radio-induced radicals, confirming their synergistic effect toward total mineralization. Pseudo-first-order reaction kinetics could be used to describe the degradation process of these compounds. A detailed analytical study was carried out to quantify the detected radio-induced radicals (electron paramagnetic resonance (EPR) spectroscopy and high-performance liquid chromatography (HPLC)). All results showed that this process is effective for the degradation of many pharmaceutical products in aqueous solutions due to the strong oxidative properties of the generated radicals, mainly the hydroxyl radical. Furthermore, the addition of an optimal amount of H₂O₂ was efficient in improving the oxidative degradation and contributed to the high performance of this process at very low doses (0.5 and 1 kGy).
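
Pseudo-first-order behaviour of this kind is typically checked by regressing ln(C0/C) against the absorbed dose. The sketch below performs that fit on illustrative concentration values, not the study's measurements.

```python
# Sketch: pseudo-first-order fit of degradation vs. absorbed dose, ln(C0/C) = k * D,
# using illustrative concentration data (not the study's measurements).
import numpy as np

dose = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])        # absorbed dose [kGy]
conc = np.array([1.00, 0.55, 0.31, 0.17, 0.09, 0.05])    # concentration [mmol/L], illustrative

y = np.log(conc[0] / conc)
k, intercept = np.polyfit(dose, y, 1)                     # dose constant [1/kGy]
efficiency = 100 * (1 - conc / conc[0])                   # degradation efficiency [%]
print(f"k = {k:.3f} kGy^-1, removal at 15 kGy = {efficiency[-1]:.1f} %")
```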

Keywords: AOP, COD, hydroxyl radical, EPR, gamma irradiation, HPLC, pharmaceuticals

Procedia PDF Downloads 159
540 Voice Quality in Italian-Speaking Children with Autism

Authors: Patrizia Bonaventura, Magda Di Renzo

Abstract:

This project aims to measure and assess the voice quality in children with autism. Few previous studies exist which have analyzed the voice quality of individuals with autism: abnormal voice characteristics have been found, like a high pitch, great pitch range, and sing-song quality. Existing studies did not focus specifically on Italian-speaking children’s voices and provided analysis of a few acoustic parameters. The present study aimed to gather more data and to perform acoustic analysis of the voice of children with autism in order to identify patterns of abnormal voice features that might shed some light on the causes of the dysphonia and possibly be used to create a pediatric assessment tool for early identification of autism. The participants were five native Italian-speaking boys with autism between the age of 4 years and 10 years (mean 6.8 ± SD 1.4). The children had a diagnosis of autism, were verbal, and had no other comorbid conditions (like Down syndrome or ADHD). The voices of the autistic children were recorded in the production of sustained vowels [ah] and [ih] and of sentences from the Italian version of the CAPE-V voice assessment test. The following voice parameters, representative of normal quality, were analyzed by acoustic spectrography through Praat: Speaking Fundamental Frequency, F0 range, average intensity, and dynamic range. The results showed that the pitch parameters (Speaking Fundamental Frequency and F0 range), as well as the intensity parameters (average intensity and dynamic range), were significantly different from the relative normal reference thresholds. Also, variability among children was found, so confirming a tendency revealed in previous studies of individual variation in these aspects of voice quality. The results indicate a general pattern of abnormal voice quality characterized by a high pitch and large variations in pitch and intensity. These acoustic voice characteristics found in Italian-speaking autistic children match those found in children speaking other languages, indicating that autism symptoms affecting voice quality might be independent of the native language of the children.

Keywords: autism, voice disorders, speech science, acoustic analysis of voice

Procedia PDF Downloads 56
539 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes

Authors: Lucas Paganin, Viliam Makis

Abstract:

With the advent of globalization, the market competition has become a major issue for most companies. One of the main strategies to overcome this situation is the quality improvement of the product at a lower cost to meet customers’ expectations. In order to achieve the desired quality of products, it is important to control the process to meet the specifications, and to implement the optimal maintenance policy for the machines and the production lines. Thus, the overall objective is to reduce process variation and the production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. Therefore, the main focus of this paper is to develop the jointly optimal maintenance and statistical process control policy minimizing the total long run expected average cost per unit time. In our model, the production process can go out of control due to either the deterioration of equipment or other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. Hence, the process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process will be stopped, and an investigation will be conducted not only to determine whether it is a true or false alarm, but also to identify the causes of the true alarm, whether it was caused by the change in the machine setting, by other assignable causes, or by both. If the system is out of control, the proper actions will be taken to bring it back to the in-control state. At these epochs, a maintenance action can be taken, which can be no action, or preventive replacement of the unit. When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine and the process is brought to the in-control state. SMDP framework is used to formulate and solve the joint control problem. Numerical example is developed to demonstrate the effectiveness of the control policy.
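
As a minimal illustration of the Xbar-chart signalling step that triggers inspections in such a policy, the sketch below samples the process mean at equidistant epochs and flags out-of-control points. The in-control parameters and the simulated shift are assumptions, and the SMDP cost optimization itself is not reproduced here.

```python
# Sketch: Xbar control chart with equidistant sampling epochs.
# In-control mean/sigma, sample size and limit width are assumed values.
import numpy as np

mu0, sigma, n, L = 10.0, 0.5, 5, 3.0          # in-control parameters (assumed)
ucl = mu0 + L * sigma / np.sqrt(n)
lcl = mu0 - L * sigma / np.sqrt(n)

rng = np.random.default_rng(1)
for epoch in range(1, 11):                     # equidistant sampling epochs
    shift = 0.8 if epoch > 6 else 0.0          # assignable cause introduced after epoch 6
    xbar = rng.normal(mu0 + shift, sigma / np.sqrt(n))   # sample mean drawn directly
    if lcl <= xbar <= ucl:
        print(f"epoch {epoch}: xbar = {xbar:.2f} -> in control")
    else:
        print(f"epoch {epoch}: xbar = {xbar:.2f} -> signal, stop and inspect")
```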

Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart

Procedia PDF Downloads 79
538 Evaluation of Mechanical Properties and Surface Roughness of Nanofilled and Microhybrid Composites

Authors: Solmaz Eskandarion, Haniyeh Eftekhar, Amin Fallahi

Abstract:

Introduction: Nowadays, cosmetic dentistry has gained greater attention because of the changing demands of dental patients. Composite resin restorations play an important role in the field of esthetic restorations. Due to the variation among resin composites, it is important to be aware of their mechanical properties and surface roughness. The aim of this study was therefore to compare the mechanical properties (surface hardness, compressive strength, diametral tensile strength) and surface roughness of four resin composites after a thermal aging process. Materials and Method: 10 samples of each composite resin (Gradia-direct (GC), Filtek Z250 (3M), G-ænial (GC), Filtek Z350 (3M, Filtek Supreme)) were prepared for the evaluation of each property (120 samples in total). Thermocycling (10,000 cycles between 5 and 55 °C) was applied. The samples were then tested for compressive strength and diametral tensile strength using a universal testing machine (UTM), surface hardness was evaluated with a microhardness testing machine, and surface roughness was evaluated with a scanning electron microscope after surface polishing. Result: Regarding compressive strength (CS), Filtek Z250 showed the highest value, but there were no significant differences between the four groups. Filtek Z250 also showed the highest diametral tensile strength (DTS), followed, from highest to lowest, by Filtek Z350, G-ænial and Gradia-direct, and for DTS all groups differed significantly (P<0.05). The Vickers hardness number (VHN) of Filtek Z250 was the greatest, followed by Filtek Z350, G-ænial and Gradia-direct. The surface roughness of the nanofilled composites was lower than that of the microhybrid composites, although the surface roughness of GC G-ænial was slightly greater than that of Filtek Z250. Conclusion: This study indicates that there is no evident significant difference between the groups in their mechanical properties, although Filtek Z250 showed slightly better mechanical properties. Regarding surface roughness, the nanofilled composites performed better than the microhybrid ones.
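
The strength values reported in such tests come from standard specimen formulas. The sketch below shows those calculations (CS = P/A, DTS = 2P/(πDt)) with illustrative loads and dimensions, not the study's measurements.

```python
# Sketch: compressive and diametral tensile strength from failure loads,
# using the standard specimen formulas with illustrative dimensions and loads.
import math

def compressive_strength(load_n, diameter_mm):
    """CS = P / A for a cylindrical specimen loaded axially (result in MPa)."""
    area = math.pi * (diameter_mm / 2) ** 2
    return load_n / area

def diametral_tensile_strength(load_n, diameter_mm, thickness_mm):
    """DTS = 2P / (pi * D * t) for a disc loaded across its diameter (result in MPa)."""
    return 2 * load_n / (math.pi * diameter_mm * thickness_mm)

# Illustrative specimens (4 mm diameter cylinder; 6 mm x 3 mm disc), not study data
print(f"CS  = {compressive_strength(3200, 4.0):.1f} MPa")
print(f"DTS = {diametral_tensile_strength(1400, 6.0, 3.0):.1f} MPa")
```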

Keywords: mechanical properties, surface roughness, resin composite, compressive strength, thermal aging

Procedia PDF Downloads 343
537 Experimental Investigation on the Role of Thermoacoustics on Soot Formation

Authors: Sambit Supriya Dash, Rahul Ravi R, Vikram Ramanan, Vinayak Malhotra

Abstract:

Combustion is in itself a complex phenomenon that involves the interaction and interplay of multiple phenomena, whose combined effect gives rise to the common flame that we see and use in daily applications, from cooking to propelling vehicles into space. What often goes unnoticed about these flames is the effect of the various phenomena in the surrounding environment on their behavior and properties. These phenomena cause a variety of energy interactions that lead to various types of energy transformations, which in turn affect flame behavior. This paper focuses on experimentally investigating the effect of one such phenomenon, acoustics (sound energy), on diffusion flames. The subject is extensively studied globally as thermoacoustics, whereas the current work focuses on its effect on soot formation in diffusion flames. The effect is studied using butane as the fuel, supplied through a nozzle that houses 3 arrays of 4 holes each, placed equidistant from each other, with the resulting flame impinged by sound from two independent, similar sound sources placed equidistant from the centre of the flame. The entire process is systematically videographed using a 60 fps CCD camera and analysed for variation in flame height and flickering frequency; the fuel mass flow rate is maintained constant while the configuration of entrainment holes and the frequency of sound are varied, under constant ambient atmospheric conditions. The current work establishes significant outcomes on the effect of acoustics on soot formation; it is noteworthy that soot formation is a main cause of pollution and a major cause of inefficiency in current propulsion systems. This work is one of its kind, and its outcomes are widely applicable to commercial and domestic appliances that utilize combustion for energy generation or propulsion, helping us understand them better so that we can increase their efficiency and decrease pollution.
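
Flicker frequency is commonly recovered from the frame-by-frame flame-height signal by a Fourier transform at the camera frame rate. The sketch below shows that step on a synthetic signal standing in for the image-processing output; the signal and its dominant frequency are illustrative, not the experimental data.

```python
# Sketch: dominant flame-flicker frequency from a frame-by-frame flame-height
# signal extracted from 60 fps video. The synthetic signal is a stand-in for
# the image-processing output, not the measured data.
import numpy as np

fps = 60.0
t = np.arange(0, 10.0, 1.0 / fps)                  # 10 s of video
height = (40 + 3 * np.sin(2 * np.pi * 11.5 * t)    # illustrative 11.5 Hz flicker [mm]
          + 0.5 * np.random.default_rng(2).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(height - height.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
print(f"dominant flicker frequency = {freqs[spectrum.argmax()]:.1f} Hz")
```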

Keywords: thermoacoustics, entrainment, propulsion system, efficiency, pollution

Procedia PDF Downloads 150
536 The Role of Knowledge Management in Innovation: Spanish Evidence

Authors: María Jesús Luengo-Valderrey, Mónica Moso-Díez

Abstract:

In the knowledge-based economy, innovation is considered essential in order to achieve survival and growth in organizations. On the other hand, knowledge management is currently understood as one of the keys to the innovation process. Both factors are generally regarded as generators of competitive advantage in organizations. Specifically, R&D&I activities and those that generate internal knowledge have a positive influence on innovation results; whether this effect is similar across firms is what we aimed to quantify in this paper. We focus on the impact that the proportion of knowledge workers, R&D&I investment, and the amounts destined for ICTs and training for innovation have on the variation of tangible and intangible returns in the high- and medium-technology sector in Spain. To do this, we performed an empirical analysis of the results of questionnaires about innovation in enterprises in Spain, collected by the National Statistics Institute. First, using cluster methodology, the behavior of these enterprises regarding knowledge management was identified. Then, using SEM methodology, we studied, for each cluster, the cause-effect relationships among constructs defined through variables, establishing their type and quantification. The cluster analysis results in four groups, of which clusters 1 and 3 present the best innovation performance, with differentiating nuances between them, while clusters 2 and 4 obtained divergent results from a similar innovative effort. However, the results of the SEM analysis for each cluster show that, in all cases, knowledge workers are what affect innovation performance most, regardless of the level of investment, and that there is a strong correlation between knowledge workers and investment in knowledge generation. The main finding is that Spanish high- and medium-technology companies improve their innovation performance by investing in internal knowledge generation measures, especially R&D activities, and underinvest in external ones. This, and the strong correlation between knowledge workers and the set of activities that promote knowledge generation, should be taken into account by company managers when making decisions about their investments for innovation, since they are key to improving their opportunities in the global market.

Keywords: high and medium technology sector, innovation, knowledge management, Spanish companies

Procedia PDF Downloads 224
535 FEM Simulation of Tool Wear and Edge Radius Effects on Residual Stress in High Speed Machining of Inconel718

Authors: Yang Liu, Mathias Agmell, Aylin Ahadi, Jan-Eric Stahl, Jinming Zhou

Abstract:

Tool wear and tool geometry have significant effects on the residual stresses in components produced by high-speed machining. In this paper, a Coupled Eulerian-Lagrangian (CEL) model is adopted to investigate the residual stress in high-speed machining of Inconel 718 with a CBN170 cutting tool. The results show that the mesh with the smallest element size of 5 µm yields cutting forces and chip morphology in close agreement with the experimental data. The thermal and mechanical loading are analysed to study the effect of segmented chip morphology on the machined surface topography and residual stress distribution. The effects of cutting edge radius and flank wear on residual stress formation and distribution in the workpiece were also investigated. It is found that the temperature within a 100 µm depth of the machined surface increases drastically, due to greater friction heat generation as the tool-workpiece contact area increases when a larger edge radius and flank wear are used. As the depth increases further, the temperature drops rapidly in all cases due to the low conductivity of Inconel 718. Consequently, higher and deeper tensile residual stress is generated in the superficial layer. Furthermore, an increased depth of plastic deformation and compressive residual stress is noticed in the subsurface, which is attributed to the reduction of the yield strength under the thermal effect. In addition, the ploughing effect produced by a larger tool edge radius contributes more than flank wear. The variations in the magnitude of the compressive residual stress caused by different edge radii and by flank wear show opposite trends, depending on the magnitude of the ploughing and friction pressure acting on the machined surface.

Keywords: Coupled Eulerian Lagrangian, segmented chip, residual stress, tool wear, edge radius, Inconel718

Procedia PDF Downloads 136
534 Hidden Hot Spots: Identifying and Understanding the Spatial Distribution of Crime

Authors: Lauren C. Porter, Andrew Curtis, Eric Jefferis, Susanne Mitchell

Abstract:

A wealth of research has been generated examining the variation in crime across neighborhoods. However, there is also a striking degree of crime concentration within neighborhoods. A number of studies show that a small percentage of street segments, intersections, or addresses account for a large portion of crime. Not surprisingly, a focus on these crime hot spots can be an effective strategy for reducing community level crime and related ills, such as health problems. However, research is also limited in an important respect. Studies tend to use official data to identify hot spots, such as 911 calls or calls for service. While the use of call data may be more representative of the actual level and distribution of crime than some other official measures (e.g. arrest data), call data still suffer from the 'dark figure of crime.' That is, there is most certainly a degree of error between crimes that occur versus crimes that are reported to the police. In this study, we present an alternative method of identifying crime hot spots, that does not rely on official data. In doing so, we highlight the potential utility of neighborhood-insiders to identify and understand crime dynamics within geographic spaces. Specifically, we use spatial video and geo-narratives to record the crime insights of 36 police, ex-offenders, and residents of a high crime neighborhood in northeast Ohio. Spatial mentions of crime are mapped to identify participant-identified hot spots, and these are juxtaposed with calls for service (CFS) data. While there are bound to be differences between these two sources of data, we find that one location, in particular, a corner store, emerges as a hot spot for all three groups of participants. Yet it does not emerge when we examine CFS data. A closer examination of the space around this corner store and a qualitative analysis of narrative data reveal important clues as to why this store may indeed be a hot spot, but not generate disproportionate calls to the police. In short, our results suggest that researchers who rely solely on official data to study crime hot spots may risk missing some of the most dangerous places.

Keywords: crime, narrative, video, neighborhood

Procedia PDF Downloads 225
533 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNN) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is produced for the following day, hour by hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5 and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers in which convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several models that gave the best results were selected and compared with models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. What is more, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73 and 0.73 for the 1st, 6th, 12th, 18th and 24th hour of prediction, respectively.
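
A minimal PyTorch sketch of a 1-D CNN of the kind described (sensor and forecast channels in, 24 hourly PM10 values out) is given below. The channel count, window length and layer sizes are assumptions for illustration and do not correspond to the architecture selected in the study.

```python
# Sketch: a 1-D convolutional network mapping the last 48 h of sensor inputs
# (e.g. PM2.5, PM10, temperature, wind, plus forecast channels) to the next
# 24 hourly PM10 values. All layer sizes and the input layout are assumptions.
import torch
import torch.nn as nn

class PM10Net(nn.Module):
    def __init__(self, n_channels=6, n_hours=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (n_hours // 4), 128), nn.ReLU(),
            nn.Linear(128, 24),                    # hour-by-hour PM10 forecast
        )

    def forward(self, x):                          # x: (batch, channels, hours)
        return self.head(self.features(x))

model = PM10Net()
dummy = torch.randn(8, 6, 48)                      # batch of 8 tensorized inputs
print(model(dummy).shape)                          # torch.Size([8, 24])
```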

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 130
532 Study of Three-Dimensional Computed Tomography of Frontoethmoidal Cells Using International Frontal Sinus Anatomy Classification

Authors: Prabesh Karki, Shyam Thapa Chettri, Bajarang Prasad Sah, Manoj Bhattarai, Sudeep Mishra

Abstract:

Introduction: The frontal sinus is frequently described as the most difficult sinus to access surgically due to its proximity to the cribriform plate, the orbit and the anterior ethmoid artery. Frontal sinus surgery requires a detailed understanding of the cellular structure and FSDP unique to each patient, making high-resolution CT scans an indispensable tool to assess the difficulty of planned sinus surgery. The International Frontal Sinus Anatomy Classification (IFAC) was developed to provide a more precise nomenclature for cells in the frontal recess, classifying cells based on their anatomic origin. Objectives: To assess the proportion of frontal cell variants defined by the IFAC and their variation with respect to age and gender. Methods: 54 cases were enrolled after a detailed clinical history, thorough general and physical examinations, and a CT report provided on film. The presence of frontal cells according to the IFAC was assessed and tabulated. The prevalence of each cell type was calculated, and data were entered in MS Excel and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies were defined for categorical and numerical variables, and frequency, percentage, mean and standard deviation were calculated. Result: Among the 54 patients, 30 (55.6%) were male and 24 (44.4%) were female. The patients enrolled ranged in age from 18 to 78 years, with the majority, 33.3% (n=18), in the age group over 50 years. According to the IFAC, agger nasi cells (92.6%) were the most common, whereas supraorbital ethmoidal cells were the least common, at 16 (29.6%). The prevalence of the other frontoethmoidal cells was SAC 57.4%, SAFC 38.9%, SBC 74.1%, SBFC 33.3% and FSC 38.9% of the 54 cases. Conclusion: The IFAC is an international consensus document that describes an anatomically precise nomenclature for classifying frontoethmoidal cell anatomy. This study has defined the prevalence, symmetry and reliability of frontoethmoidal cells as established by the IFAC system, as in other parts of the world.

Keywords: frontal sinus, frontoethmoidal cells, international frontal sinus anatomy classification

Procedia PDF Downloads 81
531 Children with Migration Backgrounds in Russian Elementary Schools: Teachers Attitudes and Practices

Authors: Chulpan Gromova, Rezeda Khairutdinova, Dina Birman

Abstract:

One of the most significant issues that schools all over the world face today is the ways teachers respond to increasing diversity. The study was informed by the tripartite model of multicultural competence, with awareness of personal biases a necessary component, together with knowledge of different cultures, and skills to work with students from diverse backgrounds. The paper presents the results of qualitative descriptive studies that help to understand how school teachers in Russia treat migrant children, how they solve the problems of adaptation of migrant children. The purpose of this study was to determine: a) educational practices used by primary school teachers when working with migrant children; b) relationship between practices and attitudes of teachers. Empirical data were collected through interviews. The participants were informed that a conversation was being recorded. They were also warned that the study was voluntary, absolutely anonymous, no personal data was disclosed. Consent was received from 20 teachers. The findings were analyzed using directive content analysis (Graneheim and Lundman, 2004). The analysis was deductive according to the categories of practices and attitudes identified in the literature review and enriched inductively to identify variation within these categories. Studying practices is an essential part of preparing future teachers for working in a multicultural classroom. For language and academic support, teachers mostly use individual work. In order to create a friendly classroom climate and environment teachers have productive conversations with students, organize multicultural events for the whole school or just for an individual class. The majority of teachers have positive attitudes toward migrant children. In most cases, positive attitudes lead to high expectations for their academic achievements. Conceptual orientation of teacher attitudes toward cultural diversity is mostly pluralistic. Positive attitudes, high academic expectations and conceptual orientation toward pluralism are favorably reflected in teachers’ practice.

Keywords: intercultural education, migrant children schooling, teachers attitudes, teaching practices

Procedia PDF Downloads 102
530 Lateralisation of Visual Function in Yellow-Eyed Mullet (Aldrichetta forsteri) and Its Role in Schooling Behaviour

Authors: Karen L. Middlemiss, Denham G. Cook, Peter Jaksons, Alistair Jerrett, William Davison

Abstract:

Lateralisation of cognitive function is a common phenomenon found throughout the animal kingdom. Strong biases in functional behaviours have evolved from asymmetrical brain hemispheres which differ in structure and/or cognitive function. In fish, lateralisation is involved in visually mediated behaviours such as schooling, predator avoidance, and foraging, and is considered to have a direct impact on species fitness. Currently, there is very little literature on the role of lateralisation in fish schools. The yellow-eyed mullet (Aldrichetta forsteri), is an estuarine and coastal species found commonly throughout temperate regions of Australia and New Zealand. This study sought to quantify visually mediated behaviours in yellow-eyed mullet to identify the significance of lateralisation, and the factors which influence functional behaviours in schooling fish. Our approach to study design was to conduct a series of tank based experiments investigating; a) individual and population level lateralisation, b) schooling behaviour, and d) optic lobe anatomy. Yellow-eyed mullet showed individual variation in direction and strength of lateralisation in juveniles, and trait specific spatial positioning within the school was evidenced in strongly lateralised fish. In combination with observed differences in schooling behaviour, the possibility of ontogenetic plasticity in both behavioural lateralisation and optic lobe morphology in adults is suggested. These findings highlight the need for research into the genetic and environmental factors (epigenetics) which drive functional behaviours such as schooling, feeding and aggression. Improved knowledge on collective behaviour could have significant benefits to captive rearing programmes through improved culture techniques and will add to the limited body of knowledge on the complex ecophysiological interactions present in our inshore fisheries.

Keywords: cerebral asymmetry, fisheries, schooling, visual bias

Procedia PDF Downloads 203
529 Germination and Bulb Formation of Allium tuncelianum L. under in vitro Condition

Authors: Suleyman Kizil, Tahsin Sogut, Khalid M. Khawar

Abstract:

The genus Allium includes 600 to 750 species, among which Allium tuncelianum (Kollman) N. Ozhatay, B. Mathew & Siraneci [Syn: A. macrochaetum Boiss. and Hausskn. subsp. tuncelianum Kollman], or Tunceli garlic, is endemic to the eastern Turkish province of Tunceli and the Munzur mountains. The plants are edible and bear attractive white-to-purple flowers and fertile black seeds with deep seed dormancy. This study aimed to break the seed dormancy of Tunceli garlic, determine the conditions for induction of bulblets on these seeds, and increase bulblet diameter by culturing them on MS medium supplemented with different strengths of KNO3. Tunceli garlic seeds were collected from field-grown plants. They were germinated on MS medium with or without 20 g/l sucrose, followed by culture on medium containing 1 × 1900 mg/l, 2 × 1900 mg/l, 4 × 1900 mg/l or 6 × 1900 mg/l KNO3 supplemented with 20 g/l sucrose to increase bulb diameter. Improved seed germination was noted on MS medium both with and without sucrose, although with variation compared to previous reports. The percentage of bulb development on sprouted seeds was not parallel to the percentage of seed germination: 34% and 28.5% bulb induction was noted on germinated seeds after 150 and 158 days on MS medium containing 20 g/l sucrose and no sucrose, respectively, a delay of 8 days for the latter compared to the former. The results emphasize the role of cold stratification on agar-solidified MS medium supplemented with sucrose in improving seed germination. The best increase in bulb diameter was noted on MS medium containing 1 × 1900 mg/l KNO3 after 178 days, with a bulblet diameter of 0.54 cm and a bulblet weight of 0.048 g. Consequently, the bulbs induced on sucrose-containing MS medium could be transferred to pots earlier. Increased strengths of KNO3 (>1 × 1900 mg/l) had a negative effect on the growth and development of Tunceli garlic bulbs. The strategy of seed germination and bulblet induction reported in this study could be used for the conservation of this endemic plant species.

Keywords: Tunceli garlic, seed, dormancy, bulblets, bulb growth

Procedia PDF Downloads 260
528 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture the shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic partial differential equations (PDEs) dominate, constantly generating small-scale features. This makes direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flow with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDE. This technique is named the “observable method”, and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined for its performance in regularizing shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen as the particular cases studied. The observable Euler equations are numerically solved with pseudo-spectral discretization in space and a third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and the shock reflection are particularly examined.
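
The core idea, filtering the convective velocity at the observable scale before forming the nonlinear flux, can be shown in one dimension. The sketch below applies a Helmholtz-type low-pass filter to the advecting velocity of the inviscid Burgers equation and advances it with a third-order TVD Runge-Kutta step. The filter form, the value of the observable scale and the use of Burgers as a stand-in are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch: "observable"-style regularization in 1D. The advecting velocity is
# low-pass filtered at an observable scale alpha before forming the convective
# term of the inviscid Burgers equation (a Leray-type filtering); time stepping
# uses a third-order TVD Runge-Kutta scheme. Filter choice and alpha are assumed.
import numpy as np

N, L, alpha, dt = 256, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi           # angular wavenumbers
G = 1.0 / (1.0 + (alpha * k) ** 2)                   # Helmholtz low-pass filter

def rhs(u):
    u_bar = np.real(np.fft.ifft(G * np.fft.fft(u)))  # filtered (observable) velocity
    du_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return -u_bar * du_dx                            # filtered convection of u

u = np.sin(x)                                        # steepening initial condition
for _ in range(1500):                                # advance to t = 1.5 (> shock time)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
print("max gradient at t = 1.5:", np.abs(np.gradient(u, x)).max())
```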

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions.

Procedia PDF Downloads 339
527 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence in the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have an accumulative cost of $13.8 trillion by 2024. This crisis raises attention to organizational resilience to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, thus reducing the significance of the concept for practice and research. This study is intended to solve that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of operational resilience that have been discussed in the supply chain risk management literature (SCRM). We performed a Scholarly Network Analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals based on three parameters: AJG ranking and ABS ranking, UT Dallas and FT50 list, and editorial board review. We utilized a hybrid scholarly network analysis by combining citation-based and text-based approaches to understand the conceptualization, measurement, and antecedents of operational resilience in the SCRM literature. Specifically, we employed a Bibliographic Coupling Analysis in the research cluster formation stage and a Co-words Analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. We portray the research process in the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need to be studied further. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements that are event-dependent and focus on the resistance or recovery stage - without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis to understand conceptualization, antecedents, and measurement of resilience. It also enables us to perform a comprehensive review of resilience research in SCRM literature by including research articles published during the pandemic and connects this development with a plethora of articles published in the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 61
526 Using Geographic Information System and Analytic Hierarchy Process for Detecting Forest Degradation in Benslimane Forest, Morocco

Authors: Loubna Khalile, Hicham Lahlaoi, Hassan Rhinane, A. Kaoukaya, S. Fal

Abstract:

Green spaces are an essential element; they contribute to improving the quality of life in the towns around them. They are places for relaxation, walking and rest, and playgrounds for sport and youth. According to the United Nations, forests cover 31% of the land. In Morocco, forests covered 12.65% of the total land area in 2013, still a small proportion compared to the natural need for forests as a green lung of our planet. The Benslimane Forest is a large green area belonging to the Chaouia-Ouardigha Region and the Greater Casablanca Region; it is located geographically between Casablanca, considered the economic and business capital of Morocco, and Rabat, the national political capital, with an area of 12,261.80 hectares. The essential problem usually encountered in suburban forests is visitation and tourism pressure, that is, anthropogenic actions, as well as other ecological and environmental factors. In recent decades, Morocco has experienced drought years that have affected the forest, and with increasing human pressure the forest suffers heavy losses every day, as well as over-exploitation. Moroccan forest ecosystems are fragile, with intense ecological variation and with domanial and imposed usage rights granted to the population; forests are experiencing significant deterioration due to neglect and the immoderate use of forest resources, which can lead to the destruction of animal habitats and vegetation and affect the water cycle and climate. The purpose of this study is to model the degree of degradation of the forest and identify its causes for prevention purposes, using remote sensing and geographic information systems and introducing climate and ancillary data. The analytic hierarchy process was used to determine the degree of influence and the weight of each parameter; in this case, it is found that anthropogenic activities have a fairly significant impact and thus influence the climate.
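
The AHP weighting step reduces to extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency. The sketch below does this for a hypothetical set of degradation factors with illustrative judgements, not the comparisons elicited in the study.

```python
# Sketch: AHP priority weights and consistency check from a pairwise comparison
# matrix of degradation factors. Factor names and judgements (Saaty's 1-9 scale)
# are illustrative assumptions, not those used in the study.
import numpy as np

factors = ["anthropogenic pressure", "slope", "distance to roads", "vegetation change"]
A = np.array([
    [1,   3,   5,   4],
    [1/3, 1,   2,   2],
    [1/5, 1/2, 1,   1],
    [1/4, 1/2, 1,   1],
], dtype=float)

eigval, eigvec = np.linalg.eig(A)
i = eigval.real.argmax()
w = np.abs(eigvec[:, i].real)
w /= w.sum()                                   # priority vector (factor weights)

n = A.shape[0]
ci = (eigval[i].real - n) / (n - 1)            # consistency index
cr = ci / 0.90                                 # Saaty's random index for n = 4 is 0.90
for f, wi in zip(factors, w):
    print(f"{f:>24}: {wi:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable if < 0.10)")
```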

Keywords: analytic hierarchy process, degradation, forest, geographic information system

Procedia PDF Downloads 310
525 Performance Analysis of Three Absorption Heat Pump Cycles, Full and Partial Loads Operations

Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta

Abstract:

The environmental concerns related to global warming and ozone layer depletion along with the growing worldwide demand for heating and cooling have brought an increasing attention toward ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of the European primary/final energy use, it has been identified as one of the sectors with the most challenging targets in energy use reduction. Heat pumps are commonly considered as a technology able to contribute to the achievement of the targets. Current research focuses on the full load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To do this, investigations of the gas-driven air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both full-load under various temperature conditions and seasonal performances are predicted by means of numerical simulations. It has been considered that small capacity appliances are usually equipped with fixed geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads and vapor exchange (VX) cycle is found to achieve the highest efficiency. It is noticed that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Thus, partially, the novelty introduced in the paper is the investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309). The overall result is a modest variation in the seasonal performance for analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor-exchange).

Keywords: absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle

Procedia PDF Downloads 96
524 Barriers to Business Model Innovation in the Agri-Food Industry

Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm Björklund, Maya Hoveskog, Per-Ola Ulvenblad

Abstract:

The importance of business model innovation (BMI) is widely recognized. This is also valid for firms in the agri-food industry, which is closely connected to global challenges. Worldwide food production will have to increase by 70% by 2050, and the United Nations' sustainable development goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries and scholars need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. The study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of ‘BM’ or ‘BMI’ with agriculture-related and food-related terms (e.g. ‘agri-food sector’) published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action. What is considered in the public mind to be a barrier can in reality be very different from the actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible). The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national levels (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or negative attitudes towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human factor barriers (individuals' attitudes, histories, etc.); contextual barriers related to company and industry settings; and more abstract barriers (government regulations, value chain position, and weather). However, human factor barriers, and opportunities, related to family-owned businesses with idealistic values and attitudes that own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and illustrating them with empirical cases. We argue that internal barriers, such as human factor barriers and values and attitudes, are crucial to overcome in order to develop BMI. However, they can be as hard to overcome as, for example, institutional barriers such as government regulations. The implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-food firms.

Keywords: agri-food, barriers, business model, innovation

Procedia PDF Downloads 214
523 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the regions most vulnerable to climate change due to flooding and sea level rise, and it therefore faces an increased burden of climate change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season, and climate is believed to be an important factor for dengue transmission. This study aims to enhance dengue prediction capacity by relating dengue incidence to climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models of vector-host infectious disease, with larva, mosquito, and human compartments, were used to calculate the impact of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from GSMaP (Global Satellite Mapping of Precipitation) satellite observations, while land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final number of infections was derived to detect dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, peaking during the rainy season, and that predicted dengue incidence follows this dynamic well for the whole study region. However, the largest outbreak, in 2007, was not captured by the model, reflecting the nonlinear dependence of transmission on climate. Other possible effects are discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
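As a minimal illustration of how a seasonal reproduction number can be driven by monthly climate, the sketch below uses a Ross-Macdonald-style formula; the parameter values and the rainfall/temperature responses are assumptions for illustration and are not the study's calibrated model.

```python
# Ross-Macdonald-style seasonal reproduction number driven by monthly climate (illustrative)
import math

def seasonal_r0(m, a, b_h, b_m, mu_m, tau, r):
    """m: mosquitoes per human, a: bites per mosquito per day, b_h/b_m: transmission
    probabilities, mu_m: mosquito mortality rate, tau: extrinsic incubation period (days),
    r: human recovery rate."""
    return math.sqrt((m * a**2 * b_h * b_m * math.exp(-mu_m * tau)) / (r * mu_m))

# Hypothetical monthly rainfall (mm) and temperature (deg C) for one province
rain = [10, 15, 30, 60, 150, 220, 250, 240, 230, 180, 60, 20]
temp = [26, 27, 28, 29, 29, 28, 28, 27, 27, 27, 27, 26]

for month, (p, t) in enumerate(zip(rain, temp), start=1):
    m = 0.5 + p / 100.0                 # assumed: more rainfall -> more breeding sites
    mu_m = 0.12 - 0.001 * (t - 27)      # assumed: warmer -> slightly longer mosquito survival
    r0 = seasonal_r0(m=m, a=0.5, b_h=0.4, b_m=0.4, mu_m=mu_m, tau=10, r=1/7)
    print(f"month {month:2d}: R0 ~ {r0:.2f}")
```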

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 370
522 Numerical Modelling of Prestressed Geogrid Reinforced Soil System

Authors: Soukat Kumar Das

Abstract:

Rapid industrialization and population growth have resulted in a scarcity of sites with suitable ground conditions. This has driven the need for ground improvement by means of geosynthetic reinforcement, with the minimum possible settlement and the maximum possible safety. Prestressing the geosynthetic offers an economical yet safe method of achieving this goal. The commercially available software PLAXIS 3D has made the analysis of prestressed geosynthetics simpler, with more realistic simulations of the ground. In the present work, the effects of prestressing the geogrid and of footing interference on the load bearing capacity and settlement characteristics of Unreinforced (UR), Geogrid Reinforced (GR), and Prestressed Geogrid Reinforced (PGR) soil are analysed numerically using PLAXIS 3D. The results of the numerical analysis have been validated against those reported in the reference paper and are in very good agreement with the actual field values, with very small variation. The GR soil improves the bearing pressure by 240%, whereas the PGR soil improves it by almost 500% at 1 mm settlement; in fact, the PGR soil enhances the bearing pressure of the GR soil by almost 200%. The settlement reduction is also very significant: at a bearing pressure of 100 kPa, the settlement of the PGR soil is reduced by about 88% with respect to the UR soil and by up to 67% with respect to the GR soil. The prestressing force enhances the reinforcement mechanism, resulting in the increased bearing pressure. The deformation at the geogrid layer is 13.62 mm for the GR soil, whereas it decreases to a mere 3.5 mm for the PGR soil, which confirms the effect of prestressing on the geogrid layer. The improvement factor, conventionally known as the Bearing Capacity Ratio and computed at different settlements, varies in the range 1.66-2.40 for the GR soil and between 3.58 and 5.12 for the PGR soil, in both cases with respect to the UR soil. The effect of prestressing was also observed in the case of two interfering square footings. The centre-to-centre distance between the two footings (SFD) was taken as B, 1.5B, 2B, 2.5B, and 3B, where B is the width of the footing. For the UR soil, the bearing pressure improved up to a spacing of 1.5B, after which it remained almost the same, whereas for the GR soil the zone of influence extended to 2B and for the PGR soil it extended further to 2.5B. The zone of interference for the PGR soil is thus about 67% larger than for the UR soil and almost 25% larger than for the GR soil.
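To clarify how the improvement factor (bearing capacity ratio) is obtained from pressure-settlement behaviour, the sketch below interpolates hypothetical pressure-settlement curves at a common settlement level; the curve values are illustrative and are not the PLAXIS 3D results of the study.

```python
# Improvement factor (bearing capacity ratio) from hypothetical pressure-settlement curves
import numpy as np

settlement = np.array([0.5, 1.0, 2.0, 5.0, 10.0])       # mm
q_ur  = np.array([ 40.0,  60.0,  90.0, 140.0, 190.0])   # kPa, unreinforced
q_gr  = np.array([ 90.0, 145.0, 210.0, 320.0, 430.0])   # kPa, geogrid reinforced
q_pgr = np.array([180.0, 300.0, 430.0, 640.0, 850.0])   # kPa, prestressed geogrid reinforced

def improvement_factor(q_reinforced, q_unreinforced, settlement, s_target):
    """Bearing capacity ratio at a common settlement level (linear interpolation)."""
    qr = np.interp(s_target, settlement, q_reinforced)
    qu = np.interp(s_target, settlement, q_unreinforced)
    return qr / qu

for s in (1.0, 5.0):
    print(f"s = {s} mm: IF_GR = {improvement_factor(q_gr, q_ur, settlement, s):.2f}, "
          f"IF_PGR = {improvement_factor(q_pgr, q_ur, settlement, s):.2f}")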

Keywords: bearing, geogrid, prestressed, reinforced

Procedia PDF Downloads 385
521 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and at distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 s, i.e., anechoic, and approximately 1, 2, and 3 s), as well as at four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios and with variations of recording distance, as well as under reverberant conditions with varying recording distance. LNCC showed performance as high as that of the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, compensating for the music signal degradation and improving the accuracy of the chord recognition system.
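As one possible reading of the locally-normalized quarter-tone idea, the sketch below builds quarter-tone-spaced triangular filters and normalizes each band by the mean energy of its neighbouring bands; the centre frequencies, normalization window, and all constants are assumptions for illustration, not the authors' LNQT definition.

```python
# Quarter-tone triangular filterbank with local normalization (illustrative sketch)
import numpy as np

def quarter_tone_centers(f_min=65.4, n_bands=96):
    # 24 bands per octave (quarter tones), starting near C2
    return f_min * 2.0 ** (np.arange(n_bands) / 24.0)

def triangular_filterbank(freqs, centers):
    fb = np.zeros((len(centers), len(freqs)))
    for i in range(1, len(centers) - 1):
        lo, c, hi = centers[i - 1], centers[i], centers[i + 1]
        rising = (freqs - lo) / (c - lo)
        falling = (hi - freqs) / (hi - c)
        fb[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    return fb

def locally_normalized(spec_frame, fb, window=11, eps=1e-8):
    """Apply the filterbank, then divide each band by the mean of its neighbours."""
    band_energy = fb @ spec_frame
    kernel = np.ones(window) / window
    local_mean = np.convolve(band_energy, kernel, mode="same")
    return band_energy / (local_mean + eps)

# Toy usage on a single magnitude-spectrum frame
sr, n_fft = 44100, 8192
freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
spec_frame = np.abs(np.random.randn(len(freqs)))   # stand-in for a real STFT frame
fb = triangular_filterbank(freqs, quarter_tone_centers())
lnqt_frame = locally_normalized(spec_frame, fb, window=11)
print(lnqt_frame.shape)
```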

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 222
520 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow

Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam

Abstract:

Studies on highway safety are the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and its relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on crash severity. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway (straight or curved section); time of day; driveway density; presence of a median; median openings; gradient; operating speed; and annual average daily traffic. These variables were selected after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using logit models and negative binomial regression. The statistical models show that the horizontal alignment components, driveway density, time of day, operating speed, and annual average daily traffic are significantly related to crash severity, for both fatal and injury crashes. Further, annual average daily traffic has a more significant effect on severity than the other variables, and the contribution of the horizontal alignment components to crash severity is also significant. Logit models predict crashes better than the negative binomial regression models. The results of the study will help transport planners consider these aspects at the planning stage for highways operating under heterogeneous traffic flow conditions.
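A hedged sketch of a binary logit model of crash severity of the kind described in the abstract is given below; the variable names, the coefficients used to simulate the data, and the sample itself are hypothetical, not the study's crash records.

```python
# Binary logit model of crash severity on simulated data (illustrative only)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated crash-level predictors similar to those listed in the abstract
crashes = pd.DataFrame({
    "curve_section":    rng.integers(0, 2, n),        # 1 = crash on a curved section
    "night":            rng.integers(0, 2, n),        # 1 = night-time crash
    "driveway_density": rng.uniform(1, 8, n),         # accesses per km
    "operating_speed":  rng.uniform(60, 100, n),      # km/h
    "aadt_thousands":   rng.uniform(8, 30, n),        # annual average daily traffic / 1000
})

# Assumed relationship used only to simulate a severity outcome (1 = fatal/injury)
logit_p = (-9.0 + 0.6 * crashes.curve_section + 0.4 * crashes.night
           + 0.15 * crashes.driveway_density + 0.06 * crashes.operating_speed
           + 0.08 * crashes.aadt_thousands)
crashes["severe"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "severe ~ curve_section + night + driveway_density + operating_speed + aadt_thousands",
    data=crashes,
).fit(disp=False)
print(model.params)   # coefficient signs indicate the direction of effect on severity odds
```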

Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety

Procedia PDF Downloads 281
519 Phenology and Size in the Social Sweat Bee, Halictus ligatus, in an Urban Environment

Authors: Rachel A. Brant, Grace E. Kenny, Paige A. Muñiz, Gerardo R. Camilo

Abstract:

The social sweat bee, Halictus ligatus, has been documented to alter its phenology in response to changes in the temporal dynamics of resources. Furthermore, H. ligatus exhibits polyethism in natural environments as a consequence of variation in resources. Yet we do not know if or how H. ligatus responds to these variations in urban environments. As urban environments become more widespread, and with the human population expected to reach nine billion by 2050, it is crucial to understand how resources are allocated by bees in cities. We hypothesize that in urban regions, where floral availability varies with human activity, H. ligatus will exhibit polyethism in order to match the extremely localized spatial variability of resources. We predict that in an urban setting, where resources vary both spatially and temporally, the phenology of H. ligatus will shift in response to these fluctuations. This study was conducted in Saint Louis, Missouri, at fifteen sites varying in size and management type (community garden, urban farm, prairie restoration). Bees were collected by hand netting from 2013 to 2016. Results suggest that the largest individuals, mostly gynes, occurred in lower-income neighborhood community gardens in May and August. We used a model averaging procedure, based on information-theoretic methods, to determine the best model for predicting bee size. Our results suggest that month and locality within the city are the best predictors of bee size. Halictus ligatus was observed to comply with the predictions of polyethism from 2013 to 2015. However, in 2016 there was an almost complete absence of the smallest worker castes, a significant deviation from what is expected under polyethism. This could be attributed to shifts in planting decisions, shifts in plant-pollinator matches, or local climatic conditions. Further research is needed to determine whether this divergence from polyethism is a new strategy for the social sweat bee as the climate continues to change, or a response to human-dominated landscapes.
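The information-theoretic model averaging step can be illustrated with Akaike weights, as in the small sketch below; the candidate models, predictor names, and simulated bee measurements are hypothetical, not the study's data.

```python
# AIC-based comparison of candidate models for bee size (illustrative simulated data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
bees = pd.DataFrame({
    "month":       rng.integers(5, 10, n),       # May-September
    "site_income": rng.uniform(20, 90, n),       # neighbourhood income index
    "site_area":   rng.uniform(0.05, 2.0, n),    # garden size, ha
})
# Assumed relationship used only to simulate a body-size measurement (mm)
bees["head_width"] = (2.0 + 0.05 * bees.month - 0.004 * bees.site_income
                      + rng.normal(0, 0.1, n))

formulas = [
    "head_width ~ month",
    "head_width ~ site_income",
    "head_width ~ month + site_income",
    "head_width ~ month + site_income + site_area",
]
fits = [smf.ols(f, data=bees).fit() for f in formulas]
aic = np.array([f.aic for f in fits])

# Akaike weights: relative support for each candidate model
delta = aic - aic.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
for f, w in zip(formulas, weights):
    print(f"{w:.3f}  {f}")
```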

Keywords: polyethism, urban environment, phenology, social sweat bee

Procedia PDF Downloads 207
518 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Spectral Unmixing Method and Assess the Extent and Severity of the Affected Area Using Neural Network Approach

Authors: Sunil Chandra, Triparna Barman, Vikas Gusain, Himanshu Rawat

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burn ratio (dNBR) method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major causes of fires in this region are anthropogenic: fires set to obtain fresh leaves, to scare wild animals away from agricultural crops, for grazing practices within the reserved forest, and for cooking and other purposes. These fires affect large areas on the ground, necessitating precise estimation of the burnt area for management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first is the dNBR approach, which uses burn ratio values generated from the Short Wave Infrared (SWIR) and Near Infrared (NIR) bands of the Sentinel-2A image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It has been found that dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain, where the landscape is strongly shaped by topographical variation, vegetation types, and tree density, the results may be heavily influenced by topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment under such variation may not be carried out effectively using the dNBR approach commonly applied over large areas. Therefore, a second approach attempted in the present study utilizes a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach based on Sentinel-2A bands. The training and testing data are generated from the Sentinel-2A data and the national field inventory and are then used to generate outputs with machine learning tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated in rugged terrain using spectral unmixing methods, which are able to resolve noise in the data and classify individual pixels into the correct burnt/unburnt class.
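As a minimal sketch of the dNBR computation described above, using the Sentinel-2 NIR (band 8) and SWIR (band 12) reflectances, the example below computes pre- minus post-fire NBR and bins it into broad severity classes; the reflectance arrays are toy values and the thresholds follow commonly cited USGS ranges rather than the study's own calibration.

```python
# dNBR from pre- and post-fire NIR/SWIR reflectances, with illustrative severity classes
import numpy as np

def nbr(nir, swir, eps=1e-6):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + eps)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

def severity_class(d):
    """Map dNBR to broad burn-severity classes (illustrative thresholds)."""
    bins = [-np.inf, 0.1, 0.27, 0.44, 0.66, np.inf]
    labels = ["unburnt/low", "low", "moderate-low", "moderate-high", "high"]
    return np.array(labels, dtype=object)[np.digitize(d, bins) - 1]

# Toy 2x2 reflectance tiles standing in for co-registered pre/post-fire images
nir_pre   = np.array([[0.35, 0.40], [0.38, 0.36]])
swir_pre  = np.array([[0.15, 0.14], [0.16, 0.15]])
nir_post  = np.array([[0.20, 0.38], [0.22, 0.35]])
swir_post = np.array([[0.30, 0.15], [0.28, 0.16]])

d = dnbr(nir_pre, swir_pre, nir_post, swir_post)
print(np.round(d, 2))
print(severity_class(d))
```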

Keywords: dNBR, spectral unmixing, neural network, forest stratum

Procedia PDF Downloads 11