Search results for: deep neural models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6005

3155 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a method for three-dimensional (3D) model indexing based on the definition of a new descriptor built from spherical harmonics. The purpose of the method is to minimize both the processing time over the database of object models and the time needed to search for objects similar to a query object. We first define the new descriptor using a new division of the 3D object into a sphere; we then define a new distance that is used to search for similar objects in the database.
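
As a rough illustration of this style of descriptor, the sketch below bins a model's points into concentric spherical shells and uses the per-degree spherical harmonic energies, which are rotation invariant, as the signature; the shell count, maximum degree, and the L2 distance are illustrative assumptions, not the authors' exact construction.

```python
# Hedged sketch: rotation-invariant shape signature from spherical harmonic energies.
import numpy as np
from scipy.special import sph_harm

def sh_descriptor(points, n_shells=8, l_max=8):
    """points: (N, 3) array of surface samples of one 3D model."""
    pts = points - points.mean(axis=0)                       # center on the centroid
    r = np.linalg.norm(pts, axis=1) + 1e-12
    theta = np.arctan2(pts[:, 1], pts[:, 0]) % (2 * np.pi)   # azimuth in [0, 2pi)
    phi = np.arccos(np.clip(pts[:, 2] / r, -1.0, 1.0))       # polar angle in [0, pi]
    edges = np.linspace(0.0, r.max(), n_shells + 1)          # divide the model into shells
    shell = np.clip(np.digitize(r, edges) - 1, 0, n_shells - 1)
    desc = np.zeros((n_shells, l_max + 1))
    for s in range(n_shells):
        in_s = shell == s
        if not in_s.any():
            continue
        for l in range(l_max + 1):
            # the energy of each degree l is invariant under rotations of the model
            e = 0.0
            for m in range(-l, l + 1):
                c = sph_harm(m, l, theta[in_s], phi[in_s]).sum() / in_s.sum()
                e += abs(c) ** 2
            desc[s, l] = e
    return desc.ravel()

def sh_distance(d1, d2):
    return np.linalg.norm(d1 - d2)   # rank database objects against the query by this
```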

Keywords: 3D indexing, spherical harmonics, similarity of 3D objects, similarity measurement

Procedia PDF Downloads 426
3154 Construction of a Dynamic Model of Cerebral Blood Circulation for Future Integrated Control of Brain State

Authors: Tomohiko Utsuki

Abstract:

Brain resuscitation is becoming increasingly important as various clinical guidelines pertinent to emergency care are revised. In brain resuscitation, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required to stabilize the physiological state of the brain, and is described as an essential point of treatment in many guidelines for disorders and diseases such as brain injury, stroke, and encephalopathy. An integrated control system for BT, ICP, and CBF would therefore greatly reduce the burden on medical staff and improve treatment outcomes in brain resuscitation. Developing such a control system requires models of BT, ICP, and CBF for control simulation, because trial-and-error experiments on patients are not ethically permissible. A static model of cerebral blood circulation, from the intracranial arteries and vertebral artery to the jugular veins, has already been constructed and verified. However, that model cannot represent the pooling of blood in vessels, which is one cause of intracranial hypertension, nor the pulsation of vessels caused by blood pressure changes, which can affect cerebral tissue pressure. Thus, a dynamic model of cerebral blood circulation was constructed that takes into consideration the elasticity of the blood vessels and the inertia of the vessel walls. The constructed dynamic model was numerically analyzed using normal data: each arterial blood flow in the cerebral circulation, the distribution of blood pressure in the Circle of Willis, and the change of blood pressure along the flow path were calculated and verified against physiological knowledge. Because each calculated value fell within the generally known normal range, the model adequately represents at least the normal physiological state of the brain; verifying its accuracy in cases of disease or disorder is the next task. Currently, a migration model of extracellular fluid and a model of heat transfer in cerebral tissue are under construction as parts of an integrated model of the brain's physiological state, which is necessary for developing a future integrated control system for BT, ICP, and CBF. United with such models, the present model is applicable to constructing an integrated model representing at least the normal physiological state of the brain.
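
The sketch below shows the kind of lumped-parameter vessel segment such a dynamic model can be built from: a compliance captures wall elasticity (and hence blood pooling) and an inertance captures the inertia of the moving blood column. All parameter values and the pulsatile input pressure are illustrative assumptions, not the author's calibrated model.

```python
# Hedged sketch of one compliant, inertial vessel segment (Windkessel-style).
import numpy as np
from scipy.integrate import solve_ivp

mmHg = 133.32
R1, R2 = 5.0e8, 5.0e8          # proximal/distal flow resistances [Pa*s/m^3] (assumed)
C, Lb = 1.5e-9, 1.0e6          # wall compliance [m^3/Pa] and blood inertance (assumed)
P_ven = 5.0 * mmHg             # downstream venous pressure

def p_art(t):                  # pulsatile arterial driving pressure, ~100 +/- 20 mmHg, 1 Hz
    return (100.0 + 20.0 * np.sin(2 * np.pi * t)) * mmHg

def rhs(t, y):
    P, Q = y                                   # segment pressure and outflow
    dP = ((p_art(t) - P) / R1 - Q) / C         # elasticity: the segment stores volume
    dQ = (P - P_ven - R2 * Q) / Lb             # inertia: flow responds with a lag
    return [dP, dQ]

sol = solve_ivp(rhs, (0.0, 5.0), [90.0 * mmHg, 0.0], max_step=1e-3)
print("mean flow [mL/s]:", 1e6 * sol.y[1, sol.t > 2.0].mean())  # compare with normal range
```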

Keywords: dynamic model, cerebral blood circulation, brain resuscitation, automatic control

Procedia PDF Downloads 147
3153 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In this paper, the thermo-fluid dynamics of mixed convection (natural plus forced) and the principles of turbulent flow around complex geometries are studied. In these applications, it is necessary to analyze the interaction between the flow field and a heated immersed body held at constant surface temperature. The paper presents a study of two-dimensional incompressible Newtonian flow around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations computes the temperature field under Dirichlet boundary conditions. Important quantities are calculated: the Strouhal number, obtained via the Fast Fourier Transform (FFT), the Nusselt number, the drag and lift coefficients, and the velocity and pressure fields. Streamlines and isotherms are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space, and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method coupling the calculation of pressure, velocity, and temperature. Turbulence was simulated with the Smagorinsky and Spalart-Allmaras models. The first is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects; the Spalart-Allmaras model uses a single transport equation for the turbulent viscosity. The results were compared with numerical data, validating the treatment of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool for simulating flow around complex geometries, and the results showed good numerical convergence with respect to the adopted references.
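
For the Smagorinsky closure named above, the subgrid eddy viscosity can be computed on a 2D grid as in the sketch below, nu_t = (Cs*Delta)^2 |S| with |S| = sqrt(2 S_ij S_ij); the Smagorinsky constant and the test shear layer are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of the Smagorinsky subgrid eddy viscosity on a uniform 2D grid.
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, Cs=0.17):
    """u, v: 2D velocity fields; returns the eddy viscosity field nu_t."""
    dudx = np.gradient(u, dx, axis=0); dudy = np.gradient(u, dy, axis=1)
    dvdx = np.gradient(v, dx, axis=0); dvdy = np.gradient(v, dy, axis=1)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)                     # symmetric strain-rate tensor
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    delta = np.sqrt(dx * dy)                      # filter width = grid scale
    return (Cs * delta) ** 2 * S_mag              # added to the molecular viscosity

# toy check on a shear layer: nu_t peaks where the strain rate is largest
x = np.linspace(0, 1, 64); y = np.linspace(0, 1, 64)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.tanh((Y - 0.5) / 0.05); v = np.zeros_like(u)
print(smagorinsky_nu_t(u, v, x[1] - x[0], y[1] - y[0]).max())
```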

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 109
3152 Parallel Opportunity for Water Conservation and Habitat Formation on Regulated Streams through Formation of Thermal Stratification in River Pools

Authors: Todd H. Buxton, Yong G. Lai

Abstract:

Temperature management in regulated rivers can involve significant expenditures of water to meet the cold-water requirements of species in summer. For this purpose, flows released from Lewiston Dam on the Trinity River in Northern California are 12.7 m³/s, with temperatures around 11°C, from July through September, giving adult spring Chinook cold water in which to hold in deep pools and mature until spawning in fall. The releases are more than double the flow, and roughly 10°C colder, than the natural conditions before the dam was built. The high, cold releases provide springers the habitat they require but may suppress the stream food base and limit future salmon populations by reducing juvenile fish size, and hence survival to adulthood, given the positive relationship between the two. Field and modeling research was undertaken to explore whether lowering summer releases from Lewiston Dam might promote thermal stratification in river pools, so that both the cold-water needs of adult salmon and the warmer-water requirements of other organisms in the stream biome can be met. For this investigation, a three-dimensional (3D) computational fluid dynamics (CFD) model was developed and validated against field measurements in two deep pools on the Trinity River. Modeling and field observations were then used to identify the flows and temperatures that may form and maintain thermal stratification under different meteorological conditions. Under low flows, a pool was found to be well mixed and thermally homogeneous until temperatures began to stratify shortly after sunrise. Stratification then strengthened through the day until shading from trees and mountains cooled the inlet flow and decayed the thermal gradient, which collapsed shortly before sunset and returned the pool to a well-mixed state. This diurnal cycle of stratification formation and destruction was closely predicted by the 3D CFD model. Both the model and the field observations indicate that thermal stratification maintained the coldest temperatures of the day at ≥2 m depth while providing water around 8°C warmer in the upper 2 m of the pool. Results further indicate that the stratified pool under low flows provided almost the same daily average temperatures as when flows were an order of magnitude higher and stratification was prevented, suggesting that significant water savings may be realized in regulated streams while also providing the diversity of water temperatures the ecosystem requires. With confidence in the 3D CFD model established, the model is now being applied to a dozen pools in the Trinity River to understand how pool bathymetry influences thermal stratification under variable flows and diurnal temperature variations. This knowledge will be used to extend the results to 52 pools in a 64 km reach below Lewiston Dam that meet the depth criterion (≥2 m) for spring Chinook holding. From this, rating curves will be developed to relate discharge to the volume of pool habitat that provides springers the temperature (<15.6°C daily average), velocity (0.15 to 0.4 m/s), and depth that accommodate the escapement target for spring Chinook (6,000 adults) at the maximum fish densities measured in other streams (3.1 m³/fish) during the holding season (May through August). Flow releases that meet these goals will be evaluated for water savings relative to the current flow regime and for their influence on indicator species, including the Foothill Yellow-Legged Frog, and on aspects of the stream biome that support salmon populations, including macroinvertebrate production and juvenile Chinook growth rates.
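
The rating-curve bookkeeping described above reduces to simple volume accounting, sketched below: the habitat a release must supply is the escapement target times the per-fish volume, and the habitat a modeled pool provides is the volume of CFD cells meeting the temperature, velocity, and depth criteria. The random cell arrays are placeholders, not Trinity River model output.

```python
# Hedged sketch of the holding-habitat accounting behind the rating curves.
import numpy as np

target_fish, vol_per_fish = 6000, 3.1           # adults, m^3/fish (from the study)
required_volume = target_fish * vol_per_fish    # = 18,600 m^3 of qualifying habitat

def holding_habitat(temp_C, vel_ms, depth_m, cell_vol_m3):
    # a cell qualifies only if it meets all three holding criteria at once
    ok = (temp_C < 15.6) & (vel_ms >= 0.15) & (vel_ms <= 0.4) & (depth_m >= 2.0)
    return cell_vol_m3[ok].sum()

# hypothetical CFD cells for one pool at one discharge
rng = np.random.default_rng(0)
temp = rng.uniform(9, 20, 10_000); vel = rng.uniform(0.0, 0.8, 10_000)
depth = rng.uniform(0.5, 6.0, 10_000); vol = np.full(10_000, 5.0)
print(holding_habitat(temp, vel, depth, vol), "of", required_volume, "m^3 needed")
```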

Keywords: 3D CFD modeling, flow regulation, thermal stratification, Chinook salmon, foothill yellow-legged frogs, water management

Procedia PDF Downloads 59
3151 Use of Fractal Geometry in Machine Learning

Authors: Fuad M. Alkoot

Abstract:

The main component of a machine learning system is the classifier. Classifiers are mathematical models that perform classification tasks for a specific application area. Additionally, multiple classifiers are often combined, using any of the available methods, to reduce the classification error rate. The benefits gained from combining multiple classifier designs have motivated the development of diverse approaches to multiple-classifier systems. We aim to investigate the use of fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of the data and use the results in the development of a combination strategy.
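
One common estimator of the fractal dimension referred to above is box counting, sketched below: the number of occupied boxes N(s) at scale s follows N(s) ~ s^(-D), so D is the slope of log N(s) against log(1/s). The point set and scales are illustrative; the abstract does not specify which estimator is used.

```python
# Hedged sketch: box-counting estimate of the fractal dimension of a point set.
import numpy as np

def box_counting_dimension(points, n_scales=6):
    pts = points - points.min(axis=0)
    pts = 0.999 * pts / (pts.max(axis=0) + 1e-12)        # scale into the unit cube
    sizes = 2.0 ** -np.arange(1, n_scales + 1)           # box side lengths 1/2 ... 1/64
    counts = [len(np.unique(np.floor(pts / s).astype(int), axis=0)) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope                                          # the estimated dimension D

# sanity check: a uniformly filled square should give D close to 2
rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((20_000, 2))))
```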

Keywords: fractal geometry, machine learning, classifier, fractal dimension

Procedia PDF Downloads 206
3150 Evaluation of a Surrogate Based Method for Global Optimization

Authors: David Lindström

Abstract:

We evaluate the performance of a numerical method for the global optimization of expensive functions. The method uses a response surface to guide the search for the global optimum. This metamodel can be based on radial basis functions, kriging, or a combination of different models. We discuss how to set the cycling parameters of the optimization method to balance local and global search. We also discuss the possible problem of Runge oscillations in the response surface.
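
A minimal sketch of this surrogate-guided search, assuming an RBF response surface and a simple alternation between a global and a local infill cycle (the paper's exact cycling rule is not specified), is given below.

```python
# Hedged sketch: response-surface-guided global optimization with infill cycling.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive(x):                       # stand-in for the costly simulation
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)   # Rastrigin

rng = np.random.default_rng(1)
lb, ub, d = -5.12, 5.12, 2
X = rng.uniform(lb, ub, (10, d)); y = expensive(X)      # initial design

for it in range(30):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    if it % 2 == 0:                                     # global cycle: search widely
        cand = rng.uniform(lb, ub, (2000, d))
    else:                                               # local cycle: perturb incumbent
        cand = X[np.argmin(y)] + 0.1 * (ub - lb) * rng.standard_normal((2000, d))
        cand = np.clip(cand, lb, ub)
    x_new = cand[np.argmin(surrogate(cand))]            # infill = surrogate minimizer
    X = np.vstack([X, x_new]); y = np.append(y, expensive(x_new))

print("best found:", X[np.argmin(y)], y.min())
```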

Keywords: expensive function, infill sampling criterion, kriging, global optimization, response surface, Runge phenomenon

Procedia PDF Downloads 570
3149 Unusual High Origin and Superficial Course of Radial Artery: A Case Report with Embryological Explanation

Authors: Anasuya Ghosh, Subhramoy Chaudhury

Abstract:

During routine cadaveric dissection in the gross anatomy lab of our institution, a radial artery was found with an unusual origin and a superficial course. Normally, the radial artery originates as one of the terminal branches of the brachial artery at the level of the neck of the radius, and lies along the lateral border of the forearm deep to the brachioradialis muscle. While dissecting a 72-year-old Caucasian female cadaver, it was found that the right radial artery originated from the lateral aspect of the upper brachial artery, 2 cm below the lower border of the teres major muscle. The radial artery then crossed the brachial artery and median nerve superficially, from lateral to medial, and rested superficially at the cubital fossa. Embryologically, such vascular anomalies can be explained by the failure to disappear, or abnormal persistence, of normally insignificant embryonic vessels. As the radial artery is one of the most important arteries of the upper limb, its variations and their related complications are clinically significant. This unusual origin and course of the radial artery should be kept in mind by all healthcare providers, including surgeons and radiologists, during routine venipuncture, orthopedic and plastic surgery of the arm, coronary angiography via the radial approach, etc., to prevent unwanted complications.

Keywords: brachial artery anomalies, brachio-radial artery, high origin radial artery, superficial radial artery

Procedia PDF Downloads 322
3148 Performance Comparison of Classification-Based Outlier Detection Techniques in Wireless Sensor Networks

Authors: Ayadi Aya, Ghorbel Oussama, M. Obeid Abdulfattah, Abid Mohamed

Abstract:

Nowadays, many wireless sensor networks are deployed in the real world to collect valuable raw sensed data. The challenge is to extract high-level knowledge from this huge amount of data, and the identification of outliers can lead to the discovery of useful and meaningful knowledge. In the field of wireless sensor networks, an outlier is defined as a measurement that deviates from the normal behavior of the sensed data. Many outlier detection techniques for WSNs have been studied extensively in the past decade, with a focus on classification-based algorithms that identify outliers in real transaction datasets. This survey aims at providing a structured and comprehensive overview of the existing research on classification-based outlier detection techniques as applicable to WSNs. We identify the key hypotheses these approaches use to differentiate between normal and outlier behavior, and aim to provide a succinct and accessible understanding of the classification-based techniques. Furthermore, we identify the advantages and disadvantages of the different classification-based techniques and present a comparative guide with useful paradigms for promoting outlier detection research in various WSN applications, along with suggestions for future research.
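
As one concrete instance of the classification-based family surveyed here, the sketch below trains a one-class SVM on normal sensor behavior and flags deviating measurements; the synthetic temperature/humidity readings and the hyperparameters are illustrative assumptions.

```python
# Hedged sketch: one-class SVM outlier detection for sensor measurements.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal([25.0, 40.0], [1.0, 3.0], size=(500, 2))   # temp [C], humidity [%]
faulty = rng.normal([40.0, 10.0], [2.0, 2.0], size=(20, 2))    # a drifting sensor

# learn a boundary around normal behavior; nu bounds the fraction flagged as outliers
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)
readings = np.vstack([normal[:5], faulty[:5]])
print(clf.predict(readings))   # +1 = normal behavior, -1 = outlier
```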

Keywords: bayesian networks, classification-based approaches, KPCA, neural networks, one-class SVM, outlier detection, wireless sensor networks

Procedia PDF Downloads 489
3147 Roasting Degree of Cocoa Beans by Artificial Neural Network (ANN) Based Electronic Nose System and Gas Chromatography (GC)

Authors: Juzhong Tan, William Kerr

Abstract:

Roasting is a critical step in chocolate processing, in which special flavors develop, moisture content decreases, and better processing properties emerge. Determining the roasting degree of cocoa beans is therefore important for chocolate manufacturers to ensure the quality of chocolate products, and it also determines the commercial value of the beans collected from cocoa farmers. Assessment of roasting degree currently relies on human specialists, who are sometimes biased, and on chemical analysis, which takes a long time and is inaccessible to many manufacturers and farmers. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was used to detect the gas emitted by cocoa beans of different roasting degrees (0, 20, 30, and 40 min), and the signals collected by the gas sensors were used to train a three-layer ANN. Chemical analysis of the graded beans was performed with a conventional GC-MS system, and the contents of volatile compounds were used to train another ANN as a reference for the ANN trained on electronic nose signals. Both trained ANNs were then used to predict the roasting degree of cocoa beans for validation. The best grading accuracy achieved by the ANN trained on electronic nose signals (using signals from TGS 813, 826, 820, 880, 830, 2620, 2602, and 2610) was 96.7%, whereas the GC-trained ANN achieved an accuracy of 83.8%.
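
A minimal sketch of this pipeline, with synthetic stand-ins for the eight sensor responses and a small three-layer network (input, one hidden layer, output), is shown below; the data, split, and layer sizes are illustrative assumptions, not the study's setup.

```python
# Hedged sketch: grading roasting-time classes from e-nose features with a 3-layer ANN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
classes, n_per, n_sensors = 4, 60, 8          # 4 roasting degrees, 8 TGS sensors
centers = rng.uniform(0.5, 3.0, (classes, n_sensors))
X = np.vstack([c + 0.15 * rng.standard_normal((n_per, n_sensors)) for c in centers])
y = np.repeat(np.arange(classes), n_per)      # roasting-degree labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))   # input-hidden-output = 3 layers
model.fit(X_tr, y_tr)
print("grading accuracy:", model.score(X_te, y_te))
```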

Keywords: artificial neural network, cocoa bean, electronic nose, roasting

Procedia PDF Downloads 230
3146 Turbulent Channel Flow Synthesis using Generative Adversarial Networks

Authors: John M. Lyne, K. Andrea Scott

Abstract:

In fluid dynamics, direct numerical simulations (DNS) of turbulent flows require large numbers of grid nodes to resolve all scales of energy transfer appropriately. Due to the size of these databases, sharing the datasets within the academic community is a challenge. Recent work has investigated the use of super-resolution to enable database sharing, where a low-resolution flow field is super-resolved to high resolution using a neural network. Generative Adversarial Networks (GANs) have grown in popularity, with impressive results in the generation of faces, landscapes, and more. This work investigates the generation of unique high-resolution channel-flow velocity fields from a low-dimensional latent space using a GAN. The training objective of the GAN is to generate samples whose distribution is ideally indistinguishable from the distribution of the training data. In this study, the network is trained using samples drawn from a statistically stationary channel flow at a Reynolds number of 560. Results show that the turbulent statistics and energy spectra of the generated flow fields are in reasonable agreement with those of the DNS data, demonstrating that GANs can produce the intricate multi-scale phenomena of turbulence.
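
A minimal sketch of the adversarial training loop described, with a Gaussian stand-in for the normalized DNS snapshots and deliberately small fully connected networks, is given below; all sizes and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: GAN that maps a latent vector to a (flattened) velocity field.
import torch
import torch.nn as nn

latent_dim, H = 64, 32
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, H * H), nn.Tanh())          # generated velocity field
D = nn.Sequential(nn.Linear(H * H, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                         # real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def dns_batch(n):        # placeholder for normalized DNS channel-flow snapshots
    return torch.randn(n, H * H).clamp(-1, 1)

for step in range(200):
    real = dns_batch(64)
    fake = G(torch.randn(64, latent_dim))
    # discriminator: push real logits toward 1, fake logits toward 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: make the discriminator label generated fields as real
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```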

Keywords: computational fluid dynamics, channel flow, turbulence, generative adversarial network

Procedia PDF Downloads 199
3145 SIPINA Induction Graph Method for Seismic Risk Prediction

Authors: B. Selma

Abstract:

The aim of this study is to test the feasibility of the SIPINA method for predicting the damage parameters controlling the seismic response. The approach developed takes into consideration both the focal depth and the peak ground acceleration; the parameter to determine is the displacement. The data used to train the method, drawn from nonlinear seismic analysis, are described and applied to a class of damage models for typical structures of the existing urban infrastructure of Jassy, Romania. The results obtained indicate an influence of both the focal depth and the peak ground acceleration on the displacement.
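
SIPINA builds induction graphs, a generalization of decision trees, and has no implementation in common Python libraries, so the sketch below swaps in a CART regression tree purely to illustrate the prediction task, mapping focal depth and peak ground acceleration to displacement on synthetic data.

```python
# Hedged sketch: a decision tree as a stand-in for the SIPINA induction graph.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
depth = rng.uniform(5, 60, 300)          # focal depth [km]
pga = rng.uniform(0.05, 0.6, 300)        # peak ground acceleration [g]
disp = 40 * pga / np.sqrt(depth) + rng.normal(0, 0.2, 300)   # assumed response [cm]

tree = DecisionTreeRegressor(max_depth=4).fit(np.c_[depth, pga], disp)
print(tree.predict([[15.0, 0.3]]))       # displacement for a shallow, strong event
```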

Keywords: SIPINA algorithm, seism, focal depth, peak ground acceleration, displacement

Procedia PDF Downloads 306
3144 Development of Catalyst from Waste Egg Shell for Biodiesel Production by Using Waste Vegetable Oil

Authors: Victor Chinecherem Ejeke, Raphael Eze Nnam

Abstract:

The main objective of this research is to produce biodiesel from waste vegetable oil using activated eggshell waste as a solid catalyst, via a transesterification reaction. Waste eggshells were calcined at 700°C, 800°C, and 900°C for 3 h to prepare the renewable catalyst. The calcined waste eggshell catalyst was characterized by X-Ray Fluorescence (XRF) spectroscopy, which revealed CaO as the major constituent (90.86%); this was further confirmed by X-Ray Diffraction (XRD) and Fourier Transform Infrared (FTIR) analyses. The prepared catalyst was used for the transesterification reaction, and the effects of calcination temperature (700 to 900°C), Deep Eutectic Solvent (DES) loading (3 to 18 wt.%), and Waste Egg Shell (WES) catalyst loading (6 to 14 wt.%) on the conversion to biodiesel were studied. The biodiesel yield with the waste eggshell catalyst (91%) is comparable to that of conventional catalysts such as sodium hydroxide, which yield 80-90%. The maximum biodiesel yield was obtained at an oil-to-methanol molar ratio of 1:10, a temperature of 65°C, and a catalyst loading of 14 wt.%. Gas chromatographic (GC-MS) analysis showed the biodiesel to contain 30.92% methyl tetradecanoate (C₁₄H₂₈O₂). The fuel properties of the biodiesel (flash point 138°C) were comparable to commercial diesel, so it can be used in compression-ignition engines. The results indicate that catalysts derived from waste eggshell have high potential for use in biodiesel production by transesterification of waste vegetable oil, with the advantages of reusability and of not requiring water-washing steps.

Keywords: waste vegetable oil, catalyst, biodiesel, waste egg shell

Procedia PDF Downloads 200
3143 Finite Sample Inferences for Weak Instrument Models

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, quite biased in finite samples. Finite-sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
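
The flavor of such a Monte Carlo experiment is sketched below: a just-identified IV model with a deliberately weak first stage, whose estimator distribution departs visibly from its asymptotic normal approximation. The instrument strength and sample size are illustrative assumptions.

```python
# Hedged sketch: finite-sample behavior of the IV estimator under a weak instrument.
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta, pi = 100, 5000, 1.0, 0.1          # small pi => weak instrument
est = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal(n)                   # instrument
    u, v = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], n).T  # endogeneity
    x = pi * z + v                               # weak first stage
    y = beta * x + u
    est[r] = (z @ y) / (z @ x)                   # IV estimator in the simple model
print("median bias:", np.median(est) - beta)     # markedly biased and non-normal
```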

Keywords: bootstrap, Instrumental Variable, Edgeworth expansions, Saddlepoint expansions

Procedia PDF Downloads 303
3142 Parameter Estimation in Dynamical Systems Based on Latent Variables

Authors: Arcady Ponosov

Abstract:

A novel mathematical approach is suggested that facilitates a compressed representation and efficient validation of parameter-rich ordinary differential equation models describing the dynamics of complex, especially biology-related, systems, based on the identification of the system's latent variables. In particular, an efficient parameter estimation method for the compressed non-linear dynamical systems is developed. The method is applied to so-called 'power-law systems', non-linear differential equations typically used in Biochemical System Theory.
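
The sketch below fits a one-variable power-law (S-system) model of the kind used in Biochemical System Theory, dx/dt = alpha x^g - beta x^h, by nonlinear least squares; the example system, noise level, and bounds are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: parameter estimation for a one-variable power-law system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sim(params, t_eval, x0=0.5):
    alpha, g, beta, h = params
    def rhs(t, x):
        xp = np.clip(x, 1e-9, 1e3)                            # guard against blow-up
        return np.clip(alpha * xp**g - beta * xp**h, -1e3, 1e3)  # production - degradation
    return solve_ivp(rhs, (t_eval[0], t_eval[-1]), [x0], t_eval=t_eval).y[0]

t = np.linspace(0, 5, 40)
true = (2.0, 0.5, 1.0, 1.5)
data = sim(true, t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

fit = least_squares(lambda p: sim(p, t) - data, x0=(1.0, 1.0, 1.0, 1.0),
                    bounds=([0.1, 0.0, 0.1, 0.0], [10.0, 2.0, 10.0, 2.0]))
print("estimated (alpha, g, beta, h):", fit.x.round(2))
```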

Keywords: generalized law of mass action, metamodels, principal components, synergetic systems

Procedia PDF Downloads 349
3141 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware implementation for training cellular neural networks (CNN), using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during testing. Since optoelectronic hardware offers parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, JTC inherently carries most of the computational burden, while the remaining mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object joint input images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be combined with various optical image processing or optical pattern recognition techniques within the same optical setup.
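
A digital analogue of the joint transform correlator is sketched below: the reference and target are placed side by side in a joint input plane, the intensity of its Fourier transform (the joint power spectrum, which is what a CCD would record) is transformed again, and matching inputs produce a pair of off-center correlation peaks. The sizes and test images are illustrative assumptions.

```python
# Hedged sketch: digital simulation of a joint transform correlator (JTC).
import numpy as np

def jtc(reference, target):
    """reference, target: (H, W) grayscale arrays of equal size."""
    H, W = reference.shape
    plane = np.zeros((H, 4 * W))
    plane[:, :W] = reference                   # joint input plane, images side by side
    plane[:, W:2 * W] = target
    jps = np.abs(np.fft.fft2(plane)) ** 2      # joint power spectrum (CCD intensity)
    corr = np.abs(np.fft.fft2(jps))            # second transform -> correlation plane
    return np.fft.fftshift(corr)

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
corr = jtc(ref, ref)                           # identical inputs -> strong twin peaks
cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
corr[cy - 2:cy + 3, cx - 2:cx + 3] = 0         # suppress the central (DC) term
print("cross-correlation peak at", np.unravel_index(np.argmax(corr), corr.shape))
# expected at a horizontal offset of +/- W (the separation of the two inputs)
```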

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 501
3140 Generativism in Language Design and Its Effects on Strings of Constructions

Authors: Christian Uchechukwu Gilbert

Abstract:

Generativism in language design investigates the framework on which varying sentence structures are built in the English language. Propounded by Noam Chomsky in 1965, the theory transforms sentences from an active structure to a passive one by applying its established rules. Resident in the body of syntax, these rules include movement, insertion, substitution, and deletion. Focusing on the movement rule, the analysis employs the qualitative research method, for which the works of scholars were duly consulted for further insight, in line with academic research practice. The investigation showed that the rules of a competent grammar accurately explain how sentences are formulated in a language and how transformation takes place from a deep structure to a surface structure. The structural differences obtainable through dative movement and the deletion of the preposition; passivisation, derived from an active sentence by inserting the preposition 'by', a 'be' verb, and the aspect marker '-en', held as the creative aspect of the language's vocabulary; and the subject-auxiliary inversion that exchanges the auxiliary of a sentence with its subject, thereby transforming a kernel sentence into a polar question, are viewed as external arguments under θ-theory. Generativism in language design therefore changes the available types of sentences and relates one form of linguistic category to others in language design.

Keywords: language, generate, transformation, structure, design

Procedia PDF Downloads 48
3139 Physico-Chemical Quality Study of Geothermal Waters of the Djérid Region, Tunisia

Authors: Anis Eloud, Mohamed Ben Amor

Abstract:

Three-quarters of Tunisia's territory is semi-arid, and the country is characterized by scarce water resources, accentuated by climate variability. Potential water resources are estimated at 4.6 million m³/year, of which 2.7 million m³/year is surface water and 1.9 million m³/year feeds the layers that make up the renewable groundwater resources. The water available in Tunisia easily exceeds health or agricultural salinity standards: barely 50% of water resources have a salinity below 1.5 g/L, divided among surface water (72%), deep groundwater (20%), and shallow aquifers (8%). Southern Tunisia holds the largest aquifer system in the country; its waters are characterized by a relatively high salinity that may exceed 4 g/L, which is at the root of many problems encountered during their exploitation. In the Djérid region, Albian wells are numerous. These wells discharge geothermal water at an average flow of 390 L/s, characterized by a relatively high salinity and a temperature of around 65°C at the source. This promotes the formation of limescale deposits within the supply pipes as the water cools, progressively restricting the lines, with enormous expense to replace them when they become completely plugged. The present work is a study of the quality of the geothermal waters of the Djérid region based on physico-chemical analyses.

Keywords: water quality, salinity, geothermal, supply pipe

Procedia PDF Downloads 518
3138 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded (FG) plates using the particle swarm optimization (PSO) algorithm. The meshless local Petrov-Galerkin (MLPG) method is employed to obtain the FG plate's natural frequencies. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. The first natural frequency of the plate, for conditions where MLPG data are not available, is then predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique; the ANN predictions are in good agreement with the actual data. To simultaneously maximize the first natural frequency and minimize the mass of the FG plate, the weighted-sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide designers of FG plates with useful data.
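
The weighted-sum PSO step is sketched below with closed-form stand-ins for the ANN-predicted first natural frequency and the plate mass; those stand-in functions, the weights, the bounds, and the PSO constants are all illustrative assumptions.

```python
# Hedged sketch: weighted-sum PSO over (thickness ratio, volume fraction index).
import numpy as np

def freq(x):   # stand-in for the ANN/MLPG surrogate of the first natural frequency
    h_ratio, n_vf = x[..., 0], x[..., 1]
    return 100.0 * np.sqrt(h_ratio) / (1.0 + 0.5 * n_vf)

def mass(x):   # stand-in for the plate mass
    h_ratio, n_vf = x[..., 0], x[..., 1]
    return 50.0 * h_ratio * (1.0 + 0.2 * n_vf)

def fitness(x, w=0.5):          # weighted sum: maximize frequency, minimize mass
    return -w * freq(x) / 100.0 + (1.0 - w) * mass(x) / 50.0

lb, ub = np.array([0.05, 0.0]), np.array([0.2, 5.0])
rng = np.random.default_rng(0)
pos = rng.uniform(lb, ub, (30, 2)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), fitness(pos)
gbest = pbest[np.argmin(pbest_f)]

for it in range(100):           # standard PSO velocity/position update
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    f = fitness(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("optimal (thickness ratio, volume fraction index):", gbest.round(3))
```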

Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization

Procedia PDF Downloads 361
3137 Mitigating Effects of Cash Transfers in the Face of Socioeconomic External Shocks: Evidence from Egypt

Authors: Basma Yassa

Abstract:

Evidence on the effectiveness of cash transfers in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been used continually, especially in Egypt, as the main social protection tool in response to recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. In this paper, the results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to significant declines in several household-level welfare outcomes, and that Takaful cash transfers have a significant positive impact in mitigating the negative shock impacts, especially on households' debt incidence, debt levels, and asset ownership, though not necessarily on food and non-food expenditure levels. The results indicate large, significant effects: debt incidence fell by up to 12.4 percent and debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found for asset ownership, as the RDD model shows significant positive effects on total and productive asset ownership, though the model failed to detect positive impacts on per capita food and non-food expenditures. Further extensions are in progress to compare these results with those of a difference-in-differences (DID) model using the nationally representative ELMPS panel (2018/2024 rounds). Overall, our initial analysis suggests that conditional cash transfers are effective in buffering negative shock impacts on certain welfare indicators, even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context.
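
A minimal sketch of the sharp RDD logic on simulated data is shown below: eligibility flips at a score cutoff, and the treatment effect is the jump in the outcome at the cutoff, estimated by local linear regression with separate slopes on each side; the score, outcome model, effect size, and bandwidth are illustrative assumptions, not the Egyptian data.

```python
# Hedged sketch: sharp regression discontinuity estimate of a transfer's effect.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, effect = 4000, 0.0, -0.12             # e.g. transfers lower debt incidence
score = rng.normal(0, 1, n)                      # centered eligibility score
treated = (score < cutoff).astype(float)         # poorer households are eligible
outcome = 0.5 + 0.08 * score + effect * treated + rng.normal(0, 0.2, n)

h = 0.5                                          # bandwidth around the cutoff
w = np.abs(score - cutoff) < h
X = np.column_stack([np.ones(w.sum()), treated[w], score[w] - cutoff,
                     treated[w] * (score[w] - cutoff)])   # separate slopes per side
beta = np.linalg.lstsq(X, outcome[w], rcond=None)[0]
print("estimated jump at cutoff:", round(beta[1], 3), "(true:", effect, ")")
```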

Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design

Procedia PDF Downloads 39
3136 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress

Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin

Abstract:

Stress has deleterious effects at the physical, psychological, and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they do not integrate the different strategies in a coherent way, nor do they take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model makes it possible to understand under what circumstances the strategies are effective or ineffective, and how one might use them more wisely. It includes specific strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), specific strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), and so-called general strategies (Well-being and Avoidance). This study presents the development and validation of an instrument measuring coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content; 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Concerning reliability, the indices for inter-rater agreement (Krippendorff's alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using Mplus supports a six-factor model, and the results suggest that this configuration is superior to alternative models; the correlations show that the factors are only loosely related to each other. Overall, the analyses suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for stress and thus prevent mental health issues.
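
For the internal-consistency step mentioned above, Cronbach's alpha for a k-item subscale is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), as in the sketch below; the simulated three-item responses are illustrative.

```python
# Hedged sketch: Cronbach's alpha for one subscale of the questionnaire.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(0, 1, 300)                          # one coping-strategy factor
items = latent[:, None] + rng.normal(0, 0.8, (300, 3))  # three items loading on it
print("alpha:", round(cronbach_alpha(items), 2))
```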

Keywords: acceptance, coping strategies, stress, validation process

Procedia PDF Downloads 333
3135 Business Logic and Environmental Policy, a Research Agenda for the Business-to-Citizen Business Model

Authors: Mats Nilsson

Abstract:

The European electricity markets have been changing from regulated markets to, in some places, deregulated markets, and are now experiencing a strong influence from renewable support systems. Firms that rely on subsidies have a different business logic than firms acting in a market context. The article proposes that an offshoot of the regular business models, the business-to-citizen model, should be used. The case of the European electricity market frames the concept of the business-to-citizen business model, and a research agenda for this concept is outlined.

Keywords: business logic, business model, subsidies, business-to-citizen

Procedia PDF Downloads 457
3134 Optimization of Machining Parameters of Wire Electric Discharge Machining (WEDM) of Inconel 625 Super Alloy

Authors: Amitesh Goswami, Vishal Gulati, Annu Yadav

Abstract:

In this paper, WEDM is used to investigate the machining characteristics of the Inconel 625 alloy. The machining characteristics, namely material removal rate (MRR) and surface roughness (SR), are investigated, along with analysis of the machined surface microstructure using SEM and of its composition using EDS. Taguchi's L27 orthogonal array design was used, with six input parameters: pulse-on time (Ton), pulse-off time (Toff), spark gap set voltage (SV), peak current (IP), wire feed (WF), and wire tension (WT). Pulse-on time (Ton) and spark gap set voltage (SV) were found to be the most significant parameters affecting material removal rate (MRR) and surface roughness (SR). Microstructure analysis of the workpiece was carried out using a Scanning Electron Microscope (SEM). It was observed that variations in pulse-on time and pulse-off time cause varying discharge energy, as a result of which deep craters, micro-cracks, and larger or smaller amounts of debris were formed; these observations helped in studying the effects of pulse-on and pulse-off time on MRR and SR. Energy Dispersive Spectrometry (EDS) was also performed for compositional analysis, and it was observed that copper and zinc, which were initially not present in the Inconel 625, had migrated onto the material surface from the brass wire electrode during machining.
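
The Taguchi analysis behind such findings is sketched below: per-level signal-to-noise ratios (larger-the-better for MRR, smaller-the-better for SR) are averaged, and the factor with the widest S/N range is the most significant. The response values are placeholders, not the paper's L27 runs.

```python
# Hedged sketch: Taguchi signal-to-noise (S/N) analysis for one factor.
import numpy as np

def sn_larger_better(y):   # for responses to maximize, e.g. MRR
    return -10 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2))

def sn_smaller_better(y):  # for responses to minimize, e.g. SR
    return -10 * np.log10(np.mean(np.asarray(y, float) ** 2))

# hypothetical MRR responses grouped by the 3 levels of pulse-on time (Ton)
mrr_by_ton = {1: [4.1, 4.4, 3.9], 2: [6.0, 6.3, 5.8], 3: [8.2, 7.9, 8.5]}
sn = {lvl: sn_larger_better(y) for lvl, y in mrr_by_ton.items()}
print("mean S/N per Ton level:", {k: round(v, 2) for k, v in sn.items()})
print("S/N range (factor effect):", round(max(sn.values()) - min(sn.values()), 2))
```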

Keywords: MRR, SEM, SR, taguchi, Wire Electric Discharge Machining

Procedia PDF Downloads 348
3133 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments

Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard

Abstract:

With the widespread adoption of Internet-connected devices and the prevalence of Internet of Things (IoT) applications, there is increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas that machine learning techniques can help advance are varied and ever-evolving; classifying smart home inhabitants' Activities of Daily Living (ADLs) is one prominent example. The ability of a machine learning technique to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain. Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare the performance of the identified techniques: AdaBoost, the Cortical Learning Algorithm (CLA), decision trees, the Hidden Markov Model (HMM), the Multi-layer Perceptron (MLP), the structured perceptron, and Support Vector Machines (SVM). Our results show significant performance differences between the evaluated techniques; overall, neural-network-based techniques showed superiority over the others.
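
A minimal sketch of this evaluation protocol, training several of the named classifiers on one shared dataset and split, is given below; the synthetic features stand in for the smart-home datasets, and CLA and the structured perceptron are omitted for lack of a scikit-learn implementation.

```python
# Hedged sketch: comparing classifiers on one ADL-style dataset with a fixed split.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)   # 5 activity classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"AdaBoost": AdaBoostClassifier(random_state=0),
          "Decision Tree": DecisionTreeClassifier(random_state=0),
          "MLP": MLPClassifier(max_iter=1000, random_state=0),
          "SVM": SVC(random_state=0)}
for name, model in models.items():
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```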

Keywords: activities of daily living, classification, internet of things, machine learning, prediction, smart home

Procedia PDF Downloads 346
3132 Seismic Hazard Analysis for a Multi Layer Fault System: Antalya (SW Turkey) Example

Authors: Nihat Dipova, Bulent Cangir

Abstract:

This article presents the results of a probabilistic seismic hazard analysis (PSHA) for Antalya (SW Turkey). Southwest Turkey is characterized by large earthquakes resulting from the continental collision between the African, Arabian, and Eurasian plates, and from crustal faults. Earthquakes around the study area are grouped into two classes: crustal earthquakes (depth 0-50 km) and subduction-zone earthquakes (50-140 km). The maximum observed magnitude of subduction earthquakes is Mw = 6.0; the maximum magnitude of crustal earthquakes is Mw = 6.6. The sources of crustal earthquakes are faults related to the Isparta Angle and Cyprus Arc tectonic structures. A new earthquake catalogue for Antalya, with a unified moment magnitude scale, was prepared, and the seismicity of the area around Antalya city was evaluated by defining the 'a' and 'b' parameters of the Gutenberg-Richter recurrence relationship. The standard Cornell-McGuire method was used for the hazard computation, with the CRISIS2007 software. The attenuation relationships proposed by Chiou and Youngs (2008) were used for earthquakes at 0-50 km depth, and those of Youngs et al. (1997) for deep subduction earthquakes. Finally, a seismic hazard map was prepared for peak horizontal acceleration on a uniform site condition of firm rock (average shear wave velocity of about 1130 m/s) at a hazard level of 10% probability of exceedance in 50 years.
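
The recurrence step of such a PSHA is sketched below: the Gutenberg-Richter law log10 N(>=M) = a - bM is fitted to a catalogue above the completeness magnitude, here with Aki's maximum-likelihood b-value; the synthetic magnitudes and catalogue duration are illustrative, not the Antalya catalogue.

```python
# Hedged sketch: Gutenberg-Richter 'a' and 'b' estimation for one source zone.
import numpy as np

rng = np.random.default_rng(0)
b_true, Mc = 1.0, 4.0
mags = Mc + rng.exponential(scale=1 / (b_true * np.log(10)), size=800)  # G-R sample

b_hat = np.log10(np.e) / (mags.mean() - Mc)                   # Aki (1965) estimator
n_years = 50.0                                                # catalogue duration
a_hat = np.log10((mags >= Mc).sum() / n_years) + b_hat * Mc   # annual-rate 'a'
print(f"b = {b_hat:.2f}, a = {a_hat:.2f}")
print("expected events/yr with M>=6:", round(10 ** (a_hat - b_hat * 6.0), 3))
```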

Keywords: Antalya, peak ground acceleration, seismic hazard assessment, subduction

Procedia PDF Downloads 367
3131 Luxury in Fashion: Visual Analysis on Bag Advertising

Authors: Lama Ajinah

Abstract:

Luxury brands have witnessed continuous growth, following women's desire for individual distinctiveness and social standing. Bags are a woman's best friend, whether for aesthetic or functional purposes, when she leaves her home for leisure or work; handbags are one way women constantly aspire to distinguish themselves while reflecting their wealth. Consequently, consumers' demand for, and attraction to, the dazzle of luxury brands for personal pleasure and social status has flourished. The literature includes visual analyses of luxury brands, but a focus on bags has not been discussed in detail. Hence, a deep analysis is dedicated to these two aspects, showcasing examples of high-end bag advertising. The research is conducted to understand the advertising strategies used to promote luxury products. Furthermore, the paper explores the definition of the term luxury, the conditions in which it is used, and the visual language used along with the term. As luxury is an indicator of superior satisfaction, it operates on two levels: personal and social. The examples of luxury brand ads are selected from the last five years to uncover the latest and most common strategies used to promote luxury brands. The methods employed in this paper consist of a literature review, semiotic analysis, and content analysis. The researcher concludes by revealing the methods used in the advertising and categorizing them into various themes.

Keywords: advertising, brands, fashion, graphic design, luxury, semiotic analysis, semiology, visual analysis, visual communication

Procedia PDF Downloads 239
3130 Optimal Wheat Straw to Bioethanol Supply Chain Models

Authors: Abdul Halim Abdul Razik, Ali Elkamel, Leonardo Simon

Abstract:

Wheat straw is one of the alternative feedstocks that may be utilized for bioethanol production, especially when sustainability criteria are the major concern. To increase market competitiveness, an optimal supply chain plays an important role, since wheat straw is a seasonal agricultural residue. In designing the supply chain optimization model, the economic profitability of the thermochemical and biochemical conversion route options was considered. Torrefied pelletization with the gasification route was found to be the most profitable option for producing bioethanol from the lignocellulosic wheat straw source.
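
The route-selection core of such a model can be written as a small linear program, as sketched below: allocate straw tonnage across conversion routes to maximize profit subject to supply and capacity limits. All yields, prices, and capacities are illustrative assumptions; the paper's full model is far richer.

```python
# Hedged sketch: profit-maximizing allocation of straw across two conversion routes.
import numpy as np
from scipy.optimize import linprog

# decision variables: tonnes of straw sent to [biochemical, torrefied-pellet+gasification]
yield_l_per_t = np.array([240.0, 290.0])   # assumed bioethanol yield per route [L/t]
price = 0.60                               # assumed ethanol price [$/L]
cost_per_t = np.array([85.0, 70.0])        # assumed feedstock + processing cost [$/t]
profit_per_t = yield_l_per_t * price - cost_per_t

res = linprog(c=-profit_per_t,                       # linprog minimizes, so negate
              A_ub=[[1, 1], [0, 1]],
              b_ub=[50_000, 30_000],                 # straw supply, gasifier capacity
              bounds=[(0, None), (0, None)])
print("tonnes per route:", res.x.round(0), "profit: $", round(-res.fun, 0))
```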

Keywords: bio-ethanol, optimization, supply chain, wheat straw

Procedia PDF Downloads 729
3129 The Impact of Artificial Intelligence on Digital Factory

Authors: Mona Awad Wanis Gad

Abstract:

The process of factory planning has changed a great deal, particularly when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Regulations in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity, and Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning foundation for rebuilding measures and becomes a critical tool. Furthermore, digital building models are increasingly being used in factories to support facility management and production processes. First, different types of digital factory models are investigated, and their properties and usability for various use cases are analyzed. Within the scope of the research, point cloud models, building information models, photogrammetry models, and models enriched with sensor data are examined. It is investigated which digital models allow a simple integration of sensor data, and where the differences lie. Finally, possible application areas of digital factory models are determined by a survey, and the respective digital factory models are assigned to these application areas. Ultimately, an application case from maintenance is selected and implemented with the help of the most suitable digital factory model. It is shown how a fully digitalized maintenance process can be supported by a digital factory model by providing information; among other functions, the digital factory model is used for indoor navigation, information provision, and the display of sensor data. In summary, the paper presents a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is shown and implemented, and the systematic selection of digital factory models with the corresponding application cases is evaluated.

Keywords: augmented reality, digital factory model, factory planning, restructuring, building information modeling, photogrammetry, maintenance

Procedia PDF Downloads 14
3128 Comparison of Different Techniques to Estimate Surface Soil Moisture

Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini

Abstract:

Land subsidence is a gradual settling or sudden sinking of the land surface due to changes that take place underground. There are different causes of land subsidence, most notably groundwater overdraft and severe weather conditions. Subsidence of the land surface due to groundwater overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers: solid particles lose buoyancy in the zone dewatered by the falling water table, and the aquifer compacts accordingly. Moreover, exploitation of groundwater may significantly change the degree of saturation of the soil layers above the water table, increasing the effective stress in these layers and causing considerable soil settlement. This study focuses on the estimation of soil moisture at the surface. Specifically, different methods for estimating the moisture content at the soil surface, an important term for solving Richards' equation and estimating the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment: due to the heterogeneity of soil type, land use, and topography, it may change considerably in space and time.

Keywords: artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil

Procedia PDF Downloads 356
3127 Good Death as Perceived by the Critically Ill Patients' Family Member

Authors: Wanlapa Kunsongkeit

Abstract:

When people get sick, they go to the hospital for treatment. In cases of severe illness, there may be no hope of recovery, and patients face anxiety and fear; such suffering of the mind until the time of death is called a bad death. These feelings also directly affect family members, who are the patients' loved ones and significant persons, and who can help the dying patients to have a good death. In the literature, many studies have focused on good death as perceived by patients and nurses; little is known about good death as perceived by family members. Therefore, this qualitative study, based on Heideggerian phenomenology, aimed to describe a good death as perceived by the family members of critically ill patients. Five informants, family members of critically ill patients at a hospital in Chonburi, were purposively selected. Data were collected by in-depth interview, observation, and critical reflection from January 2014 to March 2014. The steps of Cohen, Kahn, and Steeves (2000) guided the data analysis, and trustworthiness was maintained throughout the study following Lincoln and Guba's guidelines. Four themes emerged: no suffering, acceptance of imminent death, preparing for death, and being with the family. These findings provide a deep understanding of a good death as perceived by the critically ill patients' family members. They can serve as basic information for nurses providing good-death nursing care and as a starting point for further development of knowledge regarding good-death nursing care.

Keywords: good death, family member, critically ill patient, phenomenology

Procedia PDF Downloads 432
3126 A Heteroskedasticity Robust Test for Contemporaneous Correlation in Dynamic Panel Data Models

Authors: Andreea Halunga, Chris D. Orme, Takashi Yamagata

Abstract:

This paper proposes a heteroskedasticity-robust Breusch-Pagan test of the null hypothesis of zero cross-section (or contemporaneous) correlation in linear panel-data models, without necessarily assuming independence of the cross-sections. The procedure allows for either fixed, strictly exogenous and/or lagged dependent regressor variables, as well as quite general forms of both non-normality and heteroskedasticity in the error distribution. The asymptotic validity of the test procedure is predicated on the number of time series observations, T, being large relative to the number of cross-section units, N, in that: (i) either N is fixed as T→∞; or, (ii) N²/T→0, as both T and N diverge, jointly, to infinity. Given this, it is not expected that asymptotic theory would provide an adequate guide to finite sample performance when T/N is 'small'. Because of this, we also propose, and establish the asymptotic validity of, a number of wild bootstrap schemes designed to provide improved inference when T/N is small. Across a variety of experimental designs, a Monte Carlo study suggests that the predictions from asymptotic theory do, in fact, provide a good guide to the finite sample behaviour of the test when T is large relative to N. However, when T and N are of similar orders of magnitude, discrepancies between the nominal and empirical significance levels occur, as predicted by the first-order asymptotic analysis. On the other hand, for all the experimental designs, the proposed wild bootstrap approximations do improve agreement between nominal and empirical significance levels when T/N is small, with a recursive-design wild bootstrap scheme performing best, in general, and providing quite close agreement between the nominal and empirical significance levels of the test even when T and N are of similar size. Moreover, in comparison with the wild bootstrap 'version' of the original Breusch-Pagan test, our experiments indicate that the corresponding version of the heteroskedasticity-robust Breusch-Pagan test is reliable. As an illustration, the proposed tests are applied to a dynamic growth model for a panel of 20 OECD countries.
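
The logic of such a statistic is sketched below on simulated null data: unit-by-unit OLS residuals are formed, every pairwise residual cross-product is studentized so that heteroskedasticity does not distort its scale, and the sum is referred to a chi-squared distribution with N(N-1)/2 degrees of freedom. This is my assumed reading of the robust statistic's general form; consult the paper for its exact definition and for the wild bootstrap schemes.

```python
# Hedged sketch: a robust Breusch-Pagan-type statistic for cross-section correlation.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
N, T = 5, 200
x = rng.standard_normal((N, T))
e = rng.standard_normal((N, T)) * (1 + 0.5 * np.abs(x))   # heteroskedastic, uncorrelated
y = 1.0 + 2.0 * x + e

# unit-by-unit OLS residuals
resid = np.empty((N, T))
for i in range(N):
    X = np.column_stack([np.ones(T), x[i]])
    resid[i] = y[i] - X @ np.linalg.lstsq(X, y[i], rcond=None)[0]

stat = 0.0
for i in range(N):
    for j in range(i + 1, N):
        num = (resid[i] * resid[j]).sum() ** 2
        den = (resid[i] ** 2 * resid[j] ** 2).sum()    # robust studentization (assumed form)
        stat += num / den
df = N * (N - 1) // 2
print("statistic:", round(stat, 2), "p-value:", round(1 - chi2.cdf(stat, df), 3))
```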

Keywords: cross-section correlation, time-series heteroskedasticity, dynamic panel data, heteroskedasticity robust Breusch-Pagan test

Procedia PDF Downloads 422