Search results for: strain-based damage model

9492 Spectroscopic Relation between Open Cluster and Globular Cluster

Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma

Abstract:

Curiosity about space and its mysteries has always been a driving force of human inquiry, and the urge to uncover the secrets of stars and their unusual behaviour has long motivated stellar research. Just as humankind lives in communities and states, stars live in groups called clusters. Clusters are divided into two types: open clusters and globular clusters. An open cluster is a group of up to a few thousand stars that formed from the same giant molecular cloud and mostly contains Population I (metal-rich) stars, whereas a globular cluster is a roughly spherical group of more than thirty thousand stars that orbits a galactic centre and mainly contains Population II (metal-poor) stars. This paper presents a spectroscopic and photometric investigation of the globular clusters M92 and NGC419 and the open clusters M34 and IC2391 in different colour bands, using software such as the VIREO virtual observatory, Aladin, CMUNIWIN, and MS-Excel. The resulting Hertzsprung-Russell (HR) diagram is assessed against classical cosmological models such as the Einstein and De Sitter models and Planck survey data for a better age estimation of the respective clusters. Colour-magnitude diagrams of these clusters were obtained by photometric analysis in the g and r bands and then transformed into the B and V bands, which helps unravel the nature of the stars present in the individual clusters.
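
To illustrate the band-transformation step described above, the sketch below converts g, r photometry into B, V using one published set of linear stellar transformations (Jester et al. 2005) and plots a colour-magnitude diagram; the coefficients come from that reference, and the sample magnitudes are invented for illustration, not taken from the clusters studied here.

```python
import numpy as np
import matplotlib.pyplot as plt

def gr_to_BV(g, r):
    """Convert SDSS-like g, r magnitudes to Johnson B, V.

    Coefficients are the stellar transformations of Jester et al. (2005),
    quoted from that reference rather than from this study.
    """
    B = g + 0.39 * (g - r) + 0.21
    V = g - 0.59 * (g - r) - 0.01
    return B, V

# Invented photometry for ~200 cluster stars (illustration only).
rng = np.random.default_rng(0)
g = rng.uniform(14.0, 20.0, 200)
r = g - rng.uniform(-0.2, 1.2, 200)   # assumed g-r colours

B, V = gr_to_BV(g, r)

# Colour-magnitude diagram: V against B-V, bright stars at the top.
plt.scatter(B - V, V, s=5)
plt.gca().invert_yaxis()
plt.xlabel("B - V")
plt.ylabel("V")
plt.title("Illustrative colour-magnitude diagram")
plt.show()
```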

Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model

Procedia PDF Downloads 207
9491 Early-Onset Asthma and Early Smoking Increase Risk of Bipolar Disorder in Adolescents and Young Adults

Authors: Meng-Huan Wu, Wei-Er Wang, Tsu-Nai Wang, Wei-Jian Hsu, Vincent Chin-Hung Chen

Abstract:

Objective: Studies have reported a strong link between asthma and bipolar disorder. We conducted a 17-year community-based large cohort study to examine the relationship between asthma, early smoking initiation, and bipolar disorder during adolescence and early adulthood. Methods: A total of 162,766 participants aged 11–16 years were categorized into asthma and non-asthma groups at baseline and compared within the observation period. Covariates during late childhood or adolescence included parental education, cigarette smoking by family members of participants, and participant’s gender, age, alcohol consumption, smoking, and exercise habits. Data for urbanicity, prednisone use, allergic comorbidity, and Charlson comorbidity index were acquired from the National Health Insurance Research Database. The Cox proportional-hazards model was used to evaluate the association between asthma and bipolar disorder. Results: Our findings revealed that asthma increased the risk of bipolar disorder after adjustment for key confounders in the Cox proportional hazard regression model (adjusted HR: 1.31, 95% CI: 1.12-1.53). Hospitalizations or visits to the emergency department for asthma exhibited a dose–response effect on bipolar disorder (adjusted HR: 1.59, 95% CI: 1.22-2.06). Patients with asthma with onset before 20 years of age who smoked during late childhood or adolescence had the greatest risk for bipolar disorder (adjusted HR: 3.10, 95% CI: 1.29-7.44). Conclusions: Patients newly diagnosed with asthma had a 1.3 times higher risk of developing bipolar disorder. Smoking during late childhood or adolescence increases the risk of developing bipolar disorder in patients with asthma.
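
As an illustration of the kind of survival analysis described above, the sketch below fits a Cox proportional-hazards model with the lifelines Python library; the miniature dataset and column names are hypothetical stand-ins for the cohort variables named in the abstract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: one row per participant, follow-up time in years,
# an event flag (1 = bipolar disorder diagnosis), and a few covariates
# mirroring those named in the abstract. All values are invented.
df = pd.DataFrame({
    "followup_years": [17.0, 9.5, 17.0, 4.2, 17.0, 12.1, 6.3, 17.0],
    "bipolar_event":  [0, 1, 0, 1, 0, 1, 1, 0],
    "asthma":         [1, 1, 0, 1, 0, 0, 1, 0],
    "smoking":        [0, 1, 1, 0, 1, 0, 1, 0],
    "age_baseline":   [12, 14, 13, 16, 11, 15, 12, 13],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="bipolar_event")
cph.print_summary()   # the exp(coef) column gives adjusted hazard ratios
```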

Keywords: adolescence, asthma, smoking, bipolar disorder, early adulthood

Procedia PDF Downloads 316
9490 Parallel Opportunity for Water Conservation and Habitat Formation on Regulated Streams through Formation of Thermal Stratification in River Pools

Authors: Todd H. Buxton, Yong G. Lai

Abstract:

Temperature management in regulated rivers can involve significant expenditures of water to meet the cold-water requirements of species in summer. For this purpose, flows released from Lewiston Dam on the Trinity River in Northern California are 12.7 cms, with temperatures around 11°C, in July through September to provide adult spring Chinook cold water to hold in deep pools and mature until spawning in fall. The releases are more than double the flow, and around 10°C colder, than the natural conditions before the dam was built. The high, cold releases provide springers the habitat they require but may suppress the stream food base and limit future populations of salmon by reducing juvenile fish size and survival to adulthood, given the positive relationship between the two. Field and modeling research was undertaken to explore whether lowering summer releases from Lewiston Dam may promote thermal stratification in river pools so that both the cold-water needs of adult salmon and the warmer water requirements of other organisms in the stream biome may be met. For this investigation, a three-dimensional (3D) computational fluid dynamics (CFD) model was developed and validated with field measurements in two deep pools on the Trinity River. Modeling and field observations were then used to identify the flows and temperatures that may form and maintain thermal stratification under different meteorological conditions. Under low flows, a pool was found to be well mixed and thermally homogeneous until temperatures began to stratify shortly after sunrise. Stratification then strengthened through the day until shading from trees and mountains cooled the inlet flow and decayed the thermal gradient, which collapsed shortly before sunset and returned the pool to a well-mixed state. This diurnal process of stratification formation and destruction was closely predicted by the 3D CFD model. Both the model and field observations indicate that thermal stratification maintained the coldest temperatures of the day at ≥2 m depth in a pool and provided water that was around 8°C warmer in the upper 2 m of the pool. Results further indicate that the stratified pool under low flows provided almost the same daily average temperatures as when flows were an order of magnitude higher and stratification was prevented, indicating that significant water savings may be realized in regulated streams while also providing the diversity in water temperatures the ecosystem requires. With confidence in the 3D CFD model established, it is now being applied to a dozen pools in the Trinity River to understand how pool bathymetry influences thermal stratification under variable flows and diurnal temperature variations. This knowledge will be used to expand the results to 52 pools in a 64 km reach below Lewiston Dam that meet the depth criterion (≥2 m) for spring Chinook holding. From this, rating curves will be developed to relate discharge to the volume of pool habitat that provides springers the temperature (<15.6°C daily average), velocity (0.15 to 0.4 m/s), and depths that accommodate the escapement target for spring Chinook (6,000 adults) under the maximum fish densities measured in other streams (3.1 m³/fish) during the holding time of year (May through August). Flow releases that meet these goals will be evaluated for water savings relative to the current flow regime and for their influence on indicator species, including the foothill yellow-legged frog, and on aspects of the stream biome that support salmon populations, including macroinvertebrate production and juvenile Chinook growth rates.
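
As a back-of-the-envelope check on the holding-habitat targets quoted above, the snippet below computes the pool volume implied by the escapement target and the fish-density criterion, using only numbers stated in the abstract.

```python
# Holding-habitat volume implied by the abstract's criteria.
escapement_target = 6000   # adult spring Chinook
density = 3.1              # m^3 of pool habitat per fish
required_volume = escapement_target * density
print(f"Required pool habitat: {required_volume:,.0f} m^3")   # 18,600 m^3

# Rating curves would then relate discharge to the volume of pool habitat
# meeting all criteria simultaneously (<15.6 C daily average, 0.15-0.4 m/s,
# depth >= 2 m) across the 52 candidate pools.
```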

Keywords: 3D CFD modeling, flow regulation, thermal stratification, Chinook salmon, foothill yellow-legged frogs, water management

Procedia PDF Downloads 45
9489 How Envisioning Process Is Constructed: An Exploratory Research Comparing Three International Public Televisions

Authors: Alexandre Bedard, Johane Brunet, Wendellyn Reid

Abstract:

Public television constantly tries to maintain and develop its audience, and to achieve those goals it needs a strong and clear vision. Vision, or envisioning, is a multidimensional process: it is simultaneously a conduit that orients and fixes the future, an idea that comes before the strategy, and, from a business perspective, a means by which action is accomplished. Vision is most often studied in a prescriptive and instrumental manner. Based on our understanding of the literature, we were able to explain how envisioning, as a process, is a creative one; it takes place in the mind and uses wisdom and intelligence through a process of evaluation, analysis, and creation. Through an aggregation of the literature, we built a model of the envisioning process, based on past experiences, perceptions, and knowledge and influenced by three layers of context: the individual, the organization, and the environment. In exploratory research in which vision was deciphered through discourse, using a qualitative and abductive approach with a grounded theory perspective, we studied three extreme cases, drawing on eighteen interviews with experts, leaders, politicians, and actors of the industry, amounting to more than twenty hours of interviews in three different countries. We compared the strategy, the business model, and the political and legal forces, and also looked at the history of each industry from an inertial point of view. Our analysis of the data revealed that a legitimacy effect due to the audience, innovation, and the creativity of the institutions was the cornerstone of what influences the envisioning process. This allowed us to identify how different the process is for the Canadian, French, and UK public broadcasters, although we concluded that all three had a socially constructed vision for their future, based on stakeholder management and an emerging role for managers: idea brokers.

Keywords: envisioning process, international comparison, television, vision

Procedia PDF Downloads 114
9488 World Agricultural Commodities Prices Dynamics and Volatilities Impacts on Commodities Importation and Food Security in West African Economic and Monetary Union Countries

Authors: Baoubadi Atozou, Koffi Akakpo

Abstract:

Since the decade of the 2000s, the use of foodstuffs such as corn, wheat, and soybeans in biofuel production has been growing sharply in the United States, Canada, and Europe. Thus, prices for these agricultural products are rising in the world market. These cereals are the most important source of calorific energy for the populations of the West African Economic and Monetary Union (WAEMU) member countries, which are highly dependent on imports of most of these products. Rising prices can therefore have an important impact on import levels and consequently on food security in these countries. This study aims to analyze the interrelationship between the prices of these commodities and their volatilities, and their effects on imports of these agricultural products by each WAEMU member country. The Autoregressive Distributed Lag (ARDL) model, the multivariate GARCH model, and the Granger causality test are used in this investigation. The results show that import levels are highly and significantly sensitive to price changes as well as to price volatility. In both the short term and the long term, there is a significant relationship between the prices of these products, and in general a positive relationship between their volatilities. These volatilities have negative effects on the level of imports. The market characteristics affect food security in these countries, especially access to food for vulnerable and low-income populations. Policy makers must adopt viable strategies to increase agricultural production and limit dependence on imports.
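
As a sketch of one of the econometric tools named above, the snippet below runs a Granger causality test with statsmodels on a hypothetical price/import pair; the series are synthetic and the lag order is an arbitrary illustrative choice (the ARDL and multivariate GARCH steps would be fitted separately, e.g., with statsmodels' ARDL class and the arch package).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic monthly series: a world cereal price and a country's import
# volume that responds to last month's price (illustration only).
rng = np.random.default_rng(1)
n = 120
price = np.cumsum(rng.normal(0, 1, n)) + 100
imports = 50 - 0.3 * np.roll(price, 1) + rng.normal(0, 2, n)

df = pd.DataFrame({"imports": imports[1:], "price": price[1:]})

# Test whether price Granger-causes imports (lag order chosen arbitrarily).
grangercausalitytests(df[["imports", "price"]], maxlag=4)
```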

Keywords: price volatility, import of agricultural products, food security, WAEMU

Procedia PDF Downloads 172
9487 Study on the Spatial Evolution Characteristics of Urban Agglomeration Integration in China: The Case of Chengdu-Chongqing Urban Agglomeration

Authors: Guoqin Ge, Minhui Huang, Yazhou Zhou

Abstract:

The growth of the Chengdu-Chongqing urban agglomeration has been designated as a national strategy in China. Analyzing its spatial evolution characteristics is crucial for devising relevant development strategies. This paper enhances the gravitational model by using temporal distance as a factor. It applies this improved model to assess the economic interconnection and concentration level of each geographical unit within the Chengdu-Chongqing urban agglomeration between 2011 and 2019. On this basis, this paper examines the spatial correlation characteristics of economic agglomeration intensity and urban-rural development equalization by employing spatial autocorrelation analysis. The study findings indicate that the spatial integration in the Chengdu-Chongqing urban agglomeration is currently in the "point-axis" development stage. The spatial organization structure is becoming more flattened, and there is a stronger economic connection between the core of the urban agglomeration and the peripheral areas. The integration of the Chengdu-Chongqing urban agglomeration is currently hindered by conflicting interests and institutional heterogeneity between Chengdu and Chongqing. Additionally, the connections between the relatively secondary spatial units are largely loose and weak. The strength and scale of economic ties and the level of urban-rural equilibrium among spatial units within the Chengdu-Chongqing urban agglomeration have increased, but regional imbalances have continued to widen, and such positive and negative changes have been characterized by the spatial and temporal synergistic evolution of the "core-periphery". Ultimately, this paper presents planning ideas for the future integration development of the Chengdu-Chongqing urban agglomeration, drawing from the findings.
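
The abstract does not give the full specification of the improved gravity model; a minimal sketch of the general idea, with travel time substituted for spatial distance, might look as follows (the functional form, exponent, and all values are assumptions for illustration):

```python
def economic_linkage(mass_i, mass_j, travel_time_hours, exponent=2.0):
    """Gravity-type economic linkage between two spatial units.

    mass_* is a size measure (e.g., combining population and GDP); travel
    time replaces geometric distance, following the abstract's temporal-
    distance modification. The exponent of 2 is a conventional choice.
    """
    return (mass_i * mass_j) / travel_time_hours ** exponent

# Invented values for the Chengdu-Chongqing pair (illustration only).
m_chengdu, m_chongqing = 8.5, 9.0   # arbitrary mass units
t_rail = 1.5                        # hours by high-speed rail
print(economic_linkage(m_chengdu, m_chongqing, t_rail))
```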

Keywords: integration, planning strategy, space organization, space evolution, urban agglomeration

Procedia PDF Downloads 36
9486 Electronic Device Robustness against Electrostatic Discharges

Authors: Clara Oliver, Oibar Martinez

Abstract:

This paper is intended to reveal the severity of electrostatic discharge (ESD) effects in electronic and optoelectronic devices by performing sensitivity tests based on the Human Body Model (HBM) standard. We explain the HBM standard in detail, together with the typical failure modes associated with electrostatic discharges. In addition, a prototype electrostatic charge generator featuring a compact high-voltage source has been designed, fabricated, and verified to stress electronic devices. This prototype is inexpensive and enables a battery of pre-compliance tests aimed at detecting unexpected weaknesses to static discharges at the component level. Tests with different devices were performed to illustrate the behavior of the proposed generator. A set of discharges was applied according to the HBM standard to commercially available bipolar transistors, complementary metal-oxide-semiconductor transistors, and light-emitting diodes. It is observed that high current and voltage ratings in electronic devices do not necessarily guarantee that a device will withstand high levels of electrostatic discharge. We have also compared the results obtained by performing the sensitivity tests based on HBM with a real discharge generated by a human. For this purpose, the charge accumulated in the person is monitored, and a direct discharge against the devices is generated by touching them. Every test has been performed under controlled relative humidity conditions. This paper should be of interest to research teams involved in the development of electronic and optoelectronic devices who need to verify the reliability of their devices in terms of robustness to electrostatic discharges.
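
For context, the HBM standard models the charged human body as a 100 pF capacitor discharging through a 1.5 kΩ resistor into the device under test; the sketch below computes the resulting ideal first-order discharge waveform (the component values are those of the standard network, while the waveform neglects parasitics and the paper's actual generator design):

```python
import numpy as np

# Human Body Model discharge network: 100 pF through 1.5 kOhm.
C = 100e-12    # farads
R = 1.5e3      # ohms
V0 = 2000.0    # precharge voltage for a 2 kV HBM stress level

tau = R * C                        # ~150 ns time constant
t = np.linspace(0, 5 * tau, 500)
i = (V0 / R) * np.exp(-t / tau)    # ideal exponential discharge current

print(f"Peak current: {V0 / R:.2f} A, time constant: {tau * 1e9:.0f} ns")
# -> Peak current: 1.33 A, time constant: 150 ns
```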

Keywords: human body model, electrostatic discharge, sensitivity tests, static charge monitoring

Procedia PDF Downloads 129
9485 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application

Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal

Abstract:

This study proposes four jet fuel surrogates (S1, S2, S3, and S4) with a careful selection of seven large hydrocarbon fuel components, ranging from C₉ to C₁₆, of higher molecular weight and higher boiling point, matching the molecular size distribution of actual jet fuel. Each surrogate is composed of seven components: n-propylcyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄), and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was developed by two approaches. The first is a decoupling methodology that combines a C₄-C₁₆ skeletal mechanism for the oxidation of heavy hydrocarbons with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can thereby be used in computational fluid dynamics (CFD) simulation. Extensive validation was performed for each individual component, including ignition delay time, species concentration profiles, and laminar flame speed, against various fundamental experiments under wide operating conditions, and for the blended mixtures. Among the surrogates, S1 was extensively validated against experimental data from shock tubes, rapid compression machines, jet-stirred reactors, counterflow flames, and premixed laminar flames over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates (S2, S3, and S4) were validated against shock-tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 and GTL, IPK, and RP-3, respectively. Based on the newly proposed HyChem model, the second approach, another four surrogates with similar components and compositions were developed, and parallel validation data were used as for the previously developed surrogates, but at high-temperature conditions only. After testing the mechanism prediction performance of the surrogates developed by the decoupling methodology, a comparison was made with the results of the surrogates developed with the HyChem model. All four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model has great potential for developing oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
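
As a hedged illustration of how such a skeletal mechanism is exercised, the script below computes a constant-volume ignition delay with Cantera; it uses the bundled gri30.yaml mechanism and methane as a runnable stand-in, since the 128-species surrogate mechanism itself is not distributed with the abstract.

```python
import numpy as np
import cantera as ct

# Constant-volume ignition-delay calculation. gri30.yaml (methane) ships
# with Cantera and stands in for the 128-species surrogate mechanism,
# which is not available here; swap in the real mechanism file to use it.
gas = ct.Solution("gri30.yaml")
gas.TP = 1200.0, 20.0 * ct.one_atm
gas.set_equivalence_ratio(1.0, "CH4", "O2:1.0, N2:3.76")

reactor = ct.IdealGasReactor(gas)
sim = ct.ReactorNet([reactor])

# March in time; take the peak rate of temperature rise as ignition.
t, states = 0.0, ct.SolutionArray(gas, extra=["t"])
while t < 0.05:
    t = sim.step()
    states.append(reactor.thermo.state, t=t)

dTdt = np.gradient(states.T, states.t)
print(f"Ignition delay ~ {states.t[np.argmax(dTdt)] * 1e3:.3f} ms")
```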

Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism

Procedia PDF Downloads 113
9484 Evaluation of the Efficacy and Tolerance of Gabapentin in the Treatment of Neuropathic Pain

Authors: A. Ibovi Mouondayi, S. Zaher, R. Assadi, K. Erraoui, S. Sboul, J. Daoudim, S. Bousselham, K. Nassar, S. Janani

Abstract:

INTRODUCTION: Neuropathic pain (NP) caused by damage to the somatosensory nervous system has a significant impact on quality of life and is associated with a high economic burden on the individual and society. The treatment of neuropathic pain draws on a wide range of therapeutic agents, including gabapentin. OBJECTIVE: The objective of this study was to evaluate the efficacy and tolerance of gabapentin in the treatment of neuropathic pain. MATERIALS AND METHODS: This is a monocentric, cross-sectional, descriptive, retrospective study conducted in our department over a period of 19 months, from October 2020 to April 2022. Missing parameters were collected during phone calls to the patients concerned. The diagnostic tool adopted was the DN4 questionnaire in its dialectal Arabic version. The impact of NP was assessed by the visual analog scale (VAS) for pain, sleep, and function. The impact of NP on mood was assessed by the Hospital Anxiety and Depression (HAD) score in the validated Arabic version. The exclusion criteria were patients followed up for depression and other psychiatric pathologies. RESULTS: Data for a total of 67 patients were collected. The average age was 64 years (+/- 15 years), with extremes ranging from 26 to 94 years. There were 58 women and 9 men, an M/F sex ratio of 0.15. Cervical radiculopathy was found in 21% of this population, and lumbosacral radiculopathy in 61%. Gabapentin was introduced in doses ranging from 300 to 1800 mg per day, with an average dose of 864 mg (+/- 346) per day for an average duration of 12.6 months. Before treatment, 93% of patients had non-restorative sleep (VAS>3), 54% of patients had a pain VAS greater than 5, and function was normal in only 9% of patients. The mean HAD anxiety score was 3.25 (standard deviation: 2.70), and the mean HAD depression score was 3.79 (standard deviation: 1.79). After treatment, all patients had improved sleep quality (p<0.0001). A significant difference was also noted in pain VAS, function, and HAD anxiety and depression scores. Gabapentin was stopped in cases of side effects (dizziness and drowsiness) and/or unsatisfactory response. CONCLUSION: Our data demonstrate a favorable effect of gabapentin on the management of neuropathic pain, with a significant difference before and after treatment in the quality of life of patients, associated with an acceptable tolerance profile.

Keywords: neuropathic pain, chronic pain, treatment, gabapentin

Procedia PDF Downloads 82
9483 Numerical Simulation of Phase Transfer during Cryosurgery for an Irregular Tumor Using Hybrid Approach

Authors: Rama Bhargava, Surabhi Nishad

Abstract:

The infusion of nanofluids has dramatically enhanced the heat-carrying capacity of fluids, which is applicable to many engineering and medical processes where temperatures below freezing are required. Cryosurgery is an efficient therapy for the treatment of cancer, but excessive cooling may sometimes harm nearby healthy cells. Efforts are therefore made to develop a model that can generate the required low temperatures in a controlled way. In the present study, a mathematical model is developed based on the bioheat transfer equation to simulate the heat transfer from the probe on a tumor (with an irregular domain) using a hybrid technique consisting of the element-free Galerkin method with the α-family of approximation. The probe is loaded with nanoparticles. The effects of different nanoparticles, namely Al₂O₃, Fe₃O₄, and Au, on the heat-transfer rate are obtained. It is observed that the temperature can be brought to the range of -60°C to -30°C at a faster freezing rate on the infusion of the different nanoparticles. Besides increasing the freezing rate, the volume fraction of nanoparticles can also control the size and growth of ice crystals formed during the freezing process. The study also determines the time required to achieve the desired temperature. The problem is further extended to multiple tumors of different shapes and sizes. The irregular shape of the frozen domain and the direction of ice growth are very sensitive issues, posing a challenge for simulation. Meshfree methods are among the most accurate for such problems, as the domain is naturally irregular: the discretization uses nodes only, with moving least squares (MLS) approximation used to generate the shape functions. Sufficiently accurate results are obtained.
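
The bioheat transfer equation referred to above is commonly taken in the Pennes form; the sketch below integrates a one-dimensional version with an explicit finite-difference scheme purely for illustration (the paper itself uses an element-free Galerkin discretization, phase change is neglected here, and every parameter value is an assumption):

```python
import numpy as np

# 1D Pennes bioheat equation:
#   rho_c * dT/dt = k * d2T/dx2 + w_b_cb * (T_a - T) + Q_m
# Explicit finite differences; latent heat of freezing is neglected.
rho_c = 3.6e6    # volumetric heat capacity [J/(m^3 K)] (assumed)
k = 0.5          # thermal conductivity [W/(m K)] (assumed)
w_b_cb = 40e3    # blood perfusion term [W/(m^3 K)] (assumed)
T_a = 37.0       # arterial blood temperature [C]
Q_m = 400.0      # metabolic heat generation [W/m^3] (assumed)

L, n = 0.05, 101                      # 5 cm of tissue, grid points
dx = L / (n - 1)
dt = 0.4 * rho_c * dx**2 / (2 * k)    # inside the explicit stability limit

T = np.full(n, 37.0)
T[0] = -60.0                          # cryoprobe face (assumed)
for _ in range(20000):
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T[1:-1] += dt / rho_c * (k * lap + w_b_cb * (T_a - T) + Q_m)[1:-1]
    T[0], T[-1] = -60.0, 37.0         # Dirichlet boundaries

print(f"Temperature 1 cm from the probe: {T[20]:.1f} C")
```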

Keywords: cryosurgery, EFGM, hybrid, nanoparticles

Procedia PDF Downloads 107
9482 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 122
9481 Improving the Health of Communities: Students as Leaders in a Community Clinical Health Promotion and Disease Prevention Immersion

Authors: Samawi Zepure, Beck Christine, Gallagher Peg

Abstract:

This community immersion employs the NLN Excellence Model, which challenges nursing programs to create student-centered, interactive, and innovative experiences to prepare students for roles in providing high-quality care, effective teaching, and leadership in the delivery of nursing services to individuals, families, and communities (NLN, 2006). Senior nursing students collaborate with ethnically and linguistically diverse participants at community-based sites and develop leadership roles in coordination of care linkage within the larger healthcare system, adherence, and self-care management. The immersion encourages students to develop competencies of the NLN Nursing Education Competencies Model (NLN, 2012), proposed to address fast changes in health care delivery, which include the values of caring, diversity, and holism and integrate the concepts of context and environment, relationship, and teamwork. Students engage in critical thinking and leadership as they: 1) assess health/illness beliefs, values, attitudes, and practices, explore community resources, interview key informants, and collaborate with community participants to identify learning goals; 2) develop and implement appropriate holistic health promotion and disease prevention teaching interventions promoting continuity, sustainability, and innovation; 3) evaluate interventions through participant feedback and focus groups; and 4) reflect on the immersion experience and future professional roles as advocates and citizens.

Keywords: quality of care, health of communities, students as leaders, health promotion

Procedia PDF Downloads 141
9480 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

A methodology is presented for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the minimum energy storage system power that keeps the frequency drop inside a given threshold under a given contingency is identified and compared for each, using DIgSILENT's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated with the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that a battery capacity sized for 15 minutes of operation allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
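
A minimal sketch of the two controller types, under assumed deadband, rating, and gain values (the abstract does not specify them), might look as follows:

```python
def basic_control(df, deadband=0.05, p_max=10.0, full_at=0.2):
    """Proportional response outside the deadband. df is the frequency
    deviation f - f_nom in Hz; output in MW. All values are assumptions."""
    if abs(df) <= deadband:
        return 0.0
    p = -p_max * df / full_at            # oppose the deviation
    return max(-p_max, min(p_max, p))

def hysteresis_control(df, prev, deadband=0.05, p_max=10.0):
    """Full-power response latched until |df| falls below deadband/2."""
    if abs(df) > deadband:
        return -p_max if df > 0 else p_max
    if abs(df) < deadband / 2:
        return 0.0
    return prev                          # hold previous output (hysteresis)

# Sweep over a short frequency-deviation trace (Hz, invented).
trace = [0.00, 0.03, 0.08, 0.12, 0.06, 0.02, -0.09, -0.04, 0.00]
state = 0.0
for df in trace:
    state = hysteresis_control(df, state)
    print(f"df={df:+.2f} Hz  basic={basic_control(df):+5.1f} MW  "
          f"hysteresis={state:+5.1f} MW")
```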

Keywords: battery energy storage system, electrical network frequency stability, frequency control unit, PowerFactory

Procedia PDF Downloads 111
9479 Factors Influencing Household Expenditure Patterns on Cereal Grains in Nasarawa State, Nigeria

Authors: E. A. Ojoko, G. B. Umbugadu

Abstract:

This study describes the expenditure pattern of households on millet, maize, and sorghum across income groups in Nasarawa State. A multi-stage sampling technique was used to select a sample of 316 respondents for the study. The Almost Ideal Demand System (AIDS) model was adopted. Results show that the average household size was five persons, with a dependency ratio of 52%, which plays an important role in the household's expenditure pattern by increasing the household budget share. On average, 82% of households were male-headed, with an average age of 49 years and 13 years of formal education. Results on expenditure shares show that maize has the highest expenditure share, at 38%, across the three income groups, and that most of the price effects are significantly different from zero at the 5% significance level. This shows that the low price of maize increased its demand compared to other cereals. Household size and the age of household members are major factors affecting the demand for cereals in the study. This agrees with the fact that an increased household size brings about increased consumption. The results on factors influencing preferences for cereal grains reveal that cooking quality and appearance (65.7%) were the most important factors affecting the demand for maize in the study area. This study recommends that cereal crop production be prioritized in government policies and that farming activities that help boost food security and alleviate poverty be subsidized.
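
For reference, the AIDS model of Deaton and Muellbauer expresses each commodity's budget share in terms of prices and total expenditure; in its standard form:

```latex
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\left(\frac{x}{P}\right),
\qquad
\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k
      + \frac{1}{2} \sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j
```

where w_i is the budget share of grain i, p_j are prices, x is total household expenditure, and P is the translog price index; the study estimates such share equations for millet, maize, and sorghum across income groups.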

Keywords: expenditure pattern, AIDS model, budget share, price, cereal grains, consumption

Procedia PDF Downloads 178
9478 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces

Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech

Abstract:

Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of contact parts. Engine oils and lubricants use antiwear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. The ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material. At the top surface, the long-chain polyphosphate is a zinc phosphate, and in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a gradient concentration. The polyphosphate chains are partially adherent to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as the lubricant in two tests, at 40 Hz and 50 Hz. Spectroscopic ellipsometry was used to estimate the tribofilm thicknesses because of its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarisation of light reflected by the surface is associated with the refractive index of the surface material or with the thickness of the layer deposited on top. Ellipsometric responses from the tribofilms are modelled by effective medium approximation (EMA), which accounts for the refractive indices of the materials involved, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopy studies, where the presence of ZDDP, O, and C was confirmed. From the EMA models, it was concluded that the tribofilms formed at 40 Hz are thicker and more homogeneous than those formed at 50 Hz. In addition, the refractive indices of the constituent materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm; it exhibits a maximum response in the UV range, characteristic of glassy semitransparent films.
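
A common EMA choice is the Bruggeman mixing rule, which yields the effective permittivity of a two-phase film from the phase permittivities and volume fractions; the sketch below solves it numerically for a real-valued illustration (the phases, fractions, and indices are assumptions, not the tribofilm values):

```python
import numpy as np
from scipy.optimize import brentq

def bruggeman_eps(eps1, eps2, f1):
    """Effective permittivity of a two-phase Bruggeman medium (real-valued
    illustration): f1*(e1-e)/(e1+2e) + (1-f1)*(e2-e)/(e2+2e) = 0."""
    def residual(e):
        return (f1 * (eps1 - e) / (eps1 + 2 * e)
                + (1 - f1) * (eps2 - e) / (eps2 + 2 * e))
    return brentq(residual, min(eps1, eps2), max(eps1, eps2))

# Assumed phases: a zinc-phosphate-like layer (n ~ 1.6) mixed with voids.
eps_phosphate = 1.6 ** 2
eps_void = 1.0
for f in (0.5, 0.7, 0.9):
    e_eff = bruggeman_eps(eps_phosphate, eps_void, f)
    print(f"fill fraction {f:.1f}: n_eff = {np.sqrt(e_eff):.3f}")
```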

Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate

Procedia PDF Downloads 231
9477 Between a Rock and a Hard Place: The Possible Roles of Eternity Clauses in the Member States of the European Union

Authors: Zsuzsa Szakaly

Abstract:

Several constitutions in the European Union have explicit or implicit eternity clauses. Their classic roles have been analyzed so far, but new possibilities are emerging in relation to the identity of the constitutions of the Member States. The aim of this study is to look in detail at the practice of the Constitutional Courts of the Member States regarding eternity clauses, where limiting constitutional amendment has practical bearing, and to examine the influence of such practice on Europeanization. Some states have explicit eternity clauses embedded in the text of the constitution, e.g., Italy, Germany, and Romania. In other states, the Constitutional Court has 'unearthed' implicit eternity clauses from the text of the basic law, e.g., Slovakia and Croatia. Using comparative analysis to examine the explicit or implicit clauses of the constitutions concerned, and taking into consideration recent trends in the judicial opinions of the Member States and fresh scientific studies, the main questions are: How should the double-edged sword of eternity clauses be wielded? To support European integration or to support the sovereignty of the Member State? To help Europeanization or to act against it? Eternity clauses can easily find themselves between a rock and a hard place, the law of the European Union and the law of a Member State, with several possible interpretations. As more and more Constitutional Courts have started to declare elements of their Member States' constitutional identities, these have begun to interfere with the eternity clauses. Will this trend eventually work against Europeanization? As a result of the research, it can be stated that a lowest common denominator exists in the practice of European Constitutional Courts regarding eternity clauses. The chance of a European model, and the possibility of this model influencing the status quo between the European Union and the Member States, will be examined by looking at the answers these courts have found so far.

Keywords: constitutional court, constitutional identity, eternity clause, European Integration

Procedia PDF Downloads 123
9476 Socio-Motor Experience between Affectivity and Movement from Harry Potter to Lord of the Rings

Authors: Manuela Gamba, Niki Mandolesi

Abstract:

Teenagers today have little knowledge of how to move or play together, and the adults who are part of sports culture must find an effective way to foster this essential ability. Our research in Italy uses a 'holistic model' based on fantasy literature to explore the relationships between the game identities and self-identities of young people and the achievement of psycho-motor, emotional, and social well-being in the realms of sport and education. Physical activity projects were carried out in schools and extra-curricular associations in Rome, combining outdoor activities and distance learning. This holistic and malleable game model is inspired by the fantasy journeys of The Lord of the Rings and the Harry Potter books. We know that many people resist the idea of using fantasy and play as a pedagogical tool, but the results obtained in this experience are surprising. Our interventions and investigations focused on promoting self-esteem, awareness, a sense of belonging, social integration, cooperation, well-being, and informed decision making: a basis for healthy and effective citizenship. For teenagers, creative thinking is the right stimulus for comparing the stories of characters to their own journeys through social and self-reflective identity analysis. We observed how important it is to engage students emotionally as well as cognitively, and to enable them to play with identity through relationships with peers. There is a need today for a multidisciplinary synthesis of analog and digital values, especially in response to recent distance-living experiences, and for a global reconceptualization of free time and nature in the human experience.

Keywords: awareness, creativity, identity, play

Procedia PDF Downloads 179
9475 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study

Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod

Abstract:

Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose while minimizing the dose to the vital organs around it. Using abutting fields is common in radiotherapy, and the main problem in doing so is dose inhomogeneity in the junction region. Electron beam divergence and lateral scattering may lead to hot and cold spots in the junction region. One solution to this problem is the use of a spoiler to broaden the penumbra and make the dose uniform in the junction region. The goal of this research was to compare the effects of titanium and aluminum spoilers on dose uniformity in the junction region of oblique electron fields using Monte Carlo simulation. Materials and Methods: A Monte Carlo model of a Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm² field size at SSD = 100 cm in a 30×30×45 cm³ water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc2 software platform, and a Semiflex Chamber-31010 with a sensitive volume of 0.125 cm³ (PTW, Freiburg, Germany) were used for dose distribution measurements; the electron field size was 10×10 cm² and SSD = 100 cm. Validation of the developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model). Spoilers placed at the end of the electron applicator were simulated (using the SLAB component module) with the previously validated phase space file for the 5 MeV nominal energy and 10×10 cm² field size (simulation of the spoiler). An in-house routine was developed to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of dose distribution in abutting electron fields). Results: Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between calculated and measured PDD was 1%, except for the build-up region, in which the difference was 2%. The difference between calculated and measured profiles was 2% at the edges of the field and less than 1% in other regions. The effects of PMMA, aluminum, titanium, and chromium spoilers with thicknesses equivalent to 5 mm of PMMA on dose uniformity in abutting normal electron fields were evaluated. Comparing the R90 and uniformity index of the different materials, aluminum was chosen as the optimum spoiler; titanium gave the maximum surface dose. Aluminum and titanium were therefore chosen for dose uniformity achievement in oblique electron fields. Using the optimum (aluminum) spoiler, the junction dose decreased from 160% to 110% for 15-degree, from 180% to 120% for 30-degree, from 160% to 120% for 45-degree, and from 180% to 100% for 60-degree oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, 180% to 120% for 30 degrees, 160% to 120% for 45 degrees, and 180% to 110% for 60 degrees. In addition, the penumbra width at the surface for 15 degrees increased from 10 mm without a spoiler to 15.5 mm with the titanium spoiler; for 30 degrees from 9 mm to 15 mm, for 45 degrees from 4 mm to 6 mm, and for 60 degrees from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased, and dose homogeneity improved at the junction of the abutting electron fields. The dose in the junction region of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15°, 30°, 45°, and 60° decreased by about 40%, 60%, 40%, and 70%, respectively, for titanium, and by about 50%, 60%, 40%, and 80% for aluminum. Despite the significant decrease in maximum dose with the titanium spoiler, the dose distribution in the junction region could not be reduced below 110%.
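
The effect a spoiler-broadened penumbra has on the junction can be illustrated with a toy lateral-profile model in which each field edge is an error-function roll-off and the junction dose is the sum of the two profiles; all widths and offsets below are illustrative assumptions, not the paper's Monte Carlo data:

```python
import numpy as np
from scipy.special import erf

def field_profile(x, edge, sigma, sign):
    """Idealized lateral profile of one field: 100% inside the field,
    rolling off across a penumbra whose width is set by sigma [mm]."""
    return 50.0 * (1.0 + sign * erf((x - edge) / (np.sqrt(2) * sigma)))

x = np.linspace(-30, 30, 1201)             # mm across the junction

# Perfectly matched edges at x = 0 sum to a flat 100%.
combined = field_profile(x, 0, 2, -1) + field_profile(x, 0, 2, +1)
print(f"Matched fields, max combined dose: {combined.max():.0f}%")

# A 2 mm overlap creates a hot spot; a spoiler-broadened penumbra
# (larger sigma) reduces it.
for sigma, label in ((2.0, "no spoiler"), (5.0, "with spoiler")):
    a = field_profile(x, +1.0, sigma, -1)  # field edges overlapping 2 mm
    b = field_profile(x, -1.0, sigma, +1)
    print(f"{label}: junction max = {(a + b).max():.0f}%")
```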

Keywords: abutting fields, electron beam, radiation therapy, spoilers

Procedia PDF Downloads 147
9474 Development of Method for Recovery of Nickel from Aqueous Solution Using 2-Hydroxy-5-Nonyl- Acetophenone Oxime Impregnated on Activated Charcoal

Authors: A. O. Adebayo, G. A. Idowu, F. Odegbemi

Abstract:

Investigations on the recovery of nickel from aqueous solution using 2-hydroxy-5-nonylacetophenone oxime (LIX-84I) impregnated on activated charcoal were carried out. The LIX-84I was impregnated into the pores of dried activated charcoal by the dry method, and optimum values of the equilibrium parameters (pH, adsorbent dosage, extractant concentration, agitation time, and temperature) were determined using a simulated nickel solution. Kinetics and adsorption isotherm studies were also carried out. The recovery efficiency with LIX-84I impregnated on charcoal was found to depend on the pH of the aqueous solution, with little or no recovery at pH below 4. As the pH was raised, the percentage recovery increased, peaking at pH 5.0. The recovery was found to increase with temperature up to 60°C. Nickel also adsorbed onto the loaded charcoal better at a lower extractant concentration (0.1 M) than at higher concentrations. Similarly, a moderately low dosage (1 g) of the adsorbent showed better recovery than larger dosages. These optimum conditions were used to recover nickel from the leachate of Ni-MH batteries dissolved in sulphuric acid, and a 99.6% recovery was attained. Adsorption isotherm studies showed that the equilibrium data fitted the Temkin model best, with a negative value of the constant b (-1.017 J/mol) and a high correlation coefficient, R², of 0.9913. Kinetic studies showed that the adsorption process followed a pseudo-second-order model. Thermodynamic parameter values (∆G⁰, ∆H⁰, and ∆S⁰) showed that the adsorption was endothermic and spontaneous. The impregnated charcoal recovered nickel using a considerably smaller volume of extractant than solvent extraction requires. Desorption studies showed that the loaded charcoal is reusable three times, and so might be economical for nickel recovery from waste batteries.
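
For reference, the pseudo-second-order kinetic model referred to above can be fitted directly in its nonlinear form, q(t) = k₂qₑ²t/(1 + k₂qₑt); the sketch below does so with scipy on invented uptake data:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

# Invented uptake data: time [min] vs adsorbed Ni [mg/g].
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
q = np.array([4.1, 6.5, 9.0, 11.2, 12.1, 12.7, 13.0])

(qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[q.max(), 0.01])
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```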

Keywords: charcoal, impregnated, LIX-84I, nickel, recovery

Procedia PDF Downloads 128
9473 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements

Authors: M. A. García, J. Vinolas, A. Hernando

Abstract:

Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in the search for a sustainable economic model that optimizes the use of extensive resources, new methods to monitor and prevent failure of steel-based facilities are required. Classical mechanical tests, for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are simply not applicable. Hence, non-invasive monitoring techniques that prevent failure without altering the structural properties of the elements are required. Among them, electromagnetic methods are particularly suitable for non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects modify the electromagnetic properties of an element upon applied stress, and since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for contactless evaluation of internal stress in steel-based elements. In particular, the method relies on measuring the magnetic induction between two coils with the steel specimen between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen produced changes in the induction that allowed us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method for preventing failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways.
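
The measurement principle can be sketched as a transformer-like calculation: the specimen acts as a flux path whose effective permeability shifts with stress, changing the mutual inductance and hence the secondary voltage; all numbers below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Two coaxial coils with the steel specimen as the shared flux path.
# For a sinusoidal primary current, the peak secondary EMF scales with
# the mutual inductance M, here taken proportional to the effective
# relative permeability mu_r (geometry lumped into one toy constant).
def secondary_emf(mu_r, geometry=2e-6, i_peak=0.1, freq=1e3):
    M = mu_r * geometry                     # mutual inductance [H]
    return 2 * np.pi * freq * M * i_peak    # |V| = omega * M * I

# Stress shifts mu_r through magnetoelastic coupling (assumed values).
for stress_mpa, mu_r in [(0, 800), (50, 760), (100, 715)]:
    v = secondary_emf(mu_r)
    print(f"stress {stress_mpa:3d} MPa -> V_peak = {v * 1e3:.0f} mV")
```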

Keywords: magnetoelastic, magnetic induction, mechanical stress, steel

Procedia PDF Downloads 10
9472 The Amorphousness of the Exposure Sphere

Authors: Nipun Ansal

Abstract:

People guard their beliefs and opinions with their lives: beliefs that they have formed over a period of time, and that they will go to any lengths to defend by defying, resisting, and negating any outward stimulus with the potential to shake them. Cognitive dissonance is the term theory uses to describe this. Every human being, in order to defend against cognitive dissonance, applies four rings of defense: selective exposure, selective perception, selective attention, and selective retention. This paper is a discursive analysis of how the onslaught of social media, complete with its intrusive weaponry, has amorphized the external ring of defense: selective exposure. The stimulus-response model of communication is one of the most fundamental models, encompassing the communication behaviours of children and the elderly, individuals and masses, humans and animals alike. The paper deliberates on how information bombardment through the uncontrollable channels of social media, Facebook and Twitter in particular, has dismantled our outer sphere of exposure, leading users online to a state of constant dissonance and thus feeding impulsive action-taking. It applies a case study method, citing an example to corroborate how knowledge generation has given in to information overload, and examines the effect this has on decision making. As the number of stimulus encounters increases, opinion formation precedes knowledge because of the increased demand for participation and the decrease in the time available for information to permeate from the outer sphere of exposure, through perception and attention, to the sphere of retention. This paper discusses the challenge posed by this fleeting, stimulus-rich, peer-dominated media to traditional models of communication and meaning-generation.

Keywords: communication, discretion, exposure, social media, stimulus

Procedia PDF Downloads 390
9471 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor

Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez

Abstract:

Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is numerically analyzed using Ansys Fluent software. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds according to microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy waste. The rheological behavior curves show the substrate to be a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation takes the rheological results into account and models the full-scale CSTR biodigester, coupling the second-order continuity equations, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models, k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° tilted-blade impeller. Three minutes are simulated, since the aim is to study intermittent mixing and its benefit in energy savings. The results show that the absolute errors in the power number for the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power number obtained from the analytical-experimental equation of Nagata. The generalized Reynolds number shows that the fluid dynamics are in a transition-turbulent flow regime. Concerning the Froude number, the result indicates there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5. Design velocities within the biodigester are approximately 0.1 m/s, which is suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° inclined PBT impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If a 24/7 complete mix under stirred agitation is considered, with a plant factor of 80%, an estimated 51,840 kWh/year is consumed; if instead intermittent agitation of 3 min every 15 min is used under the same design conditions, almost 80% of the energy cost is saved. This is a feasible way to predict the energy expenditure of an anaerobic CSTR biodigester. High mixing intensities are recommended at the beginning and end of the joint acetogenesis/methanogenesis phase: high-intensity mixing at the beginning activates the bacteria, and another increase at the end of the hydraulic retention time favors the final dispersion of biogas that may be trapped at the biodigester bottom.
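
Two of the quantities discussed above are easy to reproduce in a short script: a generalized (Metzner-Otto) impeller Reynolds number for a power-law fluid, and the energy saving from intermittent agitation; the fluid and impeller parameters below are assumptions, while the energy figures are those quoted in the abstract.

```python
# Generalized (Metzner-Otto) impeller Reynolds number for a power-law
# fluid: mu_a = K * (k_s * N)**(n - 1), Re = rho * N * D**2 / mu_a.
rho = 1000.0   # substrate density [kg/m^3] (assumed)
K = 2.5        # consistency index [Pa.s^n] (assumed)
n = 0.45       # flow behavior index, < 1 for pseudoplastic (assumed)
k_s = 11.0     # Metzner-Otto constant, typical for turbine impellers
N = 1.0        # impeller speed [rev/s] (assumed)
D = 1.2        # impeller diameter [m] (assumed)

mu_a = K * (k_s * N) ** (n - 1)
Re = rho * N * D**2 / mu_a
Fr = N**2 * D / 9.81
print(f"Re = {Re:.0f} (transition-turbulent), Fr = {Fr:.3f}")

# Energy figures quoted in the abstract: continuous mixing at an 80%
# plant factor versus 3 min of agitation every 15 min.
continuous_kwh = 51_840.0
intermittent_kwh = continuous_kwh * (3.0 / 15.0)
print(f"Intermittent mixing: {intermittent_kwh:,.0f} kWh/yr "
      f"({1 - intermittent_kwh / continuous_kwh:.0%} savings)")
```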

Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste

Procedia PDF Downloads 95
9470 The Basketball Show in the North of France: When the NBA Globalized Culture Meets the Local Carnival Culture

Authors: David Sudre

Abstract:

Today, the National Basketball Association (NBA) is the cultural model of reference for most stakeholders in the French basketball community (players, coaches, team and league managers). In addition to its strong impact on how this sport is played and perceived, the NBA also influences the way professional basketball shows are organized in France (within the Jeep Elite league). The objective of this research is to see how, and to what extent, the NBA show, as a globalized cultural product, disrupts the cultural codes of Jeep Elite's professional basketball shows. The article questions the intercultural phenomenon at stake in sports cultures in France through the prism of the basketball match. This angle sheds light on the underlying relationships between local and global elements. The results of this research come from a one-year survey conducted in a small town in northern France, Le Portel, home of Etoile Sportive Saint Michel (ESSM), a Jeep Elite club. An ethnographic approach was favored, entailing extensive participant observation and semi-directive interviews with supporters of ESSM Le Portel. Through this ethnographic work with the team's fan groups (before, during, and after the games), the researchers were able to better understand the cultural and identity issues that play out in the 'Cauldron', the basketball arena of ESSM Le Portel. The results demonstrate, at first glance, that many basketball events organized in France are copied from the American model. It seems difficult not to imitate the American reference that the NBA represents, whether at the French All-Star Game or a Jeep Elite game at Le Portel. In this case, an acculturation process seems to occur, not only in the way people play but also in the creation of the show (cheerleaders, animations, etc.). However, this globalized American basketball culture, although re-appropriated, is also modified by the members of ESSM Le Portel within their locality. Indeed, they juggle between their culture of origin and their culture of reference to build their basketball show within their sociocultural environment. In this way, Le Portel managers and supporters introduce elements characteristic of their local culture into the show, such as carnival customs and celebrations, two ingredients that fully contribute to the creation of their identity. Ultimately, in this context of 'glocalization', this research shows, on the one hand, that the identity of French basketball is becoming harder to outline and, on the other, that the 'Cauldron' turns out to be a place where (fantasized) local identities are preserved, or even a place of (unconscious) resistance to the dominant model of American basketball culture.

Keywords: basketball, carnival, culture, globalization, identity, show, sport, supporters

Procedia PDF Downloads 132
9469 Using the SMT Solver to Minimize the Latency and to Optimize the Number of Cores in NoC-DSP Architectures

Authors: Imen Amari, Kaouther Gasmi, Asma Rebaya, Salem Hasnaoui

Abstract:

The problem of scheduling and mapping dataflow applications onto multi-core architectures is notoriously difficult. This difficulty stems from the rapid evolution of telecommunication and multimedia systems, accompanied by rapidly increasing user requirements in terms of latency, execution time, power consumption, energy, etc. Obtaining an optimal schedule on multi-core DSP (Digital Signal Processor) platforms is a challenging task. In this context, we present a novel technique and algorithm for finding a valid schedule that optimizes the key performance metrics, particularly latency. Our contribution is based on Satisfiability Modulo Theories (SMT) solving technologies, which are strongly driven by industrial applications and needs. This paper describes a scheduling module integrated into our proposed workflow, intended as a practical approach for programming applications on NoC-DSP platforms. The workflow automatically transforms a Simulink model into a synchronous dataflow (SDF) model; this automatic transformation, followed by SMT-solver scheduling, aims to minimize the final latency and other software/hardware metrics through an optimal schedule, and to find the optimal number of cores to use. The workflow takes as its entry point a Simulink file (.mdl or .slx) derived from embedded Matlab functions, exploiting the synchronous and hierarchical behavior common to Simulink and SDF. Running the scheduler within this workflow, with our SMT-solver algorithm refinements, produces the best possible schedule in terms of latency and number of cores.
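
A minimal sketch of the kind of SMT formulation the abstract describes, written against the Z3 solver's Python API; the task graph, durations, and two-core bound are illustrative assumptions, not the paper's benchmark:

```python
# Sketch (assumed formulation, not the authors' tool): latency-minimizing
# schedule of a small SDF-like task graph onto cores, solved with Z3.
from z3 import Int, Optimize, Or, sat

durations = {"src": 2, "fir": 4, "fft": 5, "sink": 1}          # assumed
edges = [("src", "fir"), ("src", "fft"), ("fir", "sink"), ("fft", "sink")]
NUM_CORES = 2                                                   # assumed bound

opt = Optimize()
start = {t: Int(f"start_{t}") for t in durations}
core = {t: Int(f"core_{t}") for t in durations}
makespan = Int("makespan")

for t, d in durations.items():
    opt.add(start[t] >= 0, core[t] >= 0, core[t] < NUM_CORES)
    opt.add(makespan >= start[t] + d)

# Precedence: a consumer fires only after its producer finishes.
for a, b in edges:
    opt.add(start[b] >= start[a] + durations[a])

# Tasks mapped to the same core must not overlap in time.
tasks = list(durations)
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        a, b = tasks[i], tasks[j]
        opt.add(Or(core[a] != core[b],
                   start[a] + durations[a] <= start[b],
                   start[b] + durations[b] <= start[a]))

opt.minimize(makespan)                 # latency is the primary objective
if opt.check() == sat:
    m = opt.model()
    for t in tasks:
        print(t, "-> core", m[core[t]], "start", m[start[t]])
    print("latency:", m[makespan])
```

A second objective on the number of distinct cores used could be added the same way, which is presumably how the core-count optimization mentioned above would be expressed.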

Keywords: multi-cores DSP, scheduling, SMT solver, workflow

Procedia PDF Downloads 268
9468 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading

Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein

Abstract:

Articular cartilage is a fluid-swollen tissue of synovial joints that provides a lubricated surface for articulation and facilitates load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix. Any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. Assessment of articular cartilage in the early stages of the degenerative process is therefore important to prevent or reduce further joint damage, with its associated socio-economic impact, and there has accordingly been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for cartilage assessment based on the response of the cartilage surface to loading/unloading. The response of articular cartilage to compressive loading is significantly depth-dependent: the superficial zone and the underlying matrix respond differently to deformation. In addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by proteoglycan (PG) loss in the superficial layer. We therefore hypothesized that the response of the superficial layer differs between normal and proteoglycan-depleted tissue. To test this hypothesis, samples of visually intact and artificially proteoglycan-depleted bovine cartilage were compressed at a constant rate to 30 percent strain using a ring-shaped indenter with an integrated ultrasound probe, and then unloaded. The response of the indirectly loaded articular surface was monitored by ultrasound throughout loading/unloading (deformation/recovery). The rate of surface response to loading/unloading differed between normal and PG-depleted cartilage samples. Principal component analysis was performed to assess whether the surface response to loading/unloading can distinguish normal from artificially degenerated samples. The classification analysis showed an overlap between normal and degenerated samples during loading, but a clear distinction between them during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
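
As an illustration of the classification step, the sketch below applies scikit-learn's PCA to synthetic surface-recovery curves; the exponential curve shape and the rate values are assumptions that only mimic the reported difference between groups:

```python
# Sketch (illustrative, not the study's data): PCA on ultrasound-tracked
# surface displacement curves recorded during unloading, to separate normal
# from proteoglycan-depleted samples.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                     # normalized recovery time

def recovery_curves(rate, n_samples):
    """Exponential surface recovery with sample-to-sample noise (assumed shape)."""
    rates = rate + 0.1 * rng.standard_normal(n_samples)
    return np.array([1 - np.exp(-r * t) for r in rates])

normal = recovery_curves(rate=5.0, n_samples=20)    # faster recovery (assumed)
depleted = recovery_curves(rate=2.0, n_samples=20)  # slower recovery (assumed)

X = np.vstack([normal, depleted])
scores = PCA(n_components=2).fit_transform(X)

# With distinct recovery rates the two groups separate along PC1, mirroring
# the clear normal/degenerated distinction reported during unloading.
print("normal PC1 mean:   %.2f" % scores[:20, 0].mean())
print("depleted PC1 mean: %.2f" % scores[20:, 0].mean())
```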

Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound

Procedia PDF Downloads 178
9467 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical step in chocolate processing, in which characteristic flavors develop and the smooth mouthfeel of chocolate emerges through particle size reduction of the cocoa mass and other additives. Determination of the particle size and volatile compound profile of the cocoa mass is therefore important for chocolate manufacturers to ensure the quality of their products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-size chocolate manufacturers, while alternatives such as micrometers and microscopy yield poor measurements and provide little information. Volatile compound analysis of cocoa during conching has similar problems of high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A genetic programming model was established that correlates the volatile compound profiles, together with the cocoa content, sugar content, and temperature during conching, with the particle size of the chocolate particles. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurement, and 99.3% were within ±10%.
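
A hedged sketch of a comparable genetic-programming regression built with the gplearn library (not the authors' system); the feature layout, synthetic data, and target form are all assumptions:

```python
# Sketch under stated assumptions: symbolic regression mapping e-nose sensor
# responses plus recipe/process factors to particle size, via gplearn.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
n = 200
sensors = rng.uniform(0.1, 1.0, (n, 4))        # normalized e-nose responses
ratio = rng.choice([0.5, 1.0, 1.5, 2.0], n)    # cocoa-to-sugar mass ratio
temp = rng.uniform(45, 60, n)                  # conching temperature, degC
hours = rng.choice([0.25, 0.5, 1, 1.5, 2, 4, 8, 24], n)

X = np.column_stack([sensors, ratio, temp, hours])
# Toy target: particle size (um) shrinking with conching time (assumed form).
y = 80 * np.exp(-0.15 * hours) + 5 * ratio + rng.normal(0, 1, n)

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div", "log"),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X, y)
print(gp._program)                  # the evolved symbolic expression
print("R^2 on training data:", gp.score(X, y))
```

The evolved expression is human-readable, which is one reason GP is attractive here over a black-box regressor: the relationship between sensor drift, recipe, and particle size can be inspected directly.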

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 231
9466 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning

Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath

Abstract:

The human brain, among the most complex systems known, harbors vast potential for exploration. Unraveling how it perceives and represents visual input is the province of neural decoding, and recent advances in generative AI, particularly in visual computing, make it possible to probe how the brain comprehends the visual stimuli humans observe. This paper reconstructs human-perceived visual stimuli from functional magnetic resonance imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. We introduce a new architecture, LatentNeuroNet, which aims for the highest semantic fidelity in stimulus reconstruction. The approach employs a latent diffusion model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating high-quality outputs; this addresses the limitations of prior methods such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by decoding text from activity in the brain's ventral visual cortex; the decoded text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, the architecture successfully reconstructs the perceived visual stimuli, and the research provides evidence for identifying the brain regions most influential in cognition and perception.
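
The final conditioning step might look like the sketch below, built on the Hugging Face diffusers API; the fmri_to_text() decoder is a hypothetical placeholder for the paper's learned fMRI-to-caption mapping, and the example prompt is invented:

```python
# Sketch (assumed pipeline, not the paper's LatentNeuroNet code): a caption
# decoded from ventral-visual-cortex fMRI conditions Stable Diffusion v1.5
# to reconstruct the stimulus. Requires a CUDA GPU as written.
import torch
from diffusers import StableDiffusionPipeline

def fmri_to_text(fmri_signal):
    # Hypothetical placeholder: in the paper, text is decoded from
    # ventral-stream activity and refined with a BLIP encoder before
    # entering the denoising process.
    return "a photograph of a dog running on a beach"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = fmri_to_text(fmri_signal=None)   # decoded caption drives conditioning
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("reconstructed_stimulus.png")
```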

Keywords: BLIP, fMRI, latent diffusion model, neural perception

Procedia PDF Downloads 52
9465 Prediction of Covid-19 Cases and Current Situation of Italy and Its Different Regions Using Machine Learning Algorithm

Authors: Shafait Hussain Ali

Abstract:

The Covid-19 disease is caused by the coronavirus SARS-CoV-2. Since its outbreak in China, Italy was the first Western country to be severely affected, and the first country to take drastic measures to control the disease. In early December 2019, a sudden outbreak of coronavirus disease, caused by the novel coronavirus SARS-CoV-2 and producing acute respiratory syndrome, occurred in the Chinese city of Wuhan. The World Health Organization declared the epidemic a public health emergency of international concern on January 30, 2020. By February 14, 2020, 49,053 laboratory-confirmed cases and 1,481 deaths had been reported worldwide. The threat of the disease has forced most governments to implement various control measures. It therefore becomes necessary to analyze the Italian data very carefully, in particular to investigate the present situation and to report the number of infected persons, in the form of positive cases, deaths, hospitalized patients, and other features of infected persons, in a simple form. The goal is a model that clearly presents the real facts and figures and is understandable to any reader, who can draw real benefit from it. The model must include all the features that explain this wide range of facts in a very simple form: total positive cases, current positive cases, hospitalized patients, deaths, and recovery rates.
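
For illustration, here is a minimal Gaussian Naive Bayes classifier, the algorithm named in the keywords, written with scikit-learn rather than the study's RapidMiner tool; the synthetic regional features and the rising/falling target are assumptions:

```python
# Sketch (illustrative, not the study's workflow): Naive Bayes trained on
# daily per-region features to flag whether cases are likely to rise.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(7)
n = 500
# features per day/region: current positives, hospitalized, deaths, recovered
X = rng.poisson(lam=(200, 40, 5, 150), size=(n, 4)).astype(float)
# toy label: rising hospital pressure tends to precede rising cases (assumed)
y = (X[:, 1] / (X[:, 0] + 1) > 0.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```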

Keywords: machine learning tools and techniques, rapid miner tool, Naive-Bayes algorithm, predictions

Procedia PDF Downloads 89
9464 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly used in drug discovery research. These methods are essentially advanced statistical exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public-domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We use an automated process to generate an array of numerical descriptors (features) for each molecule, iteratively eliminating redundant and irrelevant descriptors. Our prediction engine is based on a portfolio of machine learning algorithms; we found the Random Forest algorithm to be the better choice for this analysis, capturing nonlinear relationships in the data and forming a prediction model of reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN to give better results and improve the accuracy of the predictions. This presentation reviews these machine learning and deep learning methods and our implementation protocols, and discusses their usefulness in biomedical and health informatics.
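
A minimal sketch of the workflow just described, assuming RDKit for descriptor generation from SMILES and scikit-learn's Random Forest for classification; the toy molecules and activity labels are invented:

```python
# Sketch (assumed workflow, not the authors' engine): SMILES -> numerical
# descriptors with RDKit, then a Random Forest to predict an activity label.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """A handful of RDKit descriptors per molecule; the full study uses many more."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
labels = [0, 1, 1, 0]                 # toy active/inactive labels (assumed)

X = np.array([featurize(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, labels)

# Predict activity for an unseen molecule (ibuprofen, as an example query).
print(clf.predict([featurize("CC(C)Cc1ccc(cc1)C(C)C(=O)O")]))
```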

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 339
9463 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses across industries is the significant cost of delayed service delivery due to system downtime. Most of these businesses want to predict such problems and proactively prevent them before they occur, which is the core advantage of Prognostic Health Management (PHM). The recent emergence of Industry 4.0, or the Industrial Internet of Things (IIoT), has created a need to monitor system activities and enhance system-to-system and component-to-component interactions, generating large volumes of data known as big data. Analysis of big data is increasingly important; however, complexities inherent in such datasets, notably imbalanced classes, make it extremely difficult to build models with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large volumes of data generated by industrial processes come with varying degrees of complexity that pose a challenge for analytics; in particular, class imbalance exists pervasively in industrial datasets and can degrade the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns and economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model is developed that predicts aircraft component replacement in advance by exploiting historical aircraft data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning, and we also investigate its impact on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of performance, on real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach performs better than similar approaches, and the results also confirm its strength on multiclass imbalanced datasets, with good performance compared to baseline classifiers.
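
A sketch under stated assumptions: imbalanced-learn's BalancedRandomForestClassifier stands in for the paper's hybrid ensemble here, illustrating how resampling inside an ensemble exposes the rare replacement class during learning; the data is synthetic:

```python
# Sketch (named technique swapped in, not the authors' exact method):
# a balanced ensemble where each tree trains on a balanced bootstrap
# sample, so the minority "replace component" class is learned.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.ensemble import BalancedRandomForestClassifier

# A 2% minority class stands in for rare replacement events (assumed rate).
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = BalancedRandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

# Per-class precision/recall matters more than accuracy under imbalance.
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

On data this skewed, a plain classifier can reach high accuracy by always predicting "no replacement"; the per-class report makes the minority-class recall, the quantity that actually prevents unplanned breakdowns, visible.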

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 155