Search results for: non-linear modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5074

274 A Regional Analysis on Co-movement of Sovereign Credit Risk and Interbank Risks

Authors: Mehdi Janbaz

Abstract:

The global financial crisis and the credit crunch that followed magnified the importance of credit risk management and its crucial role in the stability of all financial sectors and of the system as a whole. Many believe that risks faced by the sovereign sector are highly interconnected with banking risks and most likely to trigger and reinforce each other. This study aims to examine (1) the impact of banking and interbank risk factors on the sovereign credit risk of the Eurozone, and (2) how the dynamics of EU Credit Default Swap spreads are affected by crude oil price fluctuations. The hypotheses are tested by employing fitting risk measures and a four-stage linear modeling approach. The sovereign senior 5-year Credit Default Swap (CDS) spreads are used as the core measure of credit risk. The monthly time-series data used in the study are gathered from the DataStream database for the period 2008-2019. First, a linear model tests the impact of regional macroeconomic and market-based factors (STOXX, VSTOXX, Oil, Sovereign Debt, and Slope) on the CDS spread dynamics. Second, the bank-specific factors, including the LIBOR-OIS spread (the difference between the Euro 3-month LIBOR rate and the Euro 3-month overnight index swap rate) and Euribor, are added to the most significant factors of the previous model. Third, the global financial factors, including EUR/USD foreign exchange volatility, the TED spread (the difference between the 3-month T-bill rate and the 3-month LIBOR rate in US dollars), and the Chicago Board Options Exchange (CBOE) Crude Oil Volatility Index (OVX), are added to the major significant factors of the first two models. Finally, a model is generated from a combination of the major factors of each variable set, plus a crisis dummy. The findings show that (1) the explanatory power of LIBOR-OIS on the sovereign CDS spread of the Eurozone is very significant, and (2) there is a meaningful inverse co-movement between the crude oil price and the CDS price of the Eurozone. Surprisingly, adding the TED spread alongside the LIBOR-OIS spread in the third and fourth models increased the predictive power of LIBOR-OIS. Based on the results, LIBOR-OIS, STOXX, the TED spread, Slope, the oil price, OVX, FX volatility, and Euribor are the determinants of CDS spread dynamics in the Eurozone. Moreover, the positive impact of the crisis period on the creditworthiness of the Eurozone is meaningful.
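
As a hedged illustration of the four-stage approach described above, the sketch below fits staged OLS models on synthetic data; the column names and the composition of each stage are stand-ins, not the study's exact specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic monthly data standing in for the 2008-2019 DataStream series;
# column names are hypothetical stand-ins for the study's variables.
rng = np.random.default_rng(0)
n = 144
df = pd.DataFrame({c: rng.normal(size=n) for c in
                   ["cds", "stoxx", "vstoxx", "oil", "debt", "slope",
                    "libor_ois", "euribor", "fx_vol", "ted", "ovx"]})
df["crisis"] = (np.arange(n) < 36).astype(int)  # crisis-period dummy

# Stage 1: regional macroeconomic and market-based factors.
m1 = smf.ols("cds ~ stoxx + vstoxx + oil + debt + slope", data=df).fit()
# Stage 2: bank-specific factors added to the significant Stage 1 factors.
m2 = smf.ols("cds ~ stoxx + oil + libor_ois + euribor", data=df).fit()
# Stage 3: global financial factors (FX volatility, TED spread, OVX).
m3 = smf.ols("cds ~ stoxx + oil + libor_ois + fx_vol + ted + ovx", data=df).fit()
# Stage 4: the major factors of each set plus the crisis dummy.
m4 = smf.ols("cds ~ libor_ois + ted + oil + slope + crisis", data=df).fit()
print(m4.summary())
```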

Keywords: CDS, crude oil, interbank risk, LIBOR-OIS, OVX, sovereign credit risk, TED

Procedia PDF Downloads 144
273 The Phenomenon of the Seawater Intrusion with Fresh Groundwater in the Arab Region

Authors: Kassem Natouf, Ihab Jnad

Abstract:

In coastal aquifers, the interface between fresh groundwater and salty seawater may shift inland, reaching coastal wells and increasing the salinity of the water they pump, putting them out of service. Many Arab coastal sites suffer from this phenomenon due to increased pumping of coastal groundwater. This research aims to prepare a comprehensive study describing the common characteristics of seawater intrusion into coastal freshwater aquifers in the Arab region, along with its general and specific causes and negative effects, in a way that contributes to overcoming the phenomenon and to exchanging expertise between Arab countries in studying and analyzing it. This research also aims to build geographic and relational databases for the data, information, and studies available in Arab countries on seawater intrusion, so as to provide the data and information necessary for managing groundwater resources on Arab coasts, including studying the effects of climate change on these resources and helping decision-makers develop executive programs to overcome seawater intrusion. The research relied on a methodology of analysis and comparison: the available information and data about the phenomenon in the Arab region were collected; the collected material was then studied and analyzed, and the causes of the phenomenon in each case, its results, and the solutions for prevention were stated. Finally, the different cases were compared, the common causes, results, and treatment methods were deduced, and a technical report summarizing the findings was prepared. To overcome seawater intrusion into fresh groundwater: (1) It is necessary to strengthen efforts to monitor the quantity and quality of groundwater on the coasts and to develop mathematical models to predict the impact of climate change, sea-level rise, and human activities on coastal groundwater. (2) Over-pumping of coastal aquifers is an important cause of seawater intrusion; to mitigate this problem, Arab countries should reduce groundwater pumping and promote rainwater harvesting, surface irrigation, and water recycling practices. (3) Artificial recharge of coastal groundwater with various forms of water, whether fresh or treated, is a promising technology for mitigating the effects of seawater intrusion.
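
The abstract does not name a specific model, but the classical Ghyben-Herzberg relation is the usual first approximation for the fresh/saltwater interface and makes the over-pumping mechanism concrete; a minimal sketch:

```python
# Ghyben-Herzberg: the interface sits below sea level at a depth of roughly
# z = rho_f / (rho_s - rho_f) * h  ~  40 * h,
# where h is the freshwater head above sea level. Not a method named in the
# abstract; only a classical screening approximation.
def interface_depth(h_m, rho_fresh=1000.0, rho_sea=1025.0):
    """Depth of the fresh/saltwater interface below sea level (m)."""
    return rho_fresh / (rho_sea - rho_fresh) * h_m

# A 0.5 m drawdown of the coastal water table raises the interface by
# roughly 20 m, which is why over-pumping drives intrusion so strongly.
print(interface_depth(1.0))   # ~40 m for 1 m of freshwater head
print(interface_depth(0.5))   # ~20 m
```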

Keywords: coastal aquifers, seawater intrusion, fresh groundwater, salinity increase, Arab region, groundwater management, climate change effects, sustainable water practices, over-pumping, artificial recharge, monitoring and modeling, data databases, groundwater resources, negative effects, comparative analysis, technical report, water scarcity, groundwater quality, decision-making, environmental impact, agricultural practices

Procedia PDF Downloads 34
272 Lifelong Learning in Applied Fields (LLAF) Tempus Funded Project: Assessing Constructivist Learning Features in Higher Education Settings

Authors: Dorit Alt, Nirit Raichel

Abstract:

Educational practice is continually subject to renewal needs, due mainly to the growing role of information and communication technology, the globalization of education, and the pursuit of quality. These renewal needs require developing updated instructional and assessment practices that put a premium on adaptability to the emerging requirements of present society. However, university instruction is criticized for not coping with these new challenges while continuing to exemplify traditional instruction. In order to overcome this critical inadequacy between current educational goals and instructional methods, the LLAF consortium (including 16 members from 8 countries) is collaborating to create a curricular reform for lifelong learning (LLL) in teachers' education, health care, and other applied fields. The project aims to achieve its objectives by developing and piloting models for training students in LLL and promoting meaningful learning activities that integrate knowledge with personal transferable skills. LLAF has created a practical guide for teachers containing updated pedagogical strategies and assessment tools based on the constructivist approach to learning. This presentation is limited to teachers' education and to the contribution of a pre-pilot study aimed at providing a scale designed to measure constructivist activities in higher education learning environments. A mixed-methods approach was implemented in two phases to construct the scale. The first phase included a qualitative content analysis involving both deductive and inductive category applications of students' observations. The results foregrounded eight categories: knowledge construction, authenticity, multiple perspectives, prior knowledge, in-depth learning, teacher-student interaction, social interaction, and cooperative dialogue. The students' descriptions of their classes were formulated as 36 items. The second phase employed structural equation modeling (SEM). The scale was administered to 597 undergraduate students, and the data showed sufficient goodness of fit to the structural model. This research extends the body of literature by adding the category of in-depth learning, which emerged from the content analysis. Moreover, the theoretical category of social activity has been extended to include two distinct factors: cooperative dialogue and social interaction. Implications of these findings for the LLAF project are discussed.
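
A minimal sketch of the second-phase analysis, assuming the Python package semopy as one possible SEM tool; the eight-factor, 36-item structure is abbreviated to three illustrative factors with hypothetical item names and synthetic data:

```python
import numpy as np
import pandas as pd
import semopy

# Measurement model in semopy/lavaan-style syntax; factor and item names
# are hypothetical stand-ins for the study's eight categories and 36 items.
desc = """
KnowledgeConstruction =~ item1 + item2 + item3
CooperativeDialogue   =~ item4 + item5 + item6
SocialInteraction     =~ item7 + item8 + item9
"""

# Synthetic responses standing in for the 597 undergraduate questionnaires.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(597, 9)),
                  columns=[f"item{i}" for i in range(1, 10)])

model = semopy.Model(desc)
model.fit(df)
print(semopy.calc_stats(model))  # fit indices used to judge goodness of fit
```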

Keywords: constructivist learning, higher education, mixed methodology, lifelong learning

Procedia PDF Downloads 334
271 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, northern Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiency values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that the hyperparameter related to the length of the input sequence contributes most significantly to prediction performance. The findings suggest that input sequence length has a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter to each catchment’s characteristics. This aligns with the well-known “uniqueness of place” paradigm. In prior research, tuning the input sequence length of LSTMs has received limited attention in streamflow prediction: initially it was set to 365 days to capture a full annual water cycle, and later, limited systematic hyperparameter tuning using grid search suggested reducing it to 270 days. Despite the significance of this hyperparameter in hydrological prediction, however, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
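
A minimal PyTorch sketch of how the input-sequence-length hyperparameter shapes an LSTM's training samples for hourly streamflow prediction; names, sizes, and the windowing scheme are illustrative, not those of the MTS-LSTM in the study:

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(forcings, flow, seq_len):
    """Slice (time, features) arrays into (samples, seq_len, features)."""
    X = np.stack([forcings[i:i + seq_len]
                  for i in range(len(forcings) - seq_len)])
    y = flow[seq_len:]
    return (torch.tensor(X, dtype=torch.float32),
            torch.tensor(y, dtype=torch.float32))

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])   # predict flow at the final time step

# seq_len is the hyperparameter under discussion; tuned per catchment here.
forcings = np.random.rand(5000, 4)     # e.g., rainfall, temperature, ...
flow = np.random.rand(5000)            # synthetic hourly streamflow
X, y = make_windows(forcings, flow, seq_len=336)  # 14 days of hourly data
pred = StreamflowLSTM(n_features=4)(X[:8])        # untrained forward pass
```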

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 69
270 How Consumers Perceive Health and Nutritional Information and How It Affects Their Purchasing Behavior: Comparative Study between Colombia and the Dominican Republic

Authors: Daniel Herrera Gonzalez, Maria Luisa Montas

Abstract:

Several factors affect consumer decision-making regarding the use of front-of-package labels as a guide to products that benefit well-being. Currently, several labels help influence or change the purchase decision for food products. These labels communicate the impact that food has on human health; therefore, consumers are more critical and more informed when buying and consuming food products. The research explores the association between front-of-pack labeling and food choice; the association between label content and purchasing decisions is complex and influenced by different factors, including the packaging itself. The main objective of this study was to examine the perception of health labels and nutritional declarations and their influence on buying decisions in the non-alcoholic beverages sector. This comparative study of two developing countries shows how consumers take nutritional labels into account when deciding to buy certain foods. The research applied a quantitative methodology with a correlational scope, analyzing the degree of association between variables. Confirmatory factor analysis (CFA) and structural equation modeling (SEM), powerful multivariate techniques, were used to find the relationships between observable and unobservable variables. The main finding of this research was the identification of three large groups differing in their perception of nutritional and wellness labels and in the effects of those labels. The first group is characterized by a high level of interest in mandatory nutritional information labels and would agree that all products should carry them, given their importance in preventing illness in the consumer; its members almost always pay attention to the brand, size, list of ingredients, and nutritional information of the food, as well as to its effects on health. The second group shows some interest in the importance of the label as a purchase-decision factor and almost always takes into account product characteristics such as size, price, and components when deciding on consumption, but is almost never interested in the effects of these products on health or nutrition. The third group differs from the others by being more neutral regarding nutritional information labels and less interested in the purchase decision, the product characteristics, and their influence on health and nutrition. This new knowledge is essential for companies that manufacture and market food products, because they will have information to adapt to or anticipate the new laws of developing countries as well as the new needs of health-conscious consumers buying food products.

Keywords: healthy labels, consumer behavior, nutritional information, healthy products

Procedia PDF Downloads 107
269 Assessment of the Effects of Urban Development on Urban Heat Islands and Community Perception in Semi-Arid Climates: Integrating Remote Sensing, GIS Tools, and Social Analysis - A Case Study of the Aures Region (Khanchela), Algeria

Authors: Amina Naidja, Zedira Khammar, Ines Soltani

Abstract:

This study investigates the impact of urban development on the urban heat island (UHI) effect in the semi-arid Aures region of Algeria, integrating remote sensing data with statistical analysis and community surveys to examine the interconnected environmental and social dynamics. Using Landsat 8 satellite imagery, temporal variations in the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Built-up Index (NDBI), together with land use/land cover (LULC) changes, are analyzed to understand patterns of urbanization and environmental transformation. These environmental metrics are correlated with land surface temperature (LST) data derived from remote sensing to quantify the UHI effect. To incorporate the social dimension, a structured questionnaire survey is conducted among residents in selected urban areas. The survey assesses community perceptions of urban heat, its impacts on daily life, health concerns, and coping strategies. Survey responses are analyzed statistically to identify correlations between demographic factors, socioeconomic status, and perceived heat stress. Preliminary findings reveal significant correlations between built-up areas (NDBI) and higher LST, indicating the contribution of urbanization to local warming. Conversely, areas with higher vegetation cover (NDVI) exhibit lower LST, highlighting the cooling effect of green spaces. Social survey results provide insights into how the UHI affects different demographic groups, with vulnerable populations experiencing greater heat-related challenges. By integrating remote sensing analysis with statistical modeling and community surveys, this study offers a comprehensive understanding of the environmental and social implications of urban development in semi-arid climates. The findings contribute to evidence-based urban planning strategies that prioritize environmental sustainability and social well-being. Future research should focus on policy recommendations and community engagement initiatives to mitigate UHI impacts and promote climate-resilient urban development.
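
For readers unfamiliar with the band math behind these indices, the sketch below computes NDVI and NDBI and their pixel-wise correlations with LST on synthetic arrays; the Landsat 8 OLI band assignments (red = B4, NIR = B5, SWIR1 = B6) are standard, while the data are placeholders:

```python
import numpy as np

def normalized_difference(a, b):
    # Guard against division by zero on flat or masked pixels.
    return (a - b) / np.where((a + b) == 0, np.nan, a + b)

# Synthetic stand-ins for calibrated surface-reflectance rasters and an
# LST raster derived separately from the thermal bands.
red, nir, swir1 = (np.random.rand(512, 512) for _ in range(3))
lst = 290 + 20 * np.random.rand(512, 512)   # Kelvin, synthetic

ndvi = normalized_difference(nir, red)      # vegetation cover
ndbi = normalized_difference(swir1, nir)    # built-up surfaces

# Pixel-wise correlations of the kind reported in the preliminary findings:
print(np.corrcoef(ndbi.ravel(), lst.ravel())[0, 1])  # expected positive
print(np.corrcoef(ndvi.ravel(), lst.ravel())[0, 1])  # expected negative
```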

Keywords: urban heat island, remote sensing, social analysis, NDVI, NDBI, LST, community perception

Procedia PDF Downloads 41
268 Trajectories of Conduct Problems and Cumulative Risk from Early Childhood to Adolescence

Authors: Leslie M. Gutman

Abstract:

Conduct problems (CP) represent a major challenge, with wide-ranging and long-lasting individual and societal impacts. Children experience heterogeneous patterns of conduct problems that differ in age of onset, developmental course, and related risk factors from around age 3. Early childhood therefore represents a potential window for intervention efforts aimed at changing the trajectory of early-starting conduct problems. Using the UK Millennium Cohort Study (n = 17,206 children), this study (a) identifies trajectories of conduct problems from ages 3 to 14 years and (b) assesses the cumulative and interactive effects of individual, family, and socioeconomic risk factors from age 9 months to 14 years. Risk factors were assessed in three domains: child (i.e., low verbal ability, hyperactivity/inattention, peer problems, emotional problems), family (i.e., single-parent families, parental poor physical and mental health, large family size), and socioeconomic (i.e., low family income, low parental education, unemployment, social housing). A cumulative risk score for each of the child, family, and socioeconomic domains at each age was calculated, and it was then examined how the cumulative risk scores explain variation in the trajectories of conduct problems. Lastly, interactive effects among the different domains of cumulative risk were tested. Using group-based trajectory modeling, four distinct trajectories were found: a ‘low’ problem group and three groups showing childhood-onset conduct problems (‘school-age onset’; ‘early-onset, desisting’; and ‘early-onset, persisting’). The ‘low’ group (57% of the sample) showed a low probability of conduct problems, close to zero, from 3 to 14 years. The ‘early-onset, desisting’ group (23% of the sample) demonstrated a moderate probability of CP in early childhood, with a decline from 3 to 5 years and a low probability thereafter. The ‘early-onset, persisting’ group (8%) followed a high probability of conduct problems, which declined from 11 years but remained close to 70% at 14 years. The ‘school-age onset’ group (12% of the sample) showed a moderate probability of conduct problems from 3 to 5 years, with a sharp increase by 7 years, rising to 50% at 14 years. In terms of individual risk, all factors increased the likelihood of being in the childhood-onset groups compared to the ‘low’ group. For cumulative risk, the socioeconomic domain at 9 months and 3 years, the family domain at all ages except 14 years, and the child domain at all ages differentiated the childhood-onset groups from the ‘low’ group. Cumulative risk at 9 months and 3 years did not differentiate between the ‘school-age onset’ and ‘low’ groups. Significant interactions were found between the domains for the ‘early-onset, desisting’ group, suggesting that low risk in one domain may buffer the effects of high risk in another. The implications of these findings for preventive interventions will be highlighted.
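
Group-based trajectory modeling is typically fitted with specialized tools (e.g., PROC TRAJ or the R package lcmm); as a rough, hedged Python approximation, a Gaussian mixture over each child's vector of conduct-problem scores across ages recovers latent trajectory groups. Synthetic data only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
ages = [3, 5, 7, 11, 14]              # assessment waves, as in the study

# Two synthetic latent groups: flat-low and persistently high probabilities.
low = rng.normal(0.05, 0.03, size=(1000, len(ages)))
persist = rng.normal(0.75, 0.10, size=(150, len(ages)))
X = np.vstack([low, persist]).clip(0, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_.round(2))            # mean trajectory of each latent group

# In practice the number of groups (four in the study) is selected by
# comparing information criteria such as BIC across candidate models.
print(GaussianMixture(n_components=4, random_state=0).fit(X).bic(X))
```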

Keywords: conduct problems, cumulative risk, developmental trajectories, early childhood, adolescence

Procedia PDF Downloads 251
267 Adaptive Environmental Control System Strategy for Cabin Air Quality in Commercial Aircrafts

Authors: Paolo Grasso, Sai Kalyan Yelike, Federico Benzi, Mathieu Le Cam

Abstract:

The cabin air quality (CAQ) in commercial aircraft is of prime interest, especially in the context of the COVID-19 pandemic. Current Environmental Control Systems (ECS) rely on a prescribed fresh airflow per passenger to dilute contaminants. An adaptive ECS strategy is proposed, leveraging air sensing and filtration technologies to ensure better CAQ. This paper investigates the CAQ level achieved in a commercial aircraft’s cabin during various flight scenarios. The modeling and simulation analysis is performed in a Modelica-based environment describing the dynamic behavior of the system. The model includes three main systems: the cabin, the recirculation loop, and the air-conditioning pack. The cabin model evaluates the thermo-hygrometric conditions and the air quality in the cabin depending on the number of passengers and crew members, the outdoor conditions, and the conditions of the air supplied to the cabin. The recirculation loop includes models of the recirculation fan, ordinary and novel filtration technology, the mixing chamber, and the outflow valve. The air-conditioning pack includes models of the heat exchangers and turbomachinery needed to condition the hot pressurized air bled from the engine, as well as selected contaminants originating from outside or bled from the engine. Different ventilation control strategies are modeled and simulated. Currently, a limited understanding of contaminant concentrations in the cabin and the lack of standardized and systematic methods to collect and record data constitute a challenge in establishing a causal relationship between CAQ and passengers' comfort. As a result, contaminants are neither measured nor filtered during flight, and the current sub-optimal way to avoid their accumulation is dilution with the fresh airflow. However, the use of a prescribed amount of fresh air comes at a cost, making the ECS the most energy-demanding non-propulsive system within an aircraft. In this context, the study shows that an ECS based on a reduced and adaptive fresh airflow, relying on air sensing and filtration technologies, provides promising results in terms of CAQ control. The comparative simulation results demonstrate that the proposed adaptive ECS brings substantial improvements to the CAQ, both in controlling the asymptotic values of contaminant concentration and in mitigating hazardous scenarios such as fume events. Original architectures allowing adaptive control of the inlet airflow rate based on monitored CAQ will change the requirements for filtration systems and redefine ECS operation.
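
A minimal well-mixed mass-balance sketch, with purely illustrative numbers, of why fresh-air dilution sets the asymptotic contaminant concentration that the adaptive ECS targets:

```python
# Single-zone cabin contaminant balance: dC/dt = (S - Q * C) / V,
# where S is the generation rate, Q the fresh airflow, V the cabin volume.
V = 150.0          # cabin air volume, m^3 (illustrative)
S = 5e-4           # contaminant generation rate, g/s (illustrative)
dt, T = 1.0, 3600.0

def simulate(fresh_flow):  # fresh airflow in m^3/s
    c = 0.0
    for _ in range(int(T / dt)):
        c += dt * (S - fresh_flow * c) / V   # explicit Euler step
    return c

# The steady state tends to S / Q: halving the fresh airflow doubles the
# asymptotic concentration unless filtration removes the difference.
print(simulate(0.50), S / 0.50)
print(simulate(0.25), S / 0.25)
```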

Keywords: cabin air quality, commercial aircraft, environmental control system, ventilation

Procedia PDF Downloads 101
266 Grassland Development on Evacuated Sites for Wildlife Conservation in Satpura Tiger Reserve, India

Authors: Anjana Rajput, Sandeep Chouksey, Bhaskar Bhandari, Shimpi Chourasia

Abstract:

Ecologically, a grassland is any plant community dominated by grasses, whether it exists naturally or because of management practices. Most forest grasslands are anthropogenic: plant communities established for forage production, though some are established for soil and water conservation and wildlife habitat. In Satpura Tiger Reserve, Madhya Pradesh, India, most of the grasslands have been established on evacuated village sites. A total of 42 villages were evacuated, and the study was carried out at 23 sites to evaluate habitat improvement. Grasslands were classified into three categories: evacuated sites, established sites, and control sites. The present study assessed the impact of various management interventions on grassland health. Grasslands were assessed for composition, the status of palatable and non-palatable grasses, the status of herbs and legumes, the status of weed species, and carrying capacity. The presence of wild herbivore species in the grasslands, their abundance, and the availability of water resources were also assessed. Grassland productivity depends mainly on the biotic and abiotic components of the area, but management interventions may also play an important role in grassland composition and productivity. Variation in the status of palatable and non-palatable grasses, legumes, and weeds was recorded and found to be affected by management intervention practices. Overall, across the studied grasslands, the most dominant grasses recorded are Themeda quadrivalvis, Dichanthium annulatum, Ischaemum indicum, Oplismenus burmanii, Setaria pumilla, Cynodon dactylon, Heteropogon contortus, and Eragrostis tenella. The presence of wild herbivores (Chital, Sambar, Bison, Bluebull, Chinkara, and Barking deer) in the grassland area was recorded through camera traps, and their abundance was estimated. The developed grasslands were assessed for habitat suitability for Chital (Axis axis) and Sambar (Rusa unicolor). The parameters considered for suitability modeling are the biotic and abiotic life-requisite components existing in the area: density of grasses, density of legumes, availability of water, site elevation, and site distance from human habitation. The findings of the present study would be useful for further grassland management and animal translocation programmes.
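
A simple forage-based carrying-capacity estimate of the kind used in such grassland assessments; all coefficients below are illustrative assumptions, not values from the study:

```python
def carrying_capacity(area_ha, forage_kg_per_ha, proper_use_factor=0.5,
                      intake_kg_per_day=5.0, days=365):
    """Number of herbivore units an area can support for `days`."""
    usable_forage = area_ha * forage_kg_per_ha * proper_use_factor
    return usable_forage / (intake_kg_per_day * days)

# e.g., a hypothetical 120 ha grassland producing 2,500 kg/ha dry matter:
print(round(carrying_capacity(120, 2500), 1), "animal units per year")
```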

Keywords: carrying capacity, dominant grasses, grassland, habitat suitability, management intervention, wild herbivore

Procedia PDF Downloads 127
265 Effects of Parental Socio-Economic Status and Individuals' Educational Achievement on Their Socio-Economic Status: A Study of South Korea

Authors: Eun-Jeong Jang

Abstract:

Inequality has been considered a core issue in public policy. Korea is categorized as a country with a high level of inequality, which matters not only to the current generation but also to future ones. The relationship between individuals' origin and destination has implications for intergenerational inequality. To our knowledge, previous work on this topic was mostly conducted at the macro level using panel data; at that level, however, there is no room to track what happened during the time between origin and destination. Individuals' origin is represented by their parents' socio-economic status and, in the same way, destination is translated into their own socio-economic status. The first research question is how origin is related to destination. Certainly, destination is highly affected by origin; in this view, people's destination is already set to be more or less a reproduction of the previous generation. However, educational achievement is widely believed to be a factor independent of origin, and from this point of view there is a possibility to change the path given by one's parents through educational attainment. Hence, the second research question is how education is related to destination and which factor, origin or education, is more influential. The third research question concerns the mediation of education between origin and destination. Socio-economic status in this study refers to class, as a sociological term, as well as to wealth, including labor and capital income, as an economic term. The combination of class and wealth is expected to give a more accurate picture of the hierarchy in a society: in some cases of non-manual and professional occupations, even though people are categorized into a relatively high class, their income is much lower than that of others in the same class. Moreover, this combination is one way to overcome the limitation of the retrospective view during a survey. Education is measured in absolute terms, as years of schooling, and in relative terms, as the rank of the school. Moreover, all respondents were asked about their effort, scaled by time intensity and self-motivation before and during college, based on a standard questionnaire provided by an academic achievement model. This research is based on a survey at the individual level. The sampling targets were employed individuals, regardless of gender, including wage earners and self-employed people, aged in their thirties and forties, because this age group is considered to have reached the stage of job stability. In most cases, the researcher met respondents in person, visiting their workplaces or homes, and had a chance to interview some of them. One hundred forty individual responses were collected from May to August 2017. The data will be analyzed by multiple regression (Q1, Q2) and structural equation modeling (Q3).

Keywords: class, destination, educational achievement, effort, income, origin, socio-economic status, South Korea

Procedia PDF Downloads 273
264 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zones of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost, under restrictions on the number of people to be saved within the allowable response time. We consider the special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies. A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search-effectiveness value for each location in the zone and searches the location with the highest search-effectiveness value, as sketched below. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
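
A hedged sketch of such a greedy strategy: keep a belief over locations, search the location with the highest current effectiveness, and update the belief after each unsuccessful look using the sensor's false-negative rate. Parameters are illustrative, and the false-positive branch is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
prior = rng.random(n); prior /= prior.sum()   # P(target in location i)
q = np.full(n, 0.8)                           # detection prob. given presence
cost = rng.uniform(1.0, 3.0, n)               # search cost per location

p = prior.copy()
for step in range(10):
    eff = p * q / cost                        # search-effectiveness value
    k = int(np.argmax(eff))
    print(f"step {step}: search location {k}, effectiveness {eff[k]:.3f}")
    # Unsuccessful search: Bayes update. The whole failure history is
    # encoded in the evolving posterior p, as in the model above.
    miss = 1.0 - p[k] * q[k]                  # P(no detection at k)
    p[k] *= (1.0 - q[k]) / miss               # survived a look: downweight
    others = np.arange(n) != k
    p[others] /= miss                         # renormalize the rest
```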

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 172
263 Exploring and Reducing the Performance Gap between Building Modelling Simulations and the Real World: Case Study

Authors: B. Salehi, D. Andrews, I. Chaer, A. Gillich, A. Chalk, D. Bush

Abstract:

With the rapid increase of energy consumption in buildings in recent years, especially with the rise in population and growing economies, energy savings in buildings become ever more critical. One of the key factors in ensuring energy consumption is controlled and kept to a minimum is to utilise building energy modelling at the very early stages of design, so building modelling and simulation is a growing discipline. During the design phase of construction, modelling software can be used to estimate a building’s projected energy consumption as well as building performance. The growth in the use of building modelling software packages opens the door for improvements in the design and also in the modelling itself, by introducing novel methods such as building information modelling-based software packages which promote conventional building energy modelling into the digital building design process. To understand the most effective implementation tools, research projects should include elements of real-world experiments and not just rely on theoretical and simulated approaches. A review of the related studies shows that they are mostly based on modelling and simulation, which can be due to various reasons such as the more expensive and time-consuming nature of real-time data-based studies. Taking into account the recent rise of building energy modelling software packages and the increasing number of studies utilising these methods, the accuracy and reliability of these packages have become even more crucial and critical. The Energy Performance Gap refers to the discrepancy between predicted energy savings and the realised actual savings, especially after buildings implement energy-efficient technologies. There are many different software packages available, either free or with commercial versions. In this study, IES VE (Integrated Environmental Solutions Virtual Environment) is used, as it is a common building energy modelling and simulation package in the UK. This paper describes a study that compares real-time results with those of a virtual model to illustrate this gap. The subject of the study is a north-west (345°) facing, naturally ventilated conservatory within a domestic building in London, monitored during summer to capture real-time data, which are then compared to the virtual results from IES VE. In this project, the effect of wrongly positioned blinds on overheating is studied, providing new evidence of the performance gap. Furthermore, the challenges of modelling the input of solar shading products in IES VE are considered.
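
The performance-gap metric itself reduces to a relative discrepancy between monitored and simulated values; a minimal sketch with illustrative numbers, not the study's data:

```python
def performance_gap(measured, predicted):
    """Positive when the building performs worse than the model predicts."""
    return (measured - predicted) / predicted

# e.g., a measured summer peak of 34.1 degC in the conservatory against a
# simulated 30.5 degC (hypothetical values for illustration only):
print(f"{performance_gap(34.1, 30.5):+.1%}")
```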

Keywords: building energy modelling and simulation, integrated environmental solutions virtual environment, IES VE, performance gap, real time data, solar shading products

Procedia PDF Downloads 139
262 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; the facility is thus shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides clean, renewable electrical energy covering 17% and 76% of the supply of Brazil and Paraguay, respectively. The plant started generating in 1984. It counts on 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant generated a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu plant, from its upstream stretch (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in sediment yield, using data from 2001 to 2016. The data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu plant have been made, the first by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the following 44 years, aiming to confer more precision on the estimates based on more up-to-date data sets. The analysis of each monitoring station clearly shows strong increasing tendencies in sediment yield through the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso, and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work; both thoroughly follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
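
A back-of-the-envelope volume balance underlying such lifespan studies; the Borland and Miller area-reduction method additionally distributes the deposits over the reservoir's depth, which this sketch omits, and every number below is an illustrative assumption:

```python
def lifespan_years(usable_volume_m3, annual_sediment_t,
                   trap_efficiency=0.9, bulk_density_t_m3=1.1):
    """Years until nominal filling under a constant sediment inflow."""
    deposited_m3_per_year = (annual_sediment_t * trap_efficiency
                             / bulk_density_t_m3)
    return usable_volume_m3 / deposited_m3_per_year

# e.g., a hypothetical 29 km^3 of storage and 160 Mt/yr of sediment inflow:
print(round(lifespan_years(29e9, 160e6)), "years")
```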

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 274
261 Modeling Atmospheric Correction for Global Navigation Satellite System Signal to Improve Urban Cadastre 3D Positional Accuracy: The Case of TANA and ADIS IGS Stations

Authors: Asmamaw Yehun

Abstract:

“TANA” is the name of an International GNSS Service (IGS) station located at the Institute of Land Administration, Bahir Dar University. The station is named after Lake Tana, one of the largest lakes in Africa. The Institute of Land Administration (ILA) is part of Bahir Dar University, located in Bahir Dar, the capital of the Amhara National Regional State; the institute is the first of its kind in East Africa. The station was installed through cooperation between the ILA and the Swedish International Development Cooperation Agency (SIDA), with SIDA funding. A Continuously Operating Reference Station (CORS) network is a network of stations that provide global navigation satellite system data to support three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS since 2013; such sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station are downloadable through the Internet for post-processing by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made the first observations at the TANA monitoring station on May 29th, 2013, using Leica 1200 receivers and AX1202GG antennas, observing from 11:30 until 15:20, about 3 h 50 min. The data were processed in CSRS-PPP, the automatic post-processing service of Natural Resources Canada (NRCan). Post-processing was done on June 27th, 2013, so the precise ephemeris, available about 30 days after observation, was used. We found Latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), Longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m), and Ellipsoidal Height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data and found very close agreement. TANA has been the second IGS station in Ethiopia since 2015. It provides data for civilian users, researchers, and governmental and non-governmental users. The TANA station is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site offers very good satellite visibility. In order to test the hydrostatic and wet zenith delays for positional data quality, we used GAMIT/GLOBK and found that TANA is the most accurate IGS station in East Africa. Owing to lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations have 3D positional accuracies of 2 and 1.9 meters, respectively.
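
The hydrostatic part of the tropospheric zenith delay mentioned above is commonly modeled with the Saastamoinen formula (the wet part requires water-vapour estimates); a sketch with an assumed surface pressure:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_km):
    """Zenith hydrostatic delay in meters (Saastamoinen model)."""
    return (0.0022768 * pressure_hpa /
            (1.0 - 0.00266 * math.cos(2.0 * lat_rad)
                 - 0.00028 * height_km))

# TANA: latitude ~11.57 deg N, ellipsoidal height ~1851 m; the surface
# pressure (~815 hPa) is an illustrative assumption, not a measured value.
print(saastamoinen_zhd(815.0, math.radians(11.569), 1.851))  # ~1.86 m
```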

Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour

Procedia PDF Downloads 69
260 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper gives insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the various apparatus used on the dairy farm, a household where the workers live, and a small electric vehicle (EV) whose batteries can also be used as a power source in case of emergency. The inclusion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads that cannot tolerate an interrupted power supply. Consequently, a photovoltaic-thermal (PVT) panel is included in the plant and tasked with providing energy independently of potentially disruptive events such as engine malfunction or scarce and unstable fuel supplies. To manage the plant efficiently, an energy dispatch strategy is created to control the flow of energy between the power sources and the thermal and electric storage, as sketched below. In this article we elaborate models of the equipment, and from these models we extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized 25% above the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors; in this way, the excess electric energy generated can be transformed into useful heat. The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on the two days of the year with the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only on the day with very low irradiation does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice-builder and air-conditioning energy consumption by 40%.
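
A rule-based sketch of the dispatch idea: serve the loads from the PVT panel first, then storage, and only then start the genset at or above its minimum load, diverting any excess to dump resistors as heat. Names and thresholds are illustrative assumptions, not the paper's controller:

```python
def dispatch(pvt_kw, prioritized_kw, deferrable_kw, battery_soc,
             genset_min_load_kw=10.0, soc_min=0.2):
    plan = {"pvt": pvt_kw, "battery": 0.0, "genset": 0.0, "dump": 0.0}
    residual = prioritized_kw + deferrable_kw - pvt_kw
    if residual <= 0:
        plan["dump"] = -residual     # excess power to dump resistors (heat)
        return plan
    if battery_soc > soc_min:
        plan["battery"] = residual   # storage covers the deficit
        return plan
    # The genset must run at or above its minimum acceptable load; any
    # surplus again becomes useful heat through the dump loads.
    plan["genset"] = max(residual, genset_min_load_kw)
    plan["dump"] = plan["genset"] - residual
    return plan

print(dispatch(pvt_kw=6.0, prioritized_kw=4.0,
               deferrable_kw=3.0, battery_soc=0.15))
```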

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 176
259 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research

Authors: V. Bezverbny

Abstract:

The relevance of the project stems from the steady growth of deviant behavior statistics among Moscow citizens. In recent years the socioeconomic health, wealth, and life expectancy of Moscow residents have been steadily improving, yet crime and drug addiction have grown seriously. Another serious problem in Moscow is the economic stratification of the population: the cost of otherwise comparable residential areas differs by a factor of 2.5. The project is aimed at comprehensive research and the development of a methodology for evaluating the main factors and causes of the growth of deviant behavior in Moscow. The main project objective is to find the links between urban environment quality and the dynamics of citizens' deviant behavior at the regional and municipal levels using statistical research methods and GIS modeling. The conducted research allowed us: (1) to evaluate the dynamics of deviant behavior in Moscow's different administrative districts; (2) to describe the causes of increases in crime, drug addiction, alcoholism, and suicide among the city population; (3) to develop a classification of city districts based on crime rate; (4) to create a statistical database containing the main indicators of Moscow population deviant behavior in 2010-2015, including information on crime level, alcoholism, drug addiction, and suicides; (5) to present statistical indicators that characterize the dynamics of deviant behavior as the city's territory expands; (6) to analyze the main sociological theories and factors of deviant behavior in order to concretize the types of deviation; and (7) to consider the main theoretical statements of urban sociology devoted to the causes of deviant behavior in megalopolis conditions. To explore how the factors of deviant behavior are differentiated, a questionnaire was developed and a sociological survey involving more than 1,000 people from different districts of the city was conducted. The survey covered the socioeconomic and psychological factors of deviant behavior and included residents' open-ended answers about the most pressing problems in their districts and their reasons for wanting to leave. The survey results lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, large numbers of illegal migrants and homeless people, the proximity of large transport hubs and stations, ineffective policing, the availability of alcohol and accessibility of drugs, a low level of psychological comfort for Moscow citizens, and a large number of building projects.

Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification

Procedia PDF Downloads 193
258 Performance Analysis of Double Gate FinFET at Sub-10 nm Node

Authors: Suruchi Saini, Hitender Kumar Tyagi

Abstract:

With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As devices are scaled down, several short-channel effects (SCE) occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry, and the FinFET is one of the most promising; the double-gate Fin field-effect transistor in particular suppresses short-channel effects and performs well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node, and the performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool of significant importance for understanding field-effect transistors and for transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance, and the impact of different channel widths and source and drain lengths on the Id-Vg curve and transconductance is examined; a sketch of how these figures of merit are extracted follows below. Device performance is affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device's characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; it is therefore crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, while the source and drain lengths have no significant effect on the on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved at a fin width of 3 nm and a gate length of 10 nm, and of 2380 S/m at a source and drain extension length of 5 nm.
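
A sketch of how such figures of merit are typically extracted from an Id-Vg sweep; the logistic curve below is only a placeholder for MuGFET output, not device data from the study:

```python
import numpy as np

vg = np.linspace(0.0, 1.0, 101)                  # gate voltage sweep, V
# Synthetic transfer curve: leakage floor plus a turn-on characteristic.
id_a = 1e-12 + 1e-4 / (1.0 + np.exp(-(vg - 0.35) / 0.05))  # drain current, A

gm = np.gradient(id_a, vg)                       # transconductance, S
on_off = id_a.max() / id_a.min()                 # current on-off ratio
vth = vg[np.argmax(gm)]                          # peak-gm threshold estimate

print(f"gm,max = {gm.max():.3e} S, Ion/Ioff = {on_off:.2e}, "
      f"Vth ~ {vth:.2f} V")
```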

Keywords: current on-off ratio, FinFET, short-channel effects, transconductance

Procedia PDF Downloads 61
257 Risk Based Inspection and Proactive Maintenance for Civil and Structural Assets in Oil and Gas Plants

Authors: Mohammad Nazri Mustafa, Sh Norliza Sy Salim, Pedram Hatami Abdullah

Abstract:

Civil and structural assets normally have a design life averaging more than 30 years and are normally subject to a slow degradation process. Because repair and strengthening work for these assets does not normally depend on a plant shutdown, their maintenance and integrity restoration are mostly done on an “as required”, “run to failure” basis. However, unlike in other industries, exposure in the oil and gas environment is harsher as a result of corrosive soil and groundwater, chemical spills, frequent wetting and drying, icing and de-icing, steam and heat, etc. Due to this exposure and the increasing level of structural defects and rectification that comes with the increasing age of plants, asset integrity assessment requires a more defined scope and procedures based on risk and asset criticality. This leads to the establishment of a risk-based inspection and proactive maintenance procedure for civil and structural assets. To date there is hardly any procedure or guideline for the integrity assessment and systematic inspection and maintenance of onshore civil and structural assets. Group Technical Solutions has developed a procedure and guideline that take into consideration credible failure scenarios, asset risk and criticality from process safety and structural engineering perspectives, structural importance, and modeling and analysis, among others. Detailed inspection that includes destructive and non-destructive tests (DT and NDT) and structural monitoring is also performed to quantify defects, assess their severity and impact on integrity, and identify the timeline for integrity restoration. Each defect and its credible failure scenario are assessed against the risk to people, the environment, reputation, and production, as sketched below. This technical paper is intended to share the established procedure and guideline and their execution in oil and gas plants. In line with the overall roadmap, the procedure and guideline will form part of specialized solutions to increase production and to meet the “Operational Excellence” target while extending the service life of civil and structural assets. As a result of implementation, the management of civil and structural assets is now done more systematically, and the “fire-fighting” mode of maintenance is gradually being phased out and replaced by a proactive and preventive approach. This technical paper will also set the criteria and pose the challenge to the industry of innovative repair and strengthening methods for civil and structural assets in the oil and gas environment, in line with safety, constructability, and the continuous modification and revamp of plant facilities to meet production demand.
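
A hedged sketch of the risk-ranking step: each credible failure scenario is scored for likelihood and for consequence across the four receptor categories, and inspection effort follows the ranking. The 5x5 scale and the max-consequence rule are illustrative assumptions, not the firm's actual matrix:

```python
def risk_score(likelihood, people, environment, reputation, production):
    """Likelihood and each consequence rated 1-5; score ranges 1-25."""
    consequence = max(people, environment, reputation, production)
    return likelihood * consequence

# Hypothetical assets scored for ranking purposes only.
assets = {
    "pipe rack A":   risk_score(4, people=3, environment=2,
                                reputation=2, production=4),
    "cooling tower": risk_score(2, people=2, environment=3,
                                reputation=1, production=2),
}
for name, score in sorted(assets.items(), key=lambda kv: -kv[1]):
    print(name, score)   # highest-risk assets are inspected first, deepest
```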

Keywords: assets criticality, credible failure scenario, proactive and preventive maintenance, risk based inspection

Procedia PDF Downloads 404
256 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to loss of human life and drilling facilities. Current well-control practices incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties to enable realistic kick treatment, together with a corresponding numerical solution method built on an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ is used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was the section near the drill bit. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has only a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less free gas will exist in the annulus.
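
Why free gas distorts bottom-hole pressure: the hydrostatic head depends on the mixture density. A single-cell sketch with illustrative values, not the STAR-CCM+ model itself:

```python
g = 9.81  # m/s^2

def bottomhole_pressure(surface_pa, depth_m, gas_fraction,
                        rho_mud=1400.0, rho_gas=150.0):  # kg/m^3, assumed
    # Homogeneous no-slip mixture density for a single vertical cell.
    rho_mix = gas_fraction * rho_gas + (1.0 - gas_fraction) * rho_mud
    return surface_pa + rho_mix * g * depth_m

p_no_gas = bottomhole_pressure(2e5, 3000.0, 0.00)
p_kick = bottomhole_pressure(2e5, 3000.0, 0.15)
# With these assumed numbers, 15% free gas lowers BHP by roughly 13%,
# which is why ignoring gas distribution misprices the well's pressure.
print((p_no_gas - p_kick) / p_no_gas)
```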

Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 118
255 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents

Authors: A. Kesraoui, M. Seffen

Abstract:

Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but it has a major drawback: a high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. Evaluating the potential of these materials is important in order to propose an alternative to the generally expensive adsorption processes used to remove organic compounds; indeed, these materials are very abundant in nature and low cost. The biosorption process is certainly effective in removing pollutants, but its kinetics are slow, and improving biosorption rates is the challenge that would make the process competitive with oxidation and adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original, and very interesting means of accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol using this new alternative: alternating current. The adsorption experiments were performed in a batch reactor by adding the adsorbent to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the setup comprises a current source delivering an alternating voltage of 2 to 15 V, connected to a voltmeter for reading the voltage. Two zinc electrodes, 4 cm apart, were immersed in the 150 mL cell. Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the rate of the indigo carmine adsorption process and reducing the treatment time. We also studied the influence of alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and onto the hybrid material Luffa cylindrica-ZnO. The results showed that alternating current accelerated the biosorption of methylene blue onto both adsorbents and increased the adsorbed amount. To improve the removal of phenol, we coupled alternating current with biosorption onto the same two adsorbents; here too, alternating current improved their performance by increasing the adsorption rate and capacity and reducing the processing time.
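
Acceleration claims of this kind are usually quantified by fitting kinetic models to uptake data q(t), e.g. the pseudo-first-order model q(t) = qe(1 - exp(-k1 t)); a larger fitted k1 under AC indicates faster biosorption. A sketch on synthetic stand-in data:

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Pseudo-first-order uptake: qe in mg/g, k1 in 1/min."""
    return qe * (1.0 - np.exp(-k1 * t))

t = np.linspace(0, 120, 13)  # minutes
# Synthetic "without AC" and "with AC" runs; values are not measurements.
q_noac = pfo(t, 25.0, 0.03) + np.random.default_rng(0).normal(0, 0.3, t.size)
q_ac = pfo(t, 27.0, 0.07) + np.random.default_rng(1).normal(0, 0.3, t.size)

(qe0, k0), _ = curve_fit(pfo, t, q_noac, p0=(20, 0.05))
(qe1, k1), _ = curve_fit(pfo, t, q_ac, p0=(20, 0.05))
print(f"without AC: qe = {qe0:.1f} mg/g, k1 = {k0:.3f} 1/min")
print(f"with AC:    qe = {qe1:.1f} mg/g, k1 = {k1:.3f} 1/min")  # faster
```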

Keywords: adsorption, alternating current, dyes, modeling

Procedia PDF Downloads 160
254 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions that satisfy the demands of all the participants by keeping abreast of regulatory change. In recent years, progress has been made on frameworks and on the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus that regulators and policymakers place on risk management, corporate governance, and the organization's culture is of special interest, as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The approach of the model can vary based on various organizational characteristics. Due to the intense regulatory requirements, organizations operating in the financial sector have more mature models; in less regulated industries there is more ambiguity about which tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector: as the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied, making use of surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linear relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent variables (endogenous variables), (iii) the unobservable latent variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model, as sketched below. There is a detectable correlation regarding the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation: supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
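
As a rough illustration of how such a model can be specified, the following sketch uses the semopy package with lavaan-style syntax; the latent constructs, item names (sp1–sp3, rd1–rd3), and the survey file are hypothetical placeholders, not the instruments used in the study.

```python
# Hypothetical SEM specification combining a measurement model and a
# structural model; item names and the data file are placeholders.
import pandas as pd
import semopy

desc = """
# measurement model: latent variables from survey items
SupervisoryPractice =~ sp1 + sp2 + sp3
RegulatoryDensity   =~ rd1 + rd2 + rd3
# structural model: supervisory practices drive regulatory density
RegulatoryDensity ~ SupervisoryPractice
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # loadings, path coefficient, standard errors
```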

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 224
253 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper will discuss a comprehensive development framework that comprehends the SSD end to end, from design to assembly, in-line inspection and in-line testing, and that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through an intense reliability margin investigation focused on assembly process attributes, process equipment control, in-process metrology, and a forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and by containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis, and the results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, the monitor phase will be implemented, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design, process and equipment parameters are in control. Predictable product reliability at early product development will enable on-time qualification sample delivery to the customer, will optimize product development validation and effective development resources, and will avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow focus on increasing the product margin, which will increase customer confidence in product reliability.
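
For the TCT containment step, the accelerated stress is commonly related to field conditions through a Norris-Landzberg-type acceleration factor. The sketch below uses illustrative textbook exponents and activation energy, not the qualification constants of the paper.

```python
# Norris-Landzberg acceleration factor for solder joint temperature cycling;
# the exponents and activation energy are illustrative values only.
import math

K = 8.617e-5  # Boltzmann constant, eV/K

def norris_landzberg(dT_test, dT_field, f_test, f_field,
                     Tmax_test, Tmax_field, n=1.9, m=1/3, Ea=0.122):
    """AF = (dT_test/dT_field)^n * (f_field/f_test)^m
           * exp(Ea/K * (1/Tmax_field - 1/Tmax_test)), temperatures in kelvin."""
    return ((dT_test / dT_field) ** n
            * (f_field / f_test) ** m
            * math.exp(Ea / K * (1 / Tmax_field - 1 / Tmax_test)))

# Example: -40/+85 C TCT (dT = 125 K, 48 cycles/day) versus an assumed field
# swing of 40 K at 6 cycles/day, with peak temperatures of 85 C and 45 C.
af = norris_landzberg(125, 40, 48, 6, 85 + 273.15, 45 + 273.15)
print(f"acceleration factor ~ {af:.0f}")
```

An acceleration factor of this kind is what lets a few hundred TCT cycles stand in for years of field thermal cycling when the margin is assessed.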

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 174
252 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study

Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart

Abstract:

Background: Social media platforms play a significant role in the mediated consumption of sport, especially so for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) gather users in one place and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar is going to be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insights to enhance public experiences for the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021–29 Nov 2021); during FAC21, T2 (30 Nov 2021–18 Dec 2021); and after FAC21, T3 (19 Dec 2021–18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed; (2) emojis, punctuation, and stop words were removed; (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Hugging Face was applied to identify the sentiment in the tweets, as sketched below. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in the different periods; however, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21; therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people's expectations before FAC21 (e.g., stadium, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
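
A minimal sketch of the sentiment-scoring step, assuming the publicly available cardiffnlp/twitter-xlm-roberta-base-sentiment checkpoint on the Hugging Face Hub is the model referred to above:

```python
# Sentiment scoring of preprocessed tweets with a Twitter-tuned
# XLM-roBERTa-base model (labels: positive / neutral / negative).
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

tweets = [
    "Can't wait for the FIFA Arab Cup final tonight!",
    "Queues at the stadium gates were far too long.",
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {tweet}")
```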

Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning

Procedia PDF Downloads 100
251 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The 1934 monograph of Hardy, Littlewood and Pólya was the first significant work devoted to the subject; it presents fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers; dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality, recalled below, to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs proceed by introducing restrictions on the operator in several cases, and they rely on concepts from time scale calculus, which allows many problems from the theories of differential and difference equations to be unified and extended, together with the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type and the Hölder inequality.
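
For reference, the classical one-dimensional Hardy inequality that serves as the starting point can be stated as follows:

```latex
% Classical Hardy inequality, for p > 1 and measurable f >= 0:
\[
\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p}dx
\;\le\;
\left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx ,
\qquad p>1,\ f\ge 0,
\]
% with (p/(p-1))^p the sharp constant.
```

The time-scale versions replace these integrals by delta (or nabla) integrals over an arbitrary nonempty closed subset of the reals, and they recover the classical inequality when the time scale is the whole real line.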

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 95
250 The Evaluation of the Cognitive Training Program for Older Adults with Mild Cognitive Impairment: Protocol of a Randomized Controlled Study

Authors: Hui-Ling Yang, Kuei-Ru Chou

Abstract:

Background: Studies show that cognitive training can effectively delay cognitive decline. However, there are several gaps in previous studies of cognitive training in mild cognitive impairment: 1) previous studies enrolled mostly healthy older adults, with few recruiting older adults with cognitive impairment; 2) they also had limited generalizability and lacked long-term follow-up data and measurements of the impact on activities of daily living, and only 37% were randomized controlled trials (RCTs); 3) limited cognitive training has been developed specifically for mild cognitive impairment. Objective: This study seeks to investigate the changes in cognitive function, activities of daily living and degree of depressive symptoms in older adults with mild cognitive impairment after cognitive training. Methods: This double-blind randomized controlled study has a 2-arm parallel-group design. Study subjects are older adults diagnosed with mild cognitive impairment in residential care facilities. 124 subjects will be randomized, by permuted block randomization, into an intervention group (cognitive training, CT) or an active control group (passive information activities, PIA). Therapeutic adherence, sample attrition rate, medication compliance and adverse events will be monitored during the study period, and missing data will be analyzed using intent-to-treat (ITT) analysis. Training sessions of the CT group are 45 minutes/day, 3 days/week, for 12 weeks (36 sessions); the training schedule of the active control group is the same (45 min/day, 3 days/week, for 12 weeks, for a total of 36 sessions). The primary outcome is cognitive function, measured using the Mini-Mental State Examination (MMSE); the secondary outcome indicators are 1) activities of daily living, measured using Lawton's Instrumental Activities of Daily Living (IADL) scale, and 2) degree of depressive symptoms, measured using the Geriatric Depression Scale-Short Form (GDS-SF). Latent growth curve modeling will be used in the repeated-measures statistical analysis to estimate the trajectory of improvement, by examining the rate and pattern of change in cognitive function, activities of daily living and degree of depressive symptoms for intervention efficacy over time; the effects will be evaluated immediately post-test and at 3 months, 6 months and one year after the last session. Conclusions: We constructed a rigorous CT program adhering to the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines. We expect to determine the improvement in cognitive function, activities of daily living and degree of depressive symptoms of older adults with mild cognitive impairment after the CT.
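
Latent growth trajectories of this kind are often approximated with a random-intercept/random-slope mixed model. The sketch below illustrates such an analysis for the repeated MMSE scores; the long-format file and column names are hypothetical.

```python
# A sketch of trajectory analysis for repeated MMSE measurements using a
# random-intercept/random-slope model (a close cousin of latent growth
# curve modeling). Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# long-format data: one row per subject per visit
# columns: subject, group (CT/PIA), months (0, 3, 6, 12), mmse
df = pd.read_csv("mci_trial_long.csv")

model = smf.mixedlm(
    "mmse ~ months * group",   # fixed effects: time, group, time-by-group
    df,
    groups=df["subject"],      # random effects grouped by participant
    re_formula="~months",      # random intercept and random slope
)
result = model.fit()
print(result.summary())  # the months:group term tests intervention efficacy
```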

Keywords: mild cognitive impairment, cognitive training, randomized controlled study

Procedia PDF Downloads 448
249 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially in the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or at least slow down the growth of pathogens, especially spoilage, infectious or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; the release, production and artificial incorporation of bacteriocins; and changes in the pH level of the medium, with all three dimensions constantly influenced by the temperature of the medium (a sketch of such a system is given below). Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it; however, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that when a mathematical model is adapted to the context of an industrial process, it can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical model has a logistic structure of broad applicability, which can be replicated for non-meat food products, other pathogens or even contamination by cross-contact with allergenic foods.
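
The following sketch shows what a three-dimensional system of this kind can look like when integrated numerically; every functional form and constant is an illustrative assumption, not the calibrated model of the study.

```python
# Illustrative three-state model: bacteria N, bacteriocin B, and pH, with
# temperature-dependent rates. All forms and constants are assumptions.
from scipy.integrate import solve_ivp

T = 12.0  # storage temperature, deg C (abiotic control variable)

def rhs(t, y):
    N, B, pH = y
    mu = 0.045 * max(T - 4.0, 0.0)        # growth rate rises with temperature
    growth = mu * N * (1 - N / 1e9)       # logistic bacterial growth
    inhibition = 0.8 * B * N / (N + 1e6)  # bacteriocin kill term
    dN = growth - inhibition
    dB = 1e-7 * N - 0.05 * B              # release by producers minus decay
    dpH = -2e-10 * N * (pH - 4.0)         # acidification as bacteria grow
    return [dN, dB, dpH]

sol = solve_ivp(rhs, (0, 72), [1e3, 0.0, 6.5])  # 72 h from a small inoculum
N_end, B_end, pH_end = sol.y[:, -1]
print(f"after 72 h: N = {N_end:.3g} CFU, B = {B_end:.3g}, pH = {pH_end:.2f}")
```

Rerunning the integration over a grid of temperatures and bacteriocin dosing levels reproduces the kind of biotic/abiotic trade-off analysis described above.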

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 144
248 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process, together with a simple method to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction, which have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies; a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for the launching nose specifications are derived (the simplified statics behind this trade-off are sketched below). The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practice. In conclusion, the developed model and optimization approach determine optimal specifications for launching nose configurations; this research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
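
The trade-off that the optimization exploits can be seen already in the simplified statics of the launching cantilever: a longer, lighter nose reduces the bending moment at the rear support. The sketch below uses assumed weight and span values for illustration; the paper's normalized FE model is more general.

```python
# Simplified statics of the launching cantilever: bending moment at the last
# support while the deck (self-weight q) with a lighter nose (weight qn,
# length Ln) overhangs a length a before reaching the next pier.
def support_moment(a, q, Ln, qn):
    """Cantilever moment (kN*m) at the rear support for overhang a (m)."""
    deck = max(a - Ln, 0.0)            # overhanging deck length
    nose = min(Ln, a)                  # overhanging nose length
    return q * deck**2 / 2 + qn * nose * (deck + nose / 2)

q, qn = 180.0, 25.0   # assumed deck and nose weights, kN/m
span = 45.0           # assumed span, m
for Ln in (0.0, 0.6 * span, 0.8 * span):
    M = support_moment(span, q, Ln, qn)
    print(f"nose length {Ln:4.1f} m -> cantilever moment {M:9.0f} kN*m")
```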

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 102
247 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from the branches give priority to the circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exit branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance by the amount of flow circulating in front of the entrance itself, and they generally lead to a performance evaluation as well. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method in order to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each single entrance so as to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction is given to the model combination technique, with some practical instances. The following section summarizes the TRRL old rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm (sketched below), specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating with the priority-to-circle rule has already been started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos in a map viewer, namely Google Earth, and each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
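
The iteration can be pictured as a fixed point on the vector of served entry flows: each entry's capacity depends on the circulating flow produced by the other entries. The sketch below uses an HCM-style exponential capacity law and control delay formula with illustrative constants and an assumed turning pattern, not the calibrated values of the combined TRRL/HCM-7th procedure.

```python
# Sketch of the iteration coupling an exponential entry-capacity law
# (HCM-style, c = A*exp(-B*v_c)) with the circulating flows of a 4-leg
# circle, then converting v/c ratios into HCM-style control delay.
# A, B, the demand volumes, and the turning shares are assumptions.
import math

A, B, T = 1380.0, 1.02e-3, 0.25   # capacity constants (pc/h), period (h)
demand = [450, 380, 500, 420]      # entry demand per leg, pc/h
circ_share = [0.5, 0.25]           # assumed share still circulating past
                                   # the next one and two downstream legs

def control_delay(v, c):
    """HCM-style control delay (s/veh) for demand v and capacity c."""
    x = v / c
    return (3600 / c
            + 900 * T * ((x - 1) + math.sqrt((x - 1) ** 2
                                             + (3600 / c) * x / (450 * T)))
            + 5 * min(x, 1.0))

entry = demand[:]                  # initial guess: all demand is served
for _ in range(100):               # fixed-point iteration on served flows
    served = []
    for i in range(4):
        v_c = sum(s * entry[(i - k - 1) % 4]
                  for k, s in enumerate(circ_share))
        cap = A * math.exp(-B * v_c)
        served.append(min(demand[i], cap))
    if max(abs(s - e) for s, e in zip(served, entry)) < 1e-6:
        break
    entry = served

for i in range(4):
    v_c = sum(s * entry[(i - k - 1) % 4] for k, s in enumerate(circ_share))
    cap = A * math.exp(-B * v_c)
    print(f"leg {i + 1}: capacity {cap:5.0f} pc/h, "
          f"delay {control_delay(demand[i], cap):5.1f} s/veh")
```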

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 86
246 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear if different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after the interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking if different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree=1 and disagree=-1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance (a minimal version of such an update rule is sketched below). This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded on experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
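
A minimal update rule consistent with the reported observations (certainty approximately conserved, influence toward the displayed answer, noise larger than influence) might look like the following; the effect sizes are illustrative, not the fitted values of the study.

```python
# Minimal opinion-update sketch: continuous opinion in [-10, 10] is
# sign(opinion) * certainty; influence shifts it toward the shown answer,
# and random fluctuation dominates influence, as observed.
import random

INFLUENCE = 0.4   # mean shift toward shown opinion (+1 agree / -1 disagree)
NOISE_SD = 1.2    # larger than the influence term (illustrative values)

def update(continuous_opinion, shown):
    """One interaction: shown is +1 ('agree') or -1 ('disagree')."""
    new = continuous_opinion + INFLUENCE * shown + random.gauss(0, NOISE_SD)
    return max(-10.0, min(10.0, new))   # clip to the measurement scale

random.seed(1)
op = 8.0  # start: 'agree' with certainty 8
trace = [op]
for _ in range(5):
    op = update(op, shown=-1)  # repeatedly exposed to 'disagree'
    trace.append(round(op, 1))
print(trace)
```

Note that, unlike bounded-confidence-style models, an agent here does not glide through 0; it drifts stochastically while its certainty scale is preserved.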

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 109
245 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy

Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury

Abstract:

Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the excited polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons (the dispersion relations involved are sketched below). These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
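
The angle dependence exploited by the measurement comes from the cavity photon dispersion and the resulting two-branch polariton spectrum; the following sketch evaluates the standard coupled-oscillator expressions with assumed energies.

```python
# Two-level coupled-oscillator sketch of the angle-resolved polariton
# dispersion: E_c(theta) = E_c0 / sqrt(1 - (sin(theta)/n_eff)^2) and
# E_pm = (E_c + E_x)/2 +- sqrt(((E_c - E_x)/2)^2 + (Rabi/2)^2).
# All energies below are assumed values, not the samples of the study.
import numpy as np

E_x = 2.10    # exciton energy, eV (assumed)
E_c0 = 2.05   # cavity photon energy at normal incidence, eV (assumed)
n_eff = 1.5   # effective intracavity refractive index (assumed)
rabi = 0.15   # vacuum Rabi splitting, eV (assumed)

def polariton_branches(theta_deg):
    """Lower/upper polariton energies at incidence angle theta (degrees)."""
    theta = np.radians(theta_deg)
    E_c = E_c0 / np.sqrt(1 - (np.sin(theta) / n_eff) ** 2)  # cavity branch
    mean, half_det = (E_c + E_x) / 2, (E_c - E_x) / 2
    split = np.sqrt(half_det**2 + (rabi / 2) ** 2)
    return mean - split, mean + split

for angle in (0, 15, 30, 45):
    lp, up = polariton_branches(angle)
    print(f"{angle:2d} deg: LP = {lp:.3f} eV, UP = {up:.3f} eV")
```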

Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling

Procedia PDF Downloads 73