Search results for: market prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5417

647 Evaluation of Environmental Management System Implementation of Construction Projects in Turkey

Authors: Aydemir Akyürek, Osman Nuri Ağdağ

Abstract:

The construction industry has been developing rapidly for many years around the world, and especially in Turkey. Over the last three years the sector has grown by 10% and provides significant support to Turkey’s national economy. Many construction projects with substantial environmental impacts are ongoing in urban and rural areas of Turkey. Environmental impacts during the construction phase are diverse and widespread; they cannot always be inspected properly, and negative impacts occur frequently in many projects. In this study, the implementation of the ISO 14001 Environmental Management System (EMS) in construction plants is evaluated. First, quality management systems are reviewed in general and the ISO 14001 EMS is selected for implementation. The standard’s requirements are then examined, and the implementation of each requirement is elaborated for the selected construction plant. Key issues, common problems, and the benefits gained by executing this international EMS standard are examined. As the sample projects show, construction projects in Turkey are completed very fast, contractors work in a highly competitive environment with low profit ratios, and a qualified workforce is often not accessible. In addition, there are deficits in waste handling and environmental infrastructure. Construction companies that invest substantially in EMSs may face competitive difficulties in the domestic market; however, professional Turkish contractors that implement management systems at a larger scale in international projects are achieving successful results. Likewise, the concept of ‘construction project management’, which is applied in successful projects worldwide, is implemented only in larger projects in Turkey.
In the absence of a main (quality) management system, the implementation of an EMS cannot be managed. Despite all constraints, EMSs implemented in this industry with the commitment of top management and the demand of customers will be an enabling, facilitating tool to determine the environmental aspects and impacts of construction sites; they will provide higher compliance with environmental legislation, establish best available methods for operational control of waste management, chemicals management, etc., support the planning of monitoring and measurement, and help prioritize environmental aspects for investment schedules and waste management.

Keywords: environmental management system, construction projects, ISO 14001, quality

Procedia PDF Downloads 343
646 A Comparative Understanding of Critical Problems Faced by Pakistani and Indian Transportation Industry

Authors: Fawad Hussain, Saleh Abdullah Saleh, Mohammad Basir B Saud, Mohd Azwardi Md. Isa

Abstract:

It is very important for a developing nation to develop its infrastructure as a prime priority, because infrastructure, particularly roads and transportation, functions as the blood in the system. Almost 1.1 billion people share the travel and transportation industry in India. The Pakistani transportation industry is also extensive, serving about 170 million users. The Indian and Pakistani bus industries in particular provide good interconnectivity within and between urban and rural areas, as well as connectivity between the two countries, which is significantly helping the economic development of both. High economic instability, unemployment, and poverty are among the reasons why both governments are strongly committed to taking further action to boost their economies. They believe that any form of transportation development would play a vital role in the development of land and infrastructure, which could indirectly support the development of many other industries, such as tourism, freighting, and shipping, to mention a few. However, previous transportation planning has failed to meet the fast-growing demand. Over time, both countries have been looking for reasonable, safe, and economical long-term solutions that keep adapting to other key economic drivers. This paper uses the content analysis method and a case study approach, with secondary data from the bureaus of statistics used for the case analysis. The paper centers on the mobility concerns of lower- and middle-income people in India and Pakistan. It aims to highlight the weaknesses, opportunities, and limitations resulting from the low priority governments give to the industry, from which the public of either country suffers.
The paper concludes that the main issue is slow, inappropriate, and unfavorable decision making that does not serve long-term national economic development or public welfare and interest. The paper also recommends that public and private transportation become more market-sensitive in the future, as both have failed to meet public expectations.

Keywords: bus transportation industries, transportation demand, government parallel initiatives, road and traffic congestions

Procedia PDF Downloads 258
645 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative’, but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds: three different observers experiencing time differently. To bridge the observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to economize on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all simply occurs in the time available, because other observation times are slower than thalamic measurement time. For life to exist in the physical universe, a linear measurement process is required; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 111
644 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers, and construction companies, as well as to policy makers and households. A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed- and random-effects models, which form a wide range of linear models; a special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates, and several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy; in the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock, and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and compare them between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by a model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach,
indicating a good fit of the CCE estimator model. The estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of a housing market evolve over time.
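The CCE mean-group idea described above can be illustrated with a small numeric sketch. The simulated data below are hypothetical, not the study's dataset; the point is only the mechanics: each city's regression is augmented with the cross-sectional averages of the dependent and independent variables, which proxy the unobserved common factor driving the cross-sectional dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 33                    # 50 cities, 33 annual observations (cf. 1980-2012)
f = rng.normal(size=T)           # unobserved common factor (e.g. a national shock)
lam = rng.normal(1.0, 0.3, N)    # heterogeneous factor loadings per city
x = rng.normal(size=(N, T)) + 0.5 * f            # regressor also loads on the factor
y = 0.8 * x + lam[:, None] * f + 0.1 * rng.normal(size=(N, T))  # true slope = 0.8

# CCE: augment each unit's regression with the cross-sectional averages
# of y and x; these averages absorb the common factor.
ybar, xbar = y.mean(axis=0), x.mean(axis=0)
betas = []
for i in range(N):
    X = np.column_stack([np.ones(T), x[i], ybar, xbar])
    b = np.linalg.lstsq(X, y[i], rcond=None)[0]
    betas.append(b[1])           # unit-specific slope on x

beta_cce = float(np.mean(betas)) # CCE mean-group estimate of the slope
```

Without the `ybar`/`xbar` columns, the unit regressions would suffer the omitted-factor bias that makes standard OLS inconsistent under cross-sectional dependence.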

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 240
643 Identifying the Effects of the COVID-19 Pandemic on Syrian and Congolese Refugees’ Health and Economic Access in Central Pennsylvania

Authors: Mariam Shalaby, Kayla Krause, Raisha Ismail, Daniel George

Abstract:

Introduction: The Pennsylvania State College of Medicine Refugee Initiative is a student-run organization that works with eleven Syrian and Congolese refugee families. Since 2016, it has used grant funding to make weekly produce purchases at a local market, provide tutoring services, and develop trusting relationships. This case study explains how the Refugee Initiative shifted focus to face new challenges due to the COVID-19 pandemic in 2020. Methodology: When refugees who had previously attained stability found themselves unable to pay their bills, the organization shifted focus from food security to direct assistance, such as helping families apply for unemployment compensation, since many had recently lost their jobs. When refugee families additionally struggled to access hygiene supplies, funding was redirected to purchase them. Funds were also raised from the community to provide financial relief from unpaid rent and bills. Findings: Systemic challenges were encountered in navigating federal/state unemployment and social welfare systems, and there was a conspicuous absence of affordable, language-accessible assistance that could help refugees. Finally, as struggling public schools failed to maintain adequate English as a Second Language (ESL) education, the group’s tutoring services were hindered by social distancing and inconsistent access to distance-learning platforms. Conclusion: Ultimately, the pandemic highlighted that a charity-based arrangement is helpful but not sustainable, and challenges persist for refugee families. Based on the Refugee Initiative's experiences over the past year of the COVID-19 pandemic, several needs must be addressed to aid refugee families at this time, including increased access to affordable and language-accessible social services, educational resources, and simpler options for grant-based financial assistance. Interventions to increase these resources will aid refugee families in need in Central Pennsylvania and internationally.

Keywords: COVID-19, health, pandemic, refugees

Procedia PDF Downloads 109
642 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on monochloramine decomposition reactions, which enabled prediction of the monochloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to bring predictions as close as possible to measured data in a real DWDS would reduce costs as well as the consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of water quality parameters (i.e., temperature, pH, and initial monochloramine concentration) to maximize the accuracy of monochloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature, and initial monochloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS.
It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial monochloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network was minimized to 0.189, while the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 °C, and an initial monochloramine concentration of 4.60 mg/L. The proposed methodology has great potential to help water treatment plant operators accurately estimate the monochloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology to other water samples.
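The RSM step can be sketched as follows. Everything below is illustrative: the response function, factor ranges, and design are stand-ins for the measured RMSE values and Design Expert workflow the study actually used. The mechanics are standard RSM: fit a second-order polynomial to responses from a three-factor design, then minimize the fitted surface within the design region.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical "true" response: model RMSE as a function of pH,
# temperature (degC) and initial monochloramine dose (mg/L).
def rmse(ph, temp, c0):
    return (0.19 + 0.05 * (ph - 7.75) ** 2
                 + 0.001 * (temp - 34.0) ** 2
                 + 0.02 * (c0 - 3.9) ** 2)

# Full-factorial design over illustrative factor ranges
levels = {"ph": [7.0, 7.75, 8.5], "temp": [15.0, 25.0, 35.0], "c0": [2.0, 3.5, 5.0]}
X, y = [], []
for ph, t, c in product(*levels.values()):
    X.append([ph, t, c])
    y.append(rmse(ph, t, c) + rng.normal(0, 1e-5))  # small measurement noise
X, y = np.array(X), np.array(y)

# Second-order (quadratic) response-surface model with interactions
def features(M):
    ph, t, c = M.T
    return np.column_stack([np.ones(len(M)), ph, t, c,
                            ph * t, ph * c, t * c, ph ** 2, t ** 2, c ** 2])

beta = np.linalg.lstsq(features(X), y, rcond=None)[0]

# Locate the predicted optimum on a fine grid inside the design region
grid = np.array(list(product(np.linspace(7.0, 8.5, 31),
                             np.linspace(15.0, 35.0, 41),
                             np.linspace(2.0, 5.0, 31))))
pred = features(grid) @ beta
ph_opt, t_opt, c0_opt = grid[pred.argmin()]
```

Restricting the search to the design region mirrors the abstract's point about imposing high/low factor levels as constraints to avoid extrapolating the fitted surface.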

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 209
641 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery

Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats

Abstract:

Geoinformation technologies for space-based agromonitoring support operative decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum, but time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and the characteristics of agricultural land (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) has been created, and crop spectral signatures are calculated, with preliminary removal of row spacing, cloud cover, and cloud shadows, in order to construct time series of crop growth characteristics. The obtained data are used to track grain crop growth and to detect, in a timely manner, deviations of growth trends from reference samples of a given crop for a selected date. Statistical models for crop yield forecasting are created as linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are evaluated with an accuracy of up to 95%.
The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring platform (https://crop-monitoring.eos.com). The obtained results support the conclusion that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December) and successfully separates the soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
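A minimal sketch of the gap-filling idea follows, under the simplifying assumption that a cloud-insensitive radar-derived series can be calibrated to NDVI by a linear fit on clear dates; the actual technology uses more elaborate Sentinel-1 processing, and all data below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weekly NDVI for one field over a season, with cloud gaps (NaN)
t = np.arange(30)
ndvi_true = 0.2 + 0.6 * np.exp(-0.5 * ((t - 15) / 6.0) ** 2)  # green-up curve
ndvi = ndvi_true + rng.normal(0, 0.01, t.size)
cloudy = np.array([4, 5, 11, 12, 13, 22])
ndvi[cloudy] = np.nan                      # observations lost to cloud cover

# Radar backscatter (Sentinel-1-like) is unaffected by clouds and correlated
# with canopy development; simulated here as a noisy linear proxy of NDVI.
radar = 2.0 * ndvi_true - 0.3 + rng.normal(0, 0.02, t.size)

# Calibrate radar -> NDVI on the clear dates, then fill the cloudy gaps
clear = ~np.isnan(ndvi)
a, b = np.polyfit(radar[clear], ndvi[clear], 1)
filled = ndvi.copy()
filled[~clear] = a * radar[~clear] + b
```

The filled series is then usable for uninterrupted growth-curve tracking and deviation detection of the kind described above.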

Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform

Procedia PDF Downloads 429
640 Fake Importers Behavior in the Algerian City – The Case of the City of Eulma

Authors: Mohamed Gherbi

Abstract:

Informal trade has invaded Algerian cities, especially their peripheries: about 1,368 informal markets were registered during 2013, the most important of which are known as 'Doubaï markets'. They appeared after the adoption of the new market economy system in 1990, which permitted the intervention of new actors: importers, but also fake ones. The majority were former 'trabendistes' who chose to settle and invest in the big and small cities of central and eastern Algeria, mainly Algiers, El Eulma, Aïn El Fekroun, Tadjnenent, and Aïn M’lila. This study focuses on the city of El Eulma, which hosts more than 1,000 importers (most of them fake). They have changed the image and architecture of some important streets of the city without respecting the rules of urbanism, such as those included in the building permit. The case of the 'Doubaï' place in El Eulma illustrates this situation: the area is not covered by a Soil Occupation Plan (responsible for the design of urban spaces), even though such a plan covers other zones surrounding it. These importers, helped by the wholesale and retail traders installed in the 'Doubaï' place, have converted spaces inside and outside residential buildings into depots and sales outlets, and have squatted on the sidewalks to expose their goods, imported predominantly from South-East Asian countries. The resulting scenery partly resembles the bazaars of the Middle East and of Chinese cities like Yiwu; these signs characterize the local ambiance and give this part of the city its particularity. A tide of customers from different cities, and from outside Algeria, comes daily to visit the district. The surrounding zones have undergone the same change, following the model of the 'Doubaï' place.
Consequently, this dynamic has ended up stifling an important part of the city, and the prices of land and real estate have reached exorbitant values, comparable to prices charged in Paris, due to rampant speculation that has reached alarming dimensions. Similarly, the renting of commercial premises has not escaped this logic. This paper explains the reasons for this change and the logic of the importers as revealed by their acts in different spaces of the city.

Keywords: Doubaï place, design of urban spaces, fake importers, informal trade

Procedia PDF Downloads 399
639 Using Motives of Sports Consumption to Explain Team Identity: A Comparison between Football Fans across the Pond

Authors: G. Scremin, I. Y. Suh, S. Doukas

Abstract:

Spectators follow their favorite sports teams for different reasons. While some attend a sporting event simply for its entertainment value, others do so because of the personal sense of achievement and accomplishment their connection with a sports team creates. Moreover, the level of identity spectators feel toward their favorite sports team falls along a broad continuum. Some are mere spectators, for whom the association with a sports team has little impact on their self-image. Others are die-hard fans who are proud of their association with their team and for whom that connection is an important reflection of who they are. Several motives for sports consumption can be used to explain the level of spectator support in a variety of sports; those motives can also explain the variance in the identification, attachment, and loyalty spectators feel toward their favorite sports team. In this study, motives for sports consumption were used to discriminate the level of identity spectators feel toward their sports team. It was hypothesized that spectators with a strong level of team identity would report higher rates of interest in player, interest in sports, and interest in team than spectators with a low level of team identity, while spectators with a low level of team identity would report higher rates for entertainment value, bonding with friends or family, and wholesome environment. Football spectators in the United States and England were surveyed about their motives for football consumption and their level of identification with their favorite football team. To assess whether the motives of sports fans differed by level of team identity and by allegiance to an American or English football team, a Multivariate Analysis of Variance (MANOVA) was performed under the General Linear Model (GLM) procedure in SPSS.
The independent variables were the level of team identity and allegiance to an American or English football team, and the dependent variables were the sport fan motives. A tripartite split (low, moderate, high) was applied to a composite measure of team identity. Preliminary results show that the effect of team identity is statistically significant (p < .001) for at least nine of the 17 motives for sports consumption assessed in this investigation. These results indicate that the motives of spectators with a strong level of team identity differ significantly from those of spectators with a low level of team identity, and those differences can be used to discriminate the degree of identification spectators have with their favorite sports team. Sports marketers can use these methods and results to develop identity profiles of spectators and create marketing strategies specifically designed to attract those spectators based on their unique motives for consumption and their level of team identification.
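The tripartite split on the composite identity measure can be sketched as follows. The Likert-style data are simulated and the comparison is reduced to group means; the study itself used the SPSS GLM/MANOVA procedure rather than this simplified illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
# Hypothetical composite team-identity score and one motive item (1-7 scale)
identity = rng.normal(4.0, 1.2, n)
interest_in_team = np.clip(
    3.5 + 0.6 * (identity - 4.0) + rng.normal(0, 0.8, n), 1, 7)

# Tripartite split: bottom, middle, and top thirds of the identity score
lo_cut, hi_cut = np.quantile(identity, [1 / 3, 2 / 3])
group = np.where(identity <= lo_cut, 0,
                 np.where(identity <= hi_cut, 1, 2))  # 0=low, 1=moderate, 2=high

# Mean motive rating per identity group
means = [interest_in_team[group == g].mean() for g in (0, 1, 2)]
```

A monotone increase in `means` across the three groups corresponds to the hypothesized pattern for identity-linked motives such as interest in team.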

Keywords: fan identification, market segmentation of sports fans, motives for sports consumption, team identity

Procedia PDF Downloads 153
638 The Effect of Affirmative Action in Private Schools on Education Expenditure in India: A Quasi-Experimental Approach

Authors: Athira Vinod

Abstract:

Under the Right to Education Act (2009), the Indian government introduced an affirmative action policy aimed at reserving seats in private schools at the entry level and providing free primary education for children from lower socio-economic backgrounds. Using exogenous variation in the status of being in a lower social category (disadvantaged groups) and the year of starting school, this study investigates the effect of exposure to the policy on the expenditure on private education. It employs a difference-in-difference strategy with the help of repeated cross-sectional household data from the National Sample Survey (NSS) of India. It also exploits regional variation in exposure by combining the household data with administrative data on schools from the District Information System for Education (DISE). The study compares the outcome across two age cohorts of disadvantaged groups starting school at different times, that is, before and after the policy. Regional variation in exposure is proxied with a measure of enrolment rate under the policy, calculated at the district level. The study finds that exposure to the policy led to an average reduction in annual private school fees of ₹223. Similarly, a 5% increase in the rate of enrolment under the policy in a district was associated with a reduction in annual private school fees of ₹240. Furthermore, there was a larger effect of the policy among households with a higher demand for private education. However, the effect is not due to fees waived through direct enrolment under the policy but rather to an increase in the supply of low-fee private schools in India. The study finds that after the policy, 79,870 more private schools entered the market due to an increased demand for private education. The new schools, on average, charged a lower fee than existing schools and had a higher enrolment of children exposed to the policy.
Additionally, the district-level variation in the enrolment under the policy was very strongly correlated with the entry of new schools, which not only charged a low fee but also had a higher enrolment under the policy. Results suggest that few disadvantaged children were admitted directly under the policy, but many were attending private schools, which were largely low-fee. This implies that disadvantaged households were willing to pay a lower fee to secure a place in a private school even if they did not receive a free place under the policy.
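The difference-in-difference logic can be illustrated numerically. The data below are simulated, with an assumed effect size matching the reported ₹223 reduction; the study's actual estimation used NSS/DISE data with additional controls.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
disadvantaged = rng.integers(0, 2, n)   # 1 = lower social category (treated group)
post = rng.integers(0, 2, n)            # 1 = started school after the policy
# Hypothetical annual private-school fees (Rs): the policy lowers fees by ~223
# only for disadvantaged children who start school after the policy.
fees = (6000 - 200 * disadvantaged - 150 * post
        - 223 * disadvantaged * post + rng.normal(0, 300, n))

# Difference-in-differences via OLS with an interaction term:
# the coefficient on (disadvantaged x post) is the exposure effect.
X = np.column_stack([np.ones(n), disadvantaged, post, disadvantaged * post])
beta = np.linalg.lstsq(X, fees, rcond=None)[0]
did_estimate = float(beta[3])
```

The interaction coefficient recovers the treatment effect because the group and period main effects absorb the level differences between cohorts and between social categories.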

Keywords: affirmative action, disadvantaged groups, private schools, right to education act, school fees

Procedia PDF Downloads 98
637 The Rational Mode of Affordable Housing Based on the Special Residence Space Form of City Village in Xiamen

Authors: Pingrong Liao

Abstract:

Currently, as China is in a stage of rapid urbanization, a large rural population has flowed into the cities, and it is urgent to solve the housing problem. Xiamen is a typical Chinese city characterized by high housing prices and low incomes. Because the government has failed to provide adequate public cheap housing, a large number of immigrants dwell in the informal rental housing represented by the 'city village'. Comfortable housing is a prerequisite for the harmony and stability of the city. Therefore, with the 'city village' and affordable housing as the main objects of study, this paper analyzes the housing status, personnel distribution, and mobility of the 'city villages' of Xiamen and carries out preliminary research on residential form and on basic facilities such as commerce and property management services, in combination with the existing status of affordable housing in Xiamen; a summary and comparison are then made in an attempt to provide references and experience for the construction and improvement of government-subsidized housing, so as to improve the residential quality of the urban poor. In this paper, the data and results are collated and quantified objectively, based on the relevant literature, the latest market data, and practical investigation, using the research methods of comparative study and case analysis. The informal rental housing, informal economy, and informal management of the 'city village' as social-housing units fit the housing needs of the floating population in many ways, providing convenient and efficient conditions for the flow of people.
However, the existing subsidized housing in Xiamen has some drawbacks: the housing is unevenly distributed, the spatial form is uniform, the allocation standard of public service facilities is not targeted to the subsidized population, the property management system is imperfect, and the cost is too high. Therefore, this paper draws lessons from the informal model of the 'city village' and finally puts forward some improvement strategies.

Keywords: urban problem, urban village, affordable housing, living mode, Xiamen constructing

Procedia PDF Downloads 230
636 Microbiota Associated With the Larval Culture of Red Cusk Eel Genypterus chilensis in Chile

Authors: Luz Hurtado, Rodrigo Rojas, Jaime Romero, Christopher Concha

Abstract:

The culture of the marine fish red cusk eel Genypterus chilensis is currently considered a priority for Chilean aquaculture which is a Chilean native species of high gastronomic demand and market value. The microbiota was analyzed in terms of diversity and structure using massive Illumina sequencing. The analysis of alpha diversity was performed in samples of G. chilensis larvae of 6, 18 and 32 dph (days post-hatching) and it was observed that there were significant differences (P = 0.05) between the days of culture for the Chao1 index, being the larvae of 18 dph the one with the highest index followed by the larvae of 6 dph, The lowest value for this index was presented in larvae of 32 dph. There were no significant differences in larvae between the days of culture for the Shannon (P=0.0857) and Simpson (P=0.0714) indices. In general, the larvae of G. chilensis have high rates of diversity. When analyzing the beta diversity, a differentiation between the bacterial communities is observed depending on the day of the culture of the larvae. Considering the PCoA elaborated from the unweighted UniFrac statistic, the explained variance was 46.2% (PC1 29.2% and PC2 17.0%) and in the case of the PCoA elaborated with the weighted UniFrac statistic; the explained variance was 65.5% (PC1 41.8% and PC2 23.7%) these differences were significant based on the Permanova statistical analysis (P= 0.002 and 0.037 respectively). 
The taxonomic composition of the larval microbiota differed across culture days. At the phylum level, the most abundant taxa at 6 dph were Proteobacteria (57%), Verrucomicrobia (24%) and Firmicutes (14%); at 18 dph, the predominant phyla were Proteobacteria (90%), Dependentiae (5%), Actinobacteria (2%) and Planctomycetes (2%); at 32 dph, the phyla with the highest relative abundance were Proteobacteria (57%), Firmicutes (29%), Verrucomicrobia (5%) and Actinobacteria (5%). Proteobacteria was thus the most abundant phylum at all three time points, with its highest relative abundance at 18 dph, while Verrucomicrobia peaked at 6 dph and Firmicutes at 32 dph. At the genus level, the most abundant genera at 6 dph were Rubritalea (30%), Psychrobacter (28%), Staphylococcus (17%) and Ralstonia (10%); at 18 dph, Psychrobacter (47%), Litoreibacter (13%), Nautella (9%) and Cohesibacter (8%); and at 32 dph, Alloiococcus (25%), Dialister (14%), Neptunomonas (13%) and Piscirickettsia (11%). Overall, the taxonomic composition of the larval microbiota clearly differed between culture days.
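The alpha-diversity indices reported above (Chao1, Shannon, Simpson) can be computed directly from abundance counts. A minimal Python sketch, using hypothetical OTU abundances rather than the study's sequencing data:

```python
import math

def shannon(counts):
    """Shannon diversity: H' = -sum(p_i * ln p_i) over non-zero taxa."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity: 1 - sum(p_i^2)."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), with F1 singletons, F2 doubletons."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2  # bias-corrected form
    return s_obs + f1 ** 2 / (2 * f2)

# hypothetical abundance table (7 taxa) for one larval sample
abundances = [30, 28, 17, 10, 2, 1, 1]
```

A higher Chao1 with similar Shannon and Simpson values, as observed at 18 dph, points to more rare taxa rather than a more even community.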

Keywords: microbiota, diversity, G. chilensis, larvae

Procedia PDF Downloads 59
635 A Study on the Relationship Between Adult Videogaming and Wellbeing, Health, and Labor Supply

Authors: William Marquis, Fang Dong

Abstract:

There has been growing concern in recent years over the economic and social effects of adult video gaming. It has been estimated that close to three billion people played video games during the COVID-19 pandemic, and there is evidence that this form of entertainment is here to stay. Many people are concerned that this growing use of time could crowd out time spent on alternative forms of entertainment with family and friends, on sports, and on other social activities that build community. For example, recent studies of children suggest that playing video games crowds out time that could be spent on homework, watching TV, or other social activities. Similar studies of adults have shown that video gaming is negatively associated with earnings, time spent at work, and socializing with others. The primary objective of this paper is to examine how the time adults spend on video gaming could displace time they could spend working and on activities that enhance their health and well-being. We use data from the American Time Use Survey (ATUS), maintained by the Bureau of Labor Statistics, to analyze the effects of time-use decisions on three measures of well-being. We pool the ATUS Well-being Module for multiple years (2010, 2012, 2013, and 2021), along with the ATUS Activity and Who files for those years. This pooled data set provides three broad measures of well-being: health, life satisfaction, and emotional well-being. Seven variants of each are used as dependent variables in different multivariate regressions. We add to the existing literature in the following ways. First, we investigate whether the time adults spend in video gaming crowds out time spent working or in social activities that promote health and life satisfaction.
Second, we investigate the relationship between adult gaming and their emotional well-being, also known as negative or positive affect, a factor that is related to depression, health, and labor market productivity. The results of this study suggest that the time adult gamers spend on video gaming has no effect on their supply of labor, a negligible effect on their time spent socializing and studying, and mixed effects on their emotional well-being, such as increasing feelings of pain and reducing feelings of happiness and stress.
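The pooled regressions described above cannot be reproduced without the ATUS files themselves, but the basic setup can be sketched with synthetic stand-in data and ordinary least squares. All variable names and coefficients below are illustrative assumptions, not ATUS estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# synthetic stand-ins for ATUS time-use and well-being variables
gaming_hours = rng.exponential(1.0, n)          # daily hours of video gaming
age = rng.uniform(18, 65, n)                    # a simple demographic control
wellbeing = 5.0 - 0.1 * gaming_hours + 0.01 * age + rng.normal(0, 1, n)

# multivariate regression: intercept + gaming time + control
X = np.column_stack([np.ones(n), gaming_hours, age])
beta, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
gaming_coef = beta[1]   # estimated association of gaming time with well-being
```

In the actual study, each of the seven variants of the three well-being measures would take the place of the dependent variable here.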

Keywords: online gaming, health, social capital, emotional wellbeing

Procedia PDF Downloads 26
634 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule

Authors: Yuanxiaoyue Yang

Abstract:

Emissions trading, or 'cap-and-trade', has long been promoted by economists as a more cost-effective pollution control approach than traditional performance-standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to its unintended consequences. One important consequence is that cap-and-trade could create high emission concentrations in areas where emitting facilities purchase large numbers of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the environmental policy literature by linking trading activity with environmental justice concerns and, for the first time, empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program. To investigate the potential environmental injustice in cap-and-trade, this paper uses a difference-in-differences (DID) design with an instrumental variable to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from the US EPA air market databases. While polluting facilities regulated by CAIR form the treatment group in our DID identification, we use non-CAIR facilities from the Acid Rain Program - another NOx control program without a trading scheme - as the control group. To isolate the causal effect of trading behaviors on emissions reduction, we use eligibility for CAIR participation as the instrumental variable.
The DID results indicate that the CAIR program reduced NOx emissions from affected facilities by about 10% more than at facilities that did not participate in the program; CAIR therefore achieved strong overall performance in emissions reduction. The IV regression results further indicate that, compared with non-CAIR facilities, a CAIR-participating facility that purchases emission permits still significantly decreases its emissions level. This implies that even buyers under the cap-and-trade program achieved substantial emissions reductions. We therefore find little evidence of environmental injustice arising from the CAIR program.
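The 2x2 difference-in-differences logic behind this identification can be sketched on simulated facility-level emissions. The numbers below are illustrative assumptions, not the EPA data, and the instrumental-variable step is omitted:

```python
import numpy as np

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 DID: (change in treated group) minus (change in control group)."""
    return (np.mean(treat_post) - np.mean(treat_pre)) - (
        np.mean(ctrl_post) - np.mean(ctrl_pre)
    )

rng = np.random.default_rng(1)
true_effect = -0.10   # assumed ~10% extra reduction for CAIR facilities

# simulated log NOx emissions with a common downward trend of -0.2
ctrl_pre = rng.normal(5.0, 0.2, 500)
ctrl_post = rng.normal(4.8, 0.2, 500)
treat_pre = rng.normal(5.1, 0.2, 500)
treat_post = rng.normal(5.1 - 0.2 + true_effect, 0.2, 500)

est = did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post)
```

The common trend is differenced away, so the estimate recovers only the program effect.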

Keywords: air pollution, cap-and-trade, emissions trading, environmental justice

Procedia PDF Downloads 128
633 Development of Antioxidant Rich Bakery Products by Applying Lysine and Maillard Reaction Products

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

Due to the rapidly growing number of conscious customers in recent years, more and more people look for products with positive physiological effects that may contribute to the preservation of their health. In response to these demands, the Food Science Research Institute of Budapest develops and introduces into the market new functional foods of guaranteed positive effect that contain bioactive agents. New, efficient technologies are also elaborated in order to preserve the maximum biological effect of the produced foods. The main objective of our work was the development of new functional biscuits fortified with physiologically beneficial ingredients. Bakery products constitute the base of the food nutrients' pyramid and thus might be regarded as the foodstuffs consumed in the largest quantity. In addition to the well-known and certified physiological benefits of lysine as an essential amino acid, a series of antioxidant-type compounds is formed as a consequence of the Maillard reaction it undergoes. Progress of the Maillard reaction was studied by applying diverse sugars (glucose, fructose, saccharose, isosugar) and lysine at several temperatures (120-170°C). The interval of thermal treatment was also varied (10-30 min). The composition and production technologies were tailored to reach the maximum possible biological benefit, that is, the highest antioxidant capacity, in the biscuits. Of the examined sugar components, the extent of the Maillard-reaction-driven transformation of glucose was the most pronounced at the applied temperatures. For the precise assessment of the antioxidant activity of the products, FRAP and DPPH methods were adapted and optimised. To establish an authentic and comprehensive mechanism of the occurring transformations, Maillard reaction products were identified and the relevant reaction pathways were revealed.
GC-MS and HPLC-MS techniques were applied for the analysis of the 60 generated MRPs and the characterisation of the actual transformation processes. Three plausible major transformation routes are suggested, based on the analytical results and the deduced sequence of possible conversions between lysine and the sugars.

Keywords: Maillard-reaction, lysine, antioxidant activity, GC-MS and HPLC-MS techniques

Procedia PDF Downloads 466
632 Photochemical Behaviour of Carbamazepine in Natural Waters

Authors: Fanny Desbiolles, Laure Malleret, Isabelle Laffont-Schwob, Christophe Tiliacos, Anne Piram, Mohamed Sarakha, Pascal Wong-Wah-Chung

Abstract:

Pharmaceuticals in the environment have become a very hot topic in recent years. This interest is related to the large amounts dispensed and to their release in urine or faeces from treated patients, resulting in their ubiquitous presence in water resources and wastewater treatment plant (WWTP) effluents. Many studies have therefore focused on predicting the behaviour of pharmaceuticals in order to assess their fate and impacts in the environment. Carbamazepine is a widely consumed psychotropic pharmaceutical and thus one of the most commonly detected drugs in the environment. This organic pollutant has proved persistent, especially with respect to its non-biodegradability, rendering it recalcitrant to usual biological treatment processes. Consequently, carbamazepine is removed only marginally in WWTPs, with a maximum abatement rate of 5%, and is often released into natural surface waters. To better assess the environmental fate of carbamazepine in aqueous media, its photochemical transformation was studied in four natural waters (two French rivers, the Berre salt lagoon, and Mediterranean Sea water) representative of coastal and inland water types. Kinetic experiments were performed under simulated solar irradiation (300 W Xe lamp). The formation of short-lived species was highlighted using chemical traps and nanosecond laser flash photolysis. Transformation by-products were identified by LC-QToF-MS analyses. Carbamazepine degradation was observed after a four-day exposure, with a maximum abatement of 20%, yielding many by-products. Moreover, the formation of hydroxyl radicals (•OH) was evidenced in the waters using terephthalic acid as a probe, taking into account the photochemical instability of its specific hydroxylated derivative. Correlations were established between the carbamazepine degradation rate, the estimated hydroxyl radical formation, and the chemical contents of the waters.
In addition, laser flash photolysis studies confirmed •OH formation and allowed us to evidence other reactive species in the natural waters, such as dichloride (Cl2•−), dibromide (Br2•−) and carbonate (CO3•−) radical anions. These radicals mainly originate from the dissolved phase, and their occurrence and abundance depend on the type of water. Rate constants between the reactive species and carbamazepine were determined by laser flash photolysis and competitive reaction experiments. Moreover, LC-QToF-MS analyses of the by-products helped us propose mechanistic pathways. The results bring insights into the fate of carbamazepine in various water types and could help to evaluate potential ecotoxicological effects more precisely.
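A degradation time course of this kind is commonly fitted to a pseudo-first-order model, ln(C/C0) = -kt. The sketch below fits an apparent rate constant by least squares through the origin, using hypothetical C/C0 values consistent with a roughly 20% abatement over four days (not the paper's measurements):

```python
import math

times = [0, 1, 2, 3, 4]                   # irradiation time (days)
conc = [1.00, 0.95, 0.90, 0.86, 0.80]     # hypothetical C/C0 values

# least-squares slope through the origin for -ln(C/C0) = k * t
num = sum(t * (-math.log(c)) for t, c in zip(times, conc))
den = sum(t * t for t in times)
k = num / den                              # apparent rate constant (1/day)
half_life = math.log(2) / k                # time for 50% abatement
```

Comparing apparent rate constants fitted this way across the four water types is one way to correlate degradation with the waters' chemical contents.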

Keywords: carbamazepine, kinetic and mechanistic approaches, natural waters, photodegradation

Procedia PDF Downloads 363
631 Consumption of Fat Burners Leads to Acute Liver Failure: A Systematic Review Protocol

Authors: Anjana Aggarwal, Sheilja Walia

Abstract:

The prevalence of obesity and overweight is increasing due to sedentary lifestyles and the busy schedules of people who spend little time on physical exercise. To reduce weight, people look for easier and more convenient solutions; the easiest is the use of dietary supplements and fat burners, products claimed to decrease body weight by increasing the basal metabolic rate. Various reports have linked the consumption of fat burners to heart palpitations, seizures, anxiety, depression, psychosis, bradycardia, insomnia, muscle contractions, hepatotoxicity, and even liver failure. Case reports and case series indicate that ingredients present in fat burners have caused acute liver failure (ALF) and hepatic toxicity in many cases. Another contributing factor is the absence of Food and Drug Administration regulation of these products, leading to increased consumption and a higher risk of liver disease in the population. This systematic review aims to attain a better understanding of the dietary supplements used globally to reduce weight and to document the case reports and series of acute liver failure caused by the consumption of fat burners. Electronic databases such as PubMed, Cochrane, and Google Scholar will be systematically searched for relevant articles. Websites of dietary products and brands that sell such supplements, journals of hepatology, and national and international projects on ALF and their reports, along with grey literature, will also be reviewed to gain a better understanding of the topic. Selection and screening of the articles will be performed by the author after discussion with the co-author. Studies will be selected based on predefined inclusion and exclusion criteria, and the case reports and case series included in the final list will be assessed for methodological quality using the CARE guidelines.
The results of this study will provide insights into and a better understanding of fat burners. Since these supplements are readily available in the market without restrictions on their sale, people are often unaware of their adverse effects, which may include acute liver failure. This review will thus provide a platform for future, larger studies.

Keywords: acute liver failure, dietary supplements, fat burners, weight loss supplements

Procedia PDF Downloads 65
630 Development of the Integrated Quality Management System of Cooked Sausage Products

Authors: Liubov Lutsyshyn, Yaroslava Zhukova

Abstract:

Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage and sale of food products, and without end-to-end traceability and information exchange. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability and the system approach, including an algorithm for identifying and monitoring the parameters of the technological process of manufacturing cooked sausage products. A methodology was developed for implementing the integrated system during the manufacture of cooked sausage products so as to effectively ensure the defined properties of the finished product. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the HACCP-based quality management and safety assurance system were developed and substantiated. The paper reveals regularities in the influence of applying HACCP principles, traceability and the system approach on the quality and safety parameters of the finished product, as well as regularities in the identification of critical control points.
The algorithm of functioning of the integrated quality management and safety assurance system is also described, and key requirements are defined for software that allows prediction of finished-product properties, timely correction of the technological process, and traceability of manufacturing flows. Based on these results, a typical scheme of the integrated HACCP-based quality management and safety assurance system, with elements of end-to-end traceability and the system approach, was developed for the manufacture of cooked sausage products. Quantitative criteria for evaluating the performance of the system were developed, together with a set of guidance documents for implementing and evaluating the integrated system at meat processing plants. The studies demonstrated the effectiveness of continuous monitoring of the manufacturing process at the identified critical control points, and the optimal number of critical control points for the manufacture of cooked sausage products was substantiated. The main results of the research were validated during 2013-2014 at seven enterprises of the meat processing industry and implemented at JSC «Kyiv meat processing plant».
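At its core, the continuous monitoring of critical control points described above reduces to checking measured process parameters against critical limits. A minimal sketch, with hypothetical CCPs and limit values rather than the validated limits from the study:

```python
# hypothetical critical limits for two CCPs in cooked-sausage manufacture:
# (minimum, maximum); None means the bound does not apply
CCP_LIMITS = {
    "core_cooking_temp_C": (72.0, None),   # assumed minimum core temperature
    "chilling_temp_C": (None, 4.0),        # assumed maximum chilling temperature
}

def check_ccp(name, value):
    """Return True if a measured value lies within the critical limits."""
    low, high = CCP_LIMITS[name]
    if low is not None and value < low:
        return False
    if high is not None and value > high:
        return False
    return True

def monitor(batch_readings):
    """Return the readings that breach a limit and need corrective action."""
    return [(name, v) for name, v in batch_readings if not check_ccp(name, v)]
```

In a full traceability system, each flagged reading would be logged against the batch and manufacturing flow that produced it.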

Keywords: cooked sausage products, HACCP, quality management, safety assurance

Procedia PDF Downloads 234
629 Explaining Irregularity in Music by Entropy and Information Content

Authors: Lorena Mihelac, Janez Povh

Abstract:

In 2017, we conducted a study using 160 musical excerpts from different musical styles to analyze the impact of the entropy of the harmony on the acceptability of music. In measuring the entropy of harmony, we considered unigrams (individual chords in the harmonic progression) and bigrams (pairs of adjacent chords). We found that 53 of the 160 excerpts were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was low. We explained this by particularities of the chord progression that affect the listener's feeling of complexity and acceptability. We re-evaluated the same data with new participants in 2018 and with the same participants a third time in 2019. These three evaluations showed that the same 53 excerpts, found difficult and complex in the 2017 study, again produced a strong feeling of complexity. We proposed that the content of these excerpts, defined as "irregular," does not meet the listener's expectancy or basic perceptual principles, creating a greater feeling of difficulty and complexity. As the irregularities in these 53 excerpts appear to be perceived without listeners being aware of them, affecting pleasantness and perceived complexity, they were termed "subliminal irregularities" and the 53 excerpts "irregular." In our most recent study (2019) of the same data, we proposed a new measure of harmonic complexity, "regularity," based on the irregularities in the harmonic progression and other plausible particularities of the musical structure found in previous studies. We also proposed a list of 10 particularities assumed to affect participants' perception of complexity in harmony.
These ten particularities are tested in this paper by extending the analysis of the 53 irregular excerpts from harmony to melody. For the melody, we used the computational model Information Dynamics of Music (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. The melodic features of these excerpts were described using four viewpoints: pitch, interval, duration, and scale degree. The results show that both the texture of the melody (e.g., multiple voices, homorhythmic structure) and its structure (e.g., large interval leaps, syncopated rhythm, implied harmony in compound melodies) affect participants' perception of complexity. High information content values were found in compound melodies, in which implied harmonies appear to have suggested additional harmonies, affecting participants' perception of the chord progression by creating a sense of an ambiguous musical structure.
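The two measures can be illustrated with a simple unigram model of a chord sequence. IDyOM itself uses richer variable-order models over multiple viewpoints; this is only a sketch with hypothetical chord symbols:

```python
import math
from collections import Counter

def entropy_bits(seq):
    """Shannon entropy (bits) of the unigram distribution of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_content(seq):
    """Unexpectedness of each event under the unigram model: -log2 p(event)."""
    counts = Counter(seq)
    n = len(seq)
    return [-math.log2(counts[e] / n) for e in seq]

chords = ["I", "IV", "V", "I", "vi", "IV", "V", "I"]
bigrams = list(zip(chords, chords[1:]))   # adjacent chord pairs
h_unigram = entropy_bits(chords)
h_bigram = entropy_bits(bigrams)
```

Low entropy combined with high perceived complexity, as found for the 53 irregular excerpts, indicates that the unexpectedness lies in features such a chord-level model does not capture.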

Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM

Procedia PDF Downloads 116
628 Isolation and Identification of Salmonella spp and Salmonella enteritidis, from Distributed Chicken Samples in the Tehran Province using Culture and PCR Techniques

Authors: Seyedeh Banafsheh Bagheri Marzouni, Sona Rostampour Yasouri

Abstract:

Salmonella is one of the most important pathogens common to humans and animals worldwide. Globally, the prevalence of the disease in humans is due to the consumption of food contaminated with animal-derived Salmonella, including eggs, red meat, chicken, and milk. Contamination of chicken and its products with Salmonella may occur at any stage of the chicken processing chain. Salmonella infection is usually not fatal, but it is dangerous in some groups, such as infants, children, the elderly, pregnant women, and individuals with weakened immune systems. If Salmonella enters the bloodstream, tissues throughout the body may become contaminated. Determining the potential risk of Salmonella at various stages is therefore essential from the perspective of consumers and public health. The aim of this study is to isolate and identify Salmonella from chicken samples distributed in the Tehran market using the gold-standard culture method and PCR techniques targeting the specific genes invA and ent. During 2022-2023, a total of 120 samples were taken under aseptic conditions using swabs from the liver and intestinal contents of chickens distributed in Tehran province. The samples were pre-enriched overnight in buffered peptone water (BPW) and then incubated in selective enrichment media, TT broth at 37°C and RVS medium at 42°C, for 18 to 24 hours. Cultures producing turbidity in the liquid media were streaked onto selective media (XLD and BGA) and incubated overnight at 37°C for isolation. Suspected Salmonella colonies were selected for DNA extraction, and PCR was performed using specific primers targeting the invA and ent genes of Salmonella. The results indicated that 94 samples were positive for Salmonella by PCR.
Of these, 71 samples were positive for the invA gene and 23 for the ent gene. Although culture remains the gold-standard technique, PCR is faster and more accurate. Rapid detection by PCR enables the identification of Salmonella contamination in food items and the implementation of the measures needed for disease control and prevention.

Keywords: culture, PCR, Salmonella spp., Salmonella Enteritidis

Procedia PDF Downloads 48
627 Ecology, Value-Form and Metabolic Rift: Conceptualizing the Environmental History of the Amazon in the Capitalist World-System (19th-20th centuries)

Authors: Santiago Silva de Andrade

Abstract:

In recent decades, Marx's ecological theory of the value-form and the theory of metabolic rift have represented fundamental methodological innovations for social scientists interested in environmental transformations and their relationships with the development of the capital system. However, among Latin American environmental historians, such theoretical and methodological instruments have been used infrequently and very cautiously. This investigation aims to demonstrate how the concepts of metabolic rift and ecological value-form are important for understanding the environmental, economic and social transformations in the Amazon region between the second half of the 19th century and the end of the 20th century. Such transformations manifested themselves mainly in two dimensions: the first concerns the link between the manufacture of tropical substances for export and scientific developments in the fields of botany, chemistry and agriculture. This link was constituted as a set of social, intellectual and economic relations that condition each other, configuring an asymmetrical field of exchanges and connections between the demands of the industrialized world - personified in scientists, naturalists, businesspeople and bureaucrats - and the agencies of local social actors, such as indigenous people, riverside dwellers and quilombolas; the second dimension concerns the imperative link between the historical development of the capitalist world-system and the restructuring of the natural world, its landscapes, biomes and social relations, notably in peripheral colonial areas. The environmental effects of capitalist globalization were not only seen in the degradation of exploited environments, although this has been, until today, its most immediate and noticeable aspect. 
There was also, in territories subject to the logic of market accumulation, a reformulation of patterns of authority and institutional architectures, such as property systems, political jurisdictions, rights and social contracts, as a result of the expansion of commodity frontiers between the 16th and 21st centuries. This entire set of transformations produced impacts on the ecological landscape of the Amazon, demonstrating the need to investigate the histories of local configurations of power, spatial and ecological, with their institutions and social actors, and their role in structuring the capitalist world-system, under the lens of the ecological theory of value-form and metabolic rift.

Keywords: Amazon, ecology, value-form, metabolic rift

Procedia PDF Downloads 46
626 Online Augmented Reality Mathematics Application

Authors: Farhaz Amyn Rajabali, Collins Odour

Abstract:

Mathematics has existed for over 4,000 years and was one of the first topics of study explored by human civilization. Over the years, it has become a complex discipline and has given rise to many other subjects. With advancements in ICT, most mathematical computation is now done using powerful computers. In many countries, children in primary and secondary schools face difficulties in learning mathematics, for several reasons; one is that students do not engage deeply with mathematical concepts and hence fail to understand them thoroughly. The objective of this system is to help students understand mathematical concepts interactively, which in turn will encourage a love for learning and a more thorough understanding of many concepts. Research was conducted among a group of respondents, about 50% of whom replied that they had never used an augmented reality application before. This suggests that the system's chances of being accepted in the market are high, given its innovative idea. Around 60% of respondents recommended the use of this system to learn mathematics. The study also identified several challenges in the educational system, including, but not limited to, a lack of resources (chosen by 30% of respondents), the difficulty of reading from textbooks (34.6%), and the difficulty of visualizing concepts (46.2%). When asked what benefits they saw in using augmented reality to learn mathematics, respondents most often chose increased student engagement and the use of real-world examples to understand concepts (both 65.4%), followed by easy access to learning material (61.5%) and increased knowledge retention (50%).
This shows that there are many issues in the education system that can be addressed by software applications; since the newer generation is so enthusiastic about electronic devices, these can be used to deliver knowledge and skills to upcoming students and to mitigate most of the challenges currently faced. The study concludes that implementing the system is good practice for the educational sector, especially as it leverages a new technology that can attract the attention of many young students and use it to deliver information. It will also raise awareness of the technology and the multiple ways it can be implemented. Addressing the educational sector in developing countries through information technology is an imperative task: the children studying now are the future of their countries, and what they learn and understand during childhood will help them make decisions about their lives that affect not only themselves but society as a whole.

Keywords: AR, mathematics, system development, augmented reality

Procedia PDF Downloads 78
625 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments face challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation from fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained by fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels in unlabeled transmitted-light microscopy images of cells, was trained on this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired in its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input, and the predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell-membrane z-stack (20 images) and the corresponding nuclei label, the algorithm showed qualitatively good predictions on the training set, accurately predicting nuclei locations and shapes when fed only fluorescence membrane images.
Similar training sessions with higher-quality membrane images, in which the outline and shape of the membrane clearly showed the boundaries of each cell, proportionally improved the nuclei predictions and reduced errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology from relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict additional labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automated machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
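The normalization step described above, scaling every image in a z-stack to a mean pixel intensity of 0.5, can be sketched as follows. The array shapes, the scaling rule, and the error metric are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def normalize_stack(stack):
    """Scale each image in a z-stack so its mean pixel intensity is 0.5.

    stack: array of shape (n_slices, height, width) with non-negative values.
    """
    stack = stack.astype(np.float64)
    # Per-image mean; guard against division by zero for empty slices.
    means = stack.mean(axis=(1, 2), keepdims=True)
    means[means == 0] = 1.0
    return 0.5 * stack / means

def pixelwise_error(predicted, ground_truth):
    """Mean absolute pixel error between predicted and true nuclei images."""
    return float(np.abs(predicted - ground_truth).mean())

# Example: a synthetic 20-slice membrane stack, as in the study.
stack = np.random.default_rng(0).uniform(0, 255, size=(20, 64, 64))
norm = normalize_stack(stack)
print(norm.mean(axis=(1, 2)))  # each slice now averages 0.5
```

A per-image (rather than per-stack) normalization keeps every slice on the same intensity scale, which is what lets predictions be compared across z-stacks acquired under different imaging conditions.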

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 186
624 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of glass fiber reinforced polymer (GFRP) reinforcement in concrete structures and to explore its potential value for the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microcosmic and mechanical performance tests: 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were used for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete, and unidirectional tensile tests were conducted to determine the remaining tensile strength after corrosion. The experimental variables were four types of concrete (engineered cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two ordinary concretes with different compressive strengths) and three acceleration temperatures (20, 40, and 60°C). The experimental results demonstrate that HPC offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of accelerated immersion at 60°C, GFRP in ECC (pH = 11.5) retained 68.99% of its strength, versus 54.88% in PC1 (pH = 13.5). UHPC, on the other hand, enhances GFRP durability by reducing the porosity and increasing the compactness of the protective layer around the reinforcement.
Due to its fillers, UHPC typically exhibits lower porosity, higher density, and greater resistance to permeation than PC2 despite a similar pore-fluid pH, resulting in different durability for GFRP bars embedded in the two concretes: after 120 days of immersion at 60°C, the residual strengths were 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the tensile strength reduction in GFRP. Finally, long-term prediction models were used to calculate residual strength over time for reinforcement embedded in HPC under high-temperature, high-humidity conditions, indicating that approximately 75% of the initial strength is retained after 100 years of service.
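The abstract does not specify the long-term prediction model used; a common approach in the GFRP durability literature is an Arrhenius-type time-shift between accelerated and service temperatures. The sketch below illustrates that idea with the reported retention values; the activation energy is a typical literature placeholder, not the paper's value.

```python
import math

def retention(residual_strength, initial_strength):
    """Tensile strength retention as a percentage."""
    return 100.0 * residual_strength / initial_strength

# Reported 120-day retentions at 60 °C from the abstract (percent):
reported = {"ECC": 68.99, "PC1": 54.88, "UHPC": 66.32, "PC2": 60.89}

def arrhenius_shift(t_service_c, t_accel_c, ea_kj_mol=50.0):
    """Time-shift factor between an accelerated and a service temperature.

    Arrhenius assumption: degradation rate ~ exp(-Ea / (R*T)), so one day
    at the accelerated temperature equals `factor` days at the service
    temperature. Ea = 50 kJ/mol is an illustrative value only.
    """
    R = 8.314  # gas constant, J/(mol*K)
    t1 = t_service_c + 273.15
    t2 = t_accel_c + 273.15
    return math.exp((ea_kj_mol * 1000.0 / R) * (1.0 / t1 - 1.0 / t2))

factor = arrhenius_shift(20.0, 60.0)
print(f"1 day at 60 C ~ {factor:.1f} days at 20 C")
```

This shift factor is what allows a 120-day accelerated immersion to stand in for decades of service exposure when extrapolating residual strength to a 100-year horizon.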

Keywords: GFRP bars, HPC, degradation, durability, residual tensile strength

Procedia PDF Downloads 39
623 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them chemically and physically to create ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and existing processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process-modelling tools that include specific thermodynamic models, and it intends to develop computational methodologies such as molecular dynamics simulations to gain insight into the interactions in these complex media, especially hydrogen bonding. The issue at hand concerns aqueous mixtures of polyols with high dry-matter content. The polyols mannitol and sorbitol are diastereoisomers with nearly identical chemical structures but very different physicochemical properties: for example, at 25°C the solubility of sorbitol in water is 2.5 kg/kg of water, while that of mannitol is only 0.25 kg/kg of water. Predicting liquid-solid equilibrium properties in this case therefore requires sophisticated solution models that cannot be based solely on chemical group contributions, since mannitol and sorbitol share the same constitutive chemical groups. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase-equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution.
Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
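The hydrogen-bond autocorrelation analysis mentioned above can be sketched from a bond-existence matrix (1 if a given donor-acceptor pair is bonded in a frame, 0 otherwise). The matrix here is synthetic, standing in for real trajectory output, and the intermittent definition of the correlation function is one common convention among several.

```python
import numpy as np

def hbond_autocorrelation(h, max_lag):
    """Intermittent H-bond autocorrelation c(t) = <h(0)h(t)> / <h(0)h(0)>.

    h: binary array of shape (n_frames, n_pairs); h[t, i] == 1 when
    donor-acceptor pair i is hydrogen-bonded in frame t.
    Returns c(t) for lags 0..max_lag; slower decay = longer bond lifetime.
    """
    h = np.asarray(h, dtype=float)
    n_frames = h.shape[0]
    c0 = (h * h).mean()
    return np.array([(h[: n_frames - lag] * h[lag:]).mean() / c0
                     for lag in range(max_lag + 1)])

# Synthetic example: pairs that stay bonded for long stretches decay slowly.
rng = np.random.default_rng(1)
persistent = np.repeat(rng.integers(0, 2, size=(50, 10)), 4, axis=0)  # long-lived
flickering = rng.integers(0, 2, size=(200, 10))                       # short-lived
c_long = hbond_autocorrelation(persistent, 5)
c_short = hbond_autocorrelation(flickering, 5)
print(c_long[1], c_short[1])  # persistent bonds keep higher correlation
```

A slower decay of c(t) for sorbitol versus mannitol (or vice versa) is exactly the kind of contrast in hydrogen-bond lifetime the abstract reports.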

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 23
622 A Study of Smartphone Engagement Patterns of Millennial in India

Authors: Divyani Redhu, Manisha Rathaur

Abstract:

India has emerged as a very lucrative market for smartphones in a very short span of time. The number of smartphone users here is growing massively with each passing day, and the expansion of internet services to the far corners of the nation has also pushed the smartphone revolution forward. Millennials, also known as Generation Y or the Net Generation, are the generation born between the early 1980s and the mid-1990s (with some definitions extending to the early 2000s). Spanning roughly 15 years and many social classes, cultures, and continents, millennials cannot reasonably be imagined to have a unified identity. Still, it cannot be denied that the growing millennial population is not only young but also highly tech-savvy. It is not merely the appearance of the device that leads us to call it ‘smart’; rather, it is the numerous tasks and functions it can perform that have earned it the name ‘smartphone’. Beyond the usual tasks once performed by a simple mobile phone, such as making calls, sending messages, taking photographs, and recording videos, most of our day-to-day tasks are now handled by this all-time companion. From alarm clock to note-maker, from watch to radio, from book reader to reminder, the smartphone is present everywhere. It has become an essential device, particularly for millennials, for communicating not only with friends but also with family, colleagues, and teachers. The study would be quantitative in nature: a survey would be conducted in the capital of India, i.e., Delhi, and the National Capital Region (NCR), the metropolitan area covering the entire National Capital Territory of Delhi and adjacent urban areas in the states of Haryana, Uttarakhand, Uttar Pradesh, and Rajasthan.
The survey instrument would be a questionnaire, with 200 respondents. The results of the study would primarily address the increasing reach of smartphones in India, smartphones as technological innovations and convergent tools, the smartphone usage patterns of millennials in India, the applications they use most, the average time they spend on their devices, and the impact of smartphones on their personal interactions. Speaking of smartphone technology and millennials in India, it would not be wrong to say that both the growth and the potential of smartphones in India remain immense. Few technologies have made it possible to give users such global exposure, and the smartphone, if not the only such technology, is certainly an immensely effective one.

Keywords: Delhi – NCR, India, millennial, smartphone

Procedia PDF Downloads 125
621 Dividend Payout and Capital Structure: A Family Firm Perspective

Authors: Abhinav Kumar Rajverma, Arun Kumar Misra, Abhijeet Chandra

Abstract:

Family involvement in business is universal across countries, with varying characteristics. Firms in developed economies have diffused ownership structures, whereas those in emerging markets have concentrated ownership structures resembling those of family firms. Optimizing dividend payout and leverage is crucial for a firm's valuation. This paper studies the dividend-paying behavior of Indian firms listed on the National Stock Exchange from financial years 2007 to 2016. The final sample consists of 422 firms, of which more than 49% (207) are family firms. Results reveal that family firms pay lower dividends and are more leveraged than non-family firms. This unique data set helps in understanding the dividend behavior and capital structure of the sample firms over a long time period and across varying levels of family ownership concentration. Using panel regression models, this paper examines the factors affecting dividend payout and capital structure and establishes a link between the two using a two-stage least squares regression model. Profitability has a positive impact on dividends and a negative impact on leverage, confirming signaling and pecking-order theories. The findings also support bankruptcy theory: firm size relates positively to both dividends and leverage, while volatility relates negatively to both. Consistent with agency theory, family ownership concentration relates negatively to both dividend payments and leverage, and the impact of family ownership control confirms this finding. The study further reveals that high family ownership concentration (family control) affects the level of private benefits. Institutional ownership is not significant for dividend payments; however, it shows a significant negative relation with leverage for both family and non-family firms. Dividend payout and leverage show a mixed association with each other.
This paper provides evidence of how varying levels of family ownership concentration and ownership control influence the dividend policy and capital structure of firms in an emerging market like India. It can contribute significantly to understanding and formulating corporate dividend policy and capital structure decisions for emerging economies, where the majority of firms exhibit family-firm behavior.
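The two-stage least squares link between payout and leverage can be sketched as two ordinary least squares passes: first fit the endogenous regressor on the instruments, then regress the outcome on the fitted values. The variable names and synthetic data below are purely illustrative, not the paper's dataset or specification.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def two_stage_ls(y, endog, exog, instruments):
    """2SLS: replace the endogenous regressor with its first-stage fit.

    y: outcome (e.g. dividend payout); endog: endogenous regressor
    (e.g. leverage); exog: exogenous controls (e.g. profitability);
    instruments: variables correlated with endog but not with the error.
    """
    const = np.ones((len(y), 1))
    # Stage 1: regress leverage on instruments plus controls.
    Z = np.hstack([const, instruments, exog])
    endog_hat = Z @ ols(Z, endog)
    # Stage 2: regress payout on fitted leverage plus controls.
    X2 = np.hstack([const, endog_hat.reshape(-1, 1), exog])
    return ols(X2, y)

# Synthetic firm-level data with a known leverage effect of -0.5 on payout.
rng = np.random.default_rng(7)
n = 2000
z = rng.normal(size=(n, 1))            # instrument
profit = rng.normal(size=(n, 1))       # exogenous control
u = rng.normal(size=n)                 # common shock -> endogeneity
leverage = 2 * z[:, 0] + u + rng.normal(size=n)
payout = 1.0 - 0.5 * leverage + 0.8 * profit[:, 0] + u + rng.normal(size=n)
beta = two_stage_ls(payout, leverage, profit, z)
print(beta)  # [const, leverage effect near -0.5, profitability effect near 0.8]
```

A plain OLS of payout on leverage would be biased here because the common shock `u` drives both; the instrument restores a consistent estimate, which is the point of the 2SLS step in the paper.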

Keywords: dividend, family firms, leverage, ownership structure

Procedia PDF Downloads 263
620 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of these large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is now widely used to predict these properties before the experiment. The temperature gradient inside the specimen remains challenging, however: the temperature inside the material before loading is not uniform, yet simulations typically impose a constant temperature on the assumption that the temperature has homogenized after some holding time. To stay close to the experiment, the real distribution of temperature through the specimen is needed before the mechanical loading. We therefore present a robust algorithm that computes the temperature gradient within the specimen, representing the real temperature distribution before deformation. Indeed, most numerical simulations assume a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. These are the stages where the greatest deformations are observed and where many microstructural phenomena, such as recrystallization, occur and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product.
Thus, identifying the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique to predict its onset remains challenging. In this perspective, we propose, in addition to the algorithm that computes the temperature distribution before the loading stage, an analytical model for determining the initiation of recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines and compared with a simulation in which an isothermal temperature is imposed. An artificial neural network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. The simulations properly predict the temperature distribution inside the material and the initiation of recrystallization, in agreement with models from the literature.
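A minimal version of the temperature-gradient idea is a one-dimensional transient conduction solve across the specimen thickness, producing a non-uniform initial field instead of an assumed homogeneous one. The material constants, geometry, and boundary temperatures below are placeholders; the authors' Abaqus UAMP implementation is of course more elaborate.

```python
import numpy as np

def temperature_profile(t_surface, t_core, n_nodes=21, alpha=1.2e-5,
                        thickness=0.1, hold_time=60.0):
    """Explicit finite-difference solve of 1D transient heat conduction.

    Starts from a core at t_core with surfaces held at t_surface and
    returns the through-thickness profile after hold_time seconds.
    alpha: thermal diffusivity (m^2/s), a placeholder value for steel.
    """
    dx = thickness / (n_nodes - 1)
    dt = 0.4 * dx * dx / alpha          # stable step (Fourier number 0.4 < 0.5)
    steps = max(1, int(hold_time / dt))
    T = np.full(n_nodes, float(t_core))
    T[0] = T[-1] = t_surface            # fixed surface temperature
    for _ in range(steps):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

# Surface at 900 C, core initially at 1200 C: the kind of non-uniform
# pre-loading field the algorithm supplies to the forging simulation.
profile = temperature_profile(900.0, 1200.0)
print(profile.min(), profile.max())  # bounded by surface and core temperatures
```

Because the Fourier number is kept below 0.5, each update is a convex combination of neighboring nodes, so the computed field stays physically bounded between the surface and core temperatures.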

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 67
619 Computational Investigation on Structural and Functional Impact of Oncogenes and Tumor Suppressor Genes on Cancer

Authors: Abdoulie K. Ceesay

Abstract:

Human genomes are 99.9% identical; our differences lie in just 0.1% of the sequence. Among these minor dissimilarities, the most common type of genetic variation in a population is the single nucleotide polymorphism (SNP), a nucleotide substitution that can lead to protein destabilization, altered dynamics, and other distortions of physicochemical properties. These variations are also responsible for differences in how we respond to a treatment or a disease, including various cancer types. There are two types of SNPs: synonymous single nucleotide polymorphisms (sSNPs) and non-synonymous single nucleotide polymorphisms (nsSNPs). sSNPs occur in the gene coding region without changing the encoded amino acid, while nsSNPs can be deleterious because the nucleotide replacement changes the encoded amino acid. Predicting the effects of cancer-related nsSNPs on protein stability, function, and dynamics is important because of the significance of the phenotype-genotype association in cancer. In this study, data on 5 oncogenes (ONGs) (AKT1, ALK, ERBB2, KRAS, BRAF) and 5 tumor suppressor genes (TSGs) (ESR1, CASP8, TET2, PALB2, PTEN) were retrieved from ClinVar. Five common in silico tools (PolyPhen, PROVEAN, Mutation Assessor, SuSPect, and FATHMM) were used to predict and categorize nsSNPs as deleterious, benign, or neutral. To understand the impact of each variation on the phenotype, the in silico structural prediction tools Maestro, PremPS, Cupsat, and mCSM-NA were used. The study comprises an in-depth analysis of variants of these 10 cancer genes, conducted to derive meaningful conclusions from the data. The results indicate that pathogenic and destabilizing variants are more common among ONGs than TSGs.
Moreover, our data indicate that ALK (409) and BRAF (86) have the highest benign counts among the ONGs, while PALB2 (1308) and PTEN (318) have the highest benign counts among the TSGs. Looking at the individual genes' predisposition to cancer in our data, KRAS (76%), BRAF (55%), and ERBB2 (36%) among the ONGs, and PTEN (29%) and ESR1 (17%) among the TSGs, show the highest tendencies of causing cancer. These results can shed light on future research and help pave new frontiers in cancer therapies.
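Combining several predictors into a single per-variant call, as done here with five tools, can be sketched as a simple majority vote. The per-tool predictions below are invented placeholders, not output from PolyPhen, PROVEAN, or FATHMM themselves, and majority voting is only one of several aggregation strategies.

```python
from collections import Counter

def consensus_call(tool_calls):
    """Majority vote over per-tool labels ('deleterious'/'benign'/'neutral')."""
    counts = Counter(tool_calls)
    label, _ = counts.most_common(1)[0]
    return label

def pathogenic_fraction(variants):
    """Share of variants whose consensus label is 'deleterious'."""
    calls = [consensus_call(v) for v in variants]
    return sum(c == "deleterious" for c in calls) / len(calls)

# Hypothetical per-variant predictions from five tools for one gene.
kras_variants = [
    ["deleterious", "deleterious", "benign", "deleterious", "deleterious"],
    ["deleterious", "deleterious", "deleterious", "neutral", "deleterious"],
    ["benign", "benign", "deleterious", "benign", "neutral"],
    ["deleterious", "benign", "deleterious", "deleterious", "benign"],
]
print(pathogenic_fraction(kras_variants))  # 3 of 4 variants -> 0.75
```

Per-gene fractions computed this way are the kind of figure behind statements such as "KRAS (76%) ... among ONGs".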

Keywords: tumor suppressor genes (TSGs), oncogenes (ONGs), non-synonymous single nucleotide polymorphism (nsSNP), single nucleotide polymorphism (SNP)

Procedia PDF Downloads 75
618 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship

Authors: Paloma Mendes Saldanha

Abstract:

Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. 
As a result, the article expects to identify violations of constitutionally protected rights such as privacy, data protection, and transparency, and to argue for algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, grounded in a threefold foundation of trust: culture, rules, and systems. To do so, the author uses a bibliographic review in the legal and information technology fields, as well as an analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution and international guidelines and resolutions that address the topic in a specific and necessary manner, toward appropriate regulation based on a sustainable trust process for a hyperconnected world.

Keywords: artificial intelligence, ethics, citizenship, trust

Procedia PDF Downloads 47