Search results for: prediction primary user
1657 Impact of Green Bonds Issuance on Stock Prices: An Event Study on Respective Indian Companies
Authors: S. L. Tulasi Devi, Shivam Azad
Abstract:
The primary objective of this study is to analyze the impact of green bond issuance on the stock prices of the respective Indian companies. An event study methodology was employed to study the effect of green bond issuance. For in-depth analysis, the paper used different window frames, including 15-15 days, 10-10 days, 7-7 days, 6-6 days, and 5-5 days; for further clarity, an uneven window of 7-5 days was also used. The study covered all the companies that issued green bonds during 2017-2022: Adani Green Energy, State Bank of India, Power Finance Corporation, Jain Irrigation, and Rural Electrification Corporation, excluding Indian Renewable Energy Development Agency and Indian Railway Finance Corporation because of data unavailability. The paper used all three event study methods discussed in earlier literature: 1) the constant return model, 2) the market-adjusted model, and 3) the capital asset pricing model (CAPM). For a meaningful comparison of results, the study considered the cumulative abnormal return (CAR) and buy-and-hold abnormal return (BHAR) methodologies. Statistical significance was checked with a two-tailed t-statistic, and all calculations were performed in Microsoft Excel 2016. The study found that all companies showed positive returns on the event day except the State Bank of India. The results demonstrated that the constant return model outperformed the market-adjusted model and the CAPM. The p-values derived from all the methods indicated an almost insignificant impact of green bond issuance on the stock prices of the respective companies. The overall analysis suggests that there has not been much improvement in the market efficiency of the Indian stock markets.
Keywords: green bonds, event study methodology, constant return model, market-adjusted model, CAPM
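For readers who want to reproduce this kind of analysis outside a spreadsheet, a minimal sketch of the constant return model with a CAR and a two-tailed t-test is given below. It uses Python with synthetic returns; the estimation-window length, event-window width, and all numbers are assumptions for illustration and are not taken from the study's Excel workings.

```python
import numpy as np
from scipy import stats

def constant_return_car(stock_returns, event_index, est_window=120, event_half_width=5):
    """Cumulative abnormal return (CAR) around an event under the constant-return model.

    stock_returns : 1-D array of daily returns for one company
    event_index   : position of the green-bond issuance day in the array
    """
    # Estimation window: daily returns immediately before the event window
    est = stock_returns[event_index - event_half_width - est_window:
                        event_index - event_half_width]
    expected = est.mean()                      # constant-return benchmark

    # Symmetric event window, e.g. -5 to +5 trading days
    window = stock_returns[event_index - event_half_width:
                           event_index + event_half_width + 1]
    abnormal = window - expected               # abnormal returns
    car = abnormal.sum()                       # cumulative abnormal return

    # Two-tailed t-test of the abnormal returns against zero
    res = stats.ttest_1samp(abnormal, 0.0)
    return car, res.statistic, res.pvalue

# Example with synthetic returns (placeholder data, not the study's prices)
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, 400)
car, t, p = constant_return_car(returns, event_index=300, event_half_width=5)
print(f"CAR = {car:.4f}, t = {t:.2f}, p = {p:.3f}")
```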
Procedia PDF Downloads 95
1656 Beliefs about the Use of Extemporaneous Compounding for Paediatric Outpatients among Physicians in Yogyakarta, Indonesia
Authors: Chairun Wiedyaningsih, Sri Suryawati, Yati Soenarto, Muhammad Hakimi
Abstract:
Background: Many drugs used in paediatrics are not commercially available in suitable dosage forms. Therefore, the drugs are often prescribed in extemporaneously compounded dosage forms. Compounding can pose health risks, including poor-quality and unsafe products. Studies of compounded dosage forms have primarily focused on prescription profiles, while the reasons for prescribing have rarely been explored. Objectives: The study was conducted to identify factors influencing physicians' decisions to prescribe extemporaneously compounded dosage forms for paediatric outpatients. Setting: Daerah Istimewa Yogyakarta (DIY) province, Indonesia. Method: Qualitative semi-structured interviews were conducted with 15 general physicians and 7 paediatricians to identify the reasons for prescribing extemporaneously compounded dosage forms. The interviews were transcribed and analysed using thematic analysis. Results: The factors underlying the prescribing of compounding could be categorized into therapy, healthcare system, patient, and past experience. The primary therapy-related reasons were the limited availability of drug compositions, dosages, or formulas specific to children. Belief in the efficacy of compounded forms was higher when the drugs were used primarily to manage complex cases. Physicians were not concerned about compounded forms containing several active substances, because manufactured syrups may also contain several active substances. Even when medicines were available as manufactured syrups, a limited institutional budget was a healthcare-system factor behind compounding prescribing. The prescribing factors related to patients included ease of use, efficiency, and lower price. The prescribing factor related to past experience was physicians' belief in the progress of the patient's health status. Conclusions: Compounding was prescribed based on therapy-related factors, healthcare-system factors, patient factors, and past experience.
Keywords: compounding dosage form, interview, physician, prescription
Procedia PDF Downloads 426
1655 The Impact of Information and Communications Technology (ICT)-Enabled Service Adaptation on Quality of Life: Insights from Taiwan
Authors: Chiahsu Yang, Peiling Wu, Ted Ho
Abstract:
From emphasizing economic development to stressing public happiness, the international community mainly hopes to understand whether the quality of life of the public is improving. The Better Life Index (BLI) constructed by the OECD uses living conditions and quality of life as starting points to cover 11 areas of life and to convey the state of the general public's well-being. In light of the BLI framework, the Directorate General of Budget, Accounting and Statistics (DGBAS) of the Executive Yuan instituted the Gross National Happiness Index to understand the needs of the general public and to measure the progress of the aforementioned conditions in residents across the island. Living conditions consist of income and wealth, jobs and earnings, and housing conditions, while quality of life covers health status, work-life balance, education and skills, social connections, civic engagement and governance, environmental quality, and personal security. The ICT area consists of health care, living environment, ICT-enabled communication, transportation, government, education, pleasure, purchasing, and job and employment. In the wake of further science and technology development, the rapid formation of information societies, and closer integration between lifestyles and information societies, the public's well-being within information societies has become a noteworthy topic. The Board of Science and Technology of the Executive Yuan used the OECD's BLI as a reference in establishing the Taiwan-specific ICT-Enabled Better Life Index. Using this index, the government plans to examine whether the public's quality of life is improving as well as measure the public's satisfaction with current digital quality of life. This understanding will enable the government to gauge the degree of influence and impact that each dimension of digital services has on digital life happiness, while also serving as an important reference for promoting digital service development. Information and communications technology (ICT) has been affecting people's lifestyles and, further, impacts people's quality of life (QoL). Even though studies have shown that ICT access and usage have both positive and negative impacts on life satisfaction and well-being, many governments continue to invest in e-government programs to initiate their path to the information society. This research is one of the few attempts to link e-government benchmarks to subjective well-being perception, to address the gap between users' perceptions and existing hard-data assessments, and to propose a model that traces measurement results back to the original public policy so that policy makers can justify their future proposals.
Keywords: information and communications technology, quality of life, satisfaction, well-being
Procedia PDF Downloads 354
1654 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage, and sale of food products, as well as without management of end-to-end traceability and exchange of information. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability, and a systems approach, with the creation of an algorithm for the identification and monitoring of parameters of the technological process of manufacturing cooked sausage products. A methodology for implementing the integrated system based on the principles of HACCP, traceability, and the systems approach during the manufacture of cooked sausage products, for the effective provision of the defined properties of the finished product, has been developed. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the HACCP-based quality management and safety assurance system have been developed and substantiated. The paper reveals regularities of the influence of applying HACCP principles, traceability, and the systems approach on the quality and safety parameters of the finished product, and identifies regularities in the identification of critical control points. The algorithm of functioning of the integrated quality management and safety assurance system is also described, and key requirements are defined for software that allows the prediction of finished-product properties, timely correction of the technological process, and traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system based on HACCP principles, with elements of end-to-end traceability and the systems approach, has been developed for the manufacture of cooked sausage products. Quantitative criteria for evaluating the performance of the quality management and safety assurance system have been developed, along with a set of guidance documents for the implementation and evaluation of the integrated HACCP-based system in meat processing plants. The research demonstrated the effectiveness of continuous monitoring of the manufacturing process at the identified critical control points and substantiated the optimal number of critical control points for the manufacture of cooked sausage products. The main results of the research were appraised during 2013-2014 at seven enterprises of the meat processing industry and have been implemented at JSC «Kyiv meat processing plant».
Keywords: cooked sausage products, HACCP, quality management, safety assurance
Procedia PDF Downloads 246
1653 Psychological Stressors Caused by Urban Expansion in Algeria
Authors: Laid Fekih
Abstract:
Background: The purpose of this paper is to examine the psychological stressors caused by urbanization, through a field study conducted on a sample of young people living in urban areas. Some of them reside in areas with green surroundings, while others reside in areas lacking green space that have seen rapid urban expansion. The study examined the impact of urbanization on the mental health of young people, identified the psychological problems most commonly caused by urbanization, and assessed the role of green spaces in alleviating stress. Method: The method used in this research is descriptive; data collected from a sample of 160 young men were analyzed. The tool used was a psychological distress test, and the statistical techniques applied included percentages, analysis of variance, and t-tests. Results: The findings of this research were: (i) The psychological stressors caused by urban expansion mainly concern the intensity of stress, incompetence, and emotional and psychosomatic problems. (ii) There was a statistically significant difference at the 0.02 level of significance in psychological stressors between young people living in areas with green spaces and those living in areas without them, with higher stressor scores among young people living in places free of greenery. (iii) The effect of the housing variable (rental or ownership) was also statistically significant, with higher stressor scores among young people living in rented accommodation. Conclusion: The green spaces provided by the city of Tlemcen are inadequate and insufficient to fulfill the population's need for contact with nature, leading to effects that may negatively affect mental health, which makes this a prominent issue that should not be neglected. Incorporating green spaces into the design of buildings, homes, and communities to create shared spaces that facilitate interaction and foster well-being becomes the main purpose. We think this approach can support the reconstruction of the built environment with green spaces by linking studies of psychological stress perception with design technologies.
Keywords: psychological stressors, urbanization, psychological problems, green spaces
Procedia PDF Downloads 82
1652 Correlates of Income Generation of Small-Scale Fish Processors in Abeokuta Metropolis, Ogun State, Nigeria
Authors: Ayodeji Motunrayo Omoare
Abstract:
Economically, fish provides an important source of food and income for both men and women, especially for many households in the developing world, and fishing holds an important social and cultural position in riverine communities. However, fish is highly susceptible to deterioration. Consequently, this study was carried out to examine the correlates of income generation among small-scale women fish processors in Abeokuta metropolis, Ogun State, Nigeria. Eighty small-scale women fish processors were randomly selected from five communities as the sample for this study. The collected data were analyzed using both descriptive and inferential statistics. The results showed that the mean age of the respondents was 31.75 years, with an average household size of 4 people, while 47.5% of the respondents had primary education. Most (86.3%) of the respondents were married and had spent more than 11 years in fish processing. The respondents were predominantly of the Yoruba tribe (91.2%). The majority (71.3%) of the respondents used a traditional kiln for processing their fish, while 23.7% used hot vegetable oil to fry their fish. The respondents sourced capital for fish processing activities from personal savings (48.8%), cooperatives (27.5%), friends and family (17.5%), and microfinance banks (6.2%). The respondents generated an average daily income of ₦7,000.00 from roasted fish, ₦3,500.00 from dried fish, and ₦5,200.00 from fried fish. However, inadequate processing equipment (95.0%), non-availability of credit facilities from microfinance banks (85.0%), poor electricity supply (77.5%), inadequate extension service support (70.0%), and fuel scarcity (68.7%) were major constraints to fish processing in the study area. Results of the chi-square analysis showed a significant relationship between personal characteristics (χ2 = 36.83, df = 9), processing methods (χ2 = 15.88, df = 3) and income generated at the p < 0.05 level of significance. It can be concluded that a significant relationship existed between processing methods and income generated. The study therefore recommends that modern processing equipment be made available to the respondents at a subsidized price by agro-allied companies.
Keywords: correlates, income, fish processors, women, small-scale
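The chi-square test of association reported above can, in principle, be reproduced from a contingency table of processing method versus income category. The sketch below is a hedged illustration in Python; the table counts and category labels are invented for demonstration and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: processing method (rows) vs income category (columns).
# The counts are illustrative only; the study's raw data are not reproduced here.
table = np.array([
    [30, 18, 9],   # traditional kiln
    [ 8,  7, 4],   # hot-oil frying
    [ 2,  1, 1],   # other methods
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A p-value below 0.05 would indicate a significant association between
# processing method and income generated, as reported in the abstract.
```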
Procedia PDF Downloads 244
1651 Improving the Logistic System to Secure Effective Food Fish Supply Chain in Indonesia
Authors: Atikah Nurhayati, Asep A. Handaka
Abstract:
Indonesia is one of the world's major fish producers, able to feed not only its own citizens but also people around the world. Currently, the total annual production is 11 tons and is expected to double by 2050. Given this potential, fishery has been an important part of the national food security system in Indonesia. Despite such potential, a big challenge facing Indonesians is making fish a reliable source of food, more specifically a source of protein intake. The long geographic distance between fish production centers and consumer concentrations has prevented an effective supply chain from producers to consumers and therefore demands a good logistic system. This paper is based on our research, which aimed at analyzing the fish supply chain and suggesting relevant improvements to it. The research was conducted in 2016 in selected locations on Java Island, where intensive transactions in fishery commodities occur. The data used in this research comprise secondary time-series reports on production and distribution, and primary data on distribution aspects collected through interviews with 100 purposively selected respondents representing fishers, traders, and processors. The data were analyzed following the supply chain management framework and processed using logistic regression and validity tests. The main findings of the research are as follows. Firstly, improperly managed connectivity and logistic chains are the main causes of insecure availability and affordability for consumers. Secondly, the poor quality of most locally processed products is a major obstacle to improving affordability and connectivity. The paper concludes with a number of recommended strategies to tackle the problem, including rationalization of the length of the existing supply chain, intensification of processing activities, and improvement of distribution infrastructure and facilities.
Keywords: fishery, food security, logistic, supply chain
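As a rough illustration of the logistic-regression step mentioned in the analysis, a minimal Python sketch is shown below. The outcome and predictor variables are assumptions chosen to mirror the kind of distribution-related factors discussed in the abstract, not the survey's actual variables.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: 1 = respondent reports reliable fish availability, 0 = not.
# Predictors (distance to market, processed-product quality score) are assumptions
# standing in for the distribution-related variables described in the abstract.
rng = np.random.default_rng(1)
n = 100
distance_km = rng.uniform(5, 500, n)
quality_score = rng.uniform(1, 5, n)
logit_true = 1.5 - 0.004 * distance_km + 0.6 * quality_score
reliable = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# Fit a binary logit model with an intercept
X = sm.add_constant(np.column_stack([distance_km, quality_score]))
model = sm.Logit(reliable, X).fit(disp=0)
print(model.summary())   # coefficients, z-statistics and p-values
```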
Procedia PDF Downloads 239
1650 Systematic Review of Current Best Practice in the Diagnosis and Treatment of Obsessive Compulsive Disorder
Authors: Zahra R. Almansoor
Abstract:
Background: Selective serotonin reuptake inhibitors (SSRIs) and cognitive behavioural therapy (CBT) are the main treatment methods used for patients with obsessive compulsive disorder (OCD) under the National Institute for Health and Care Excellence (NICE) guidelines. Yet many patients are left with residual symptoms or relapse, so several other therapeutic approaches have been explored. Objective: The objective was to systematically review the available literature regarding the treatment efficacy of current and potential approaches and diagnostic strategies. Method: First, studies were examined concerning diagnosis, prognosis, and influencing factors. Then, one reviewer conducted a systematic search of six databases using stringent search terms. Results of studies exploring the efficacy of treatment interventions were analysed and compared separately for adults and children. This review was limited to randomised controlled trials (RCTs) conducted from 2016 onwards, and an improved Y-BOCS (Yale-Brown Obsessive Compulsive Scale) score was the primary outcome measure. Results: Technology-based interventions, including internet-based cognitive behavioural therapy (iCBT), were deemed potentially effective. Discrepancy remains about the benefits of SSRI use past one year, but potential medication adjuncts include amantadine. Treatments such as association splitting and family and mindfulness strategies also have future potential. Conclusion: A range of potential therapies exists, either as adjuncts to current interventions or as sole therapies. To further improve efficacy, it may be necessary to remodel the current NICE stepped-care model, especially regarding the potential use of lower-intensity, cheaper treatments, including iCBT. Although many interventions show promise, further research is warranted to confirm this.
Keywords: family and group treatment, mindfulness strategies, novel treatment approaches, standard treatment, technology-based interventions
Procedia PDF Downloads 118
1649 Comparison of Donor Motivations in National Collegiate Athletic Association Division I vs Division II
Authors: Soojin Kim, Yongjae Kim
Abstract:
The continuous economic downturn and ongoing budget cuts pose profound challenges for higher education, with a direct impact on collegiate athletic programs. In response to the ever-changing fiscal environment, universities seek to boost revenues by resorting to alternative sources of funding. In particular, athletic programs have become increasingly dependent on financial support from their alumni and boosters, which is how athletic departments attempt to offset budget shortfalls and make capital improvements. Although three major divisions currently exist within the National Collegiate Athletic Association (NCAA), the majority of sport management studies on college sport tend to focus on the Division I level. Within the donor motivation literature in particular, a plethora of studies exists, but mainly on NCAA Division I athletic programs. Since each athletic department functions differently along a number of dimensions, and institutional differences can also have a huge impact on athletic donor motivations, the current study attempts to fill this gap in the literature. As such, the purpose of this study was to (I) reexamine the factor structure of the athletic donor motivation scale, and (II) identify the prominent athletic donor motives in an NCAA Division II athletic program. For the purpose of this study, a total of 232 actual donors were used for analysis. A confirmatory factor analysis (CFA) was employed to test construct validity, and the reliability of the scale was assessed using composite reliability. To identify the prominent motivational factors, the means and standard deviations were examined. Results of this study indicated that Vicarious Achievement, Philanthropy, and Commitment are the three primary motivational factors, while Tangible Benefits, consistently found to be an important motive in prior studies, scored low. Such findings highlight a key difference and suggest that different salient motivations exist that are specific to the context.
Keywords: college athletics, donor, motivation, NCAA
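Composite reliability, used here to assess the scale, can be computed directly from the standardized factor loadings obtained in the CFA. The short sketch below illustrates the usual formula; the loadings are hypothetical and not taken from this study.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite (construct) reliability from standardized factor loadings.

    CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2)),
    where 1 - lambda^2 is the error variance of each standardized indicator.
    """
    lam = np.asarray(loadings, dtype=float)
    numerator = lam.sum() ** 2
    error_var = (1.0 - lam ** 2).sum()
    return numerator / (numerator + error_var)

# Hypothetical standardized loadings for a 'Vicarious Achievement' factor;
# the values are illustrative, not estimates from the study.
print(round(composite_reliability([0.78, 0.81, 0.74, 0.69]), 3))
```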
Procedia PDF Downloads 146
1648 Elevated Creatinine Clearance and Normal Glomerular Filtration Rate in Patients with Systemic Lupus erythematosus
Authors: Stoyanka Vladeva, Elena Kirilova, Nikola Kirilov
Abstract:
Background: Creatinine clearance is a widely used value for estimating the GFR. Increased creatinine clearance is often called hyperfiltration and is usually seen during pregnancy and in patients with diabetes mellitus preceding diabetic nephropathy. It may also occur with a large dietary protein intake or with plasma volume expansion. Renal injury in lupus nephritis is known to affect the glomerular, tubulointerstitial, and vascular compartments. However, high creatinine clearance has not been reported in patients with SLE. Target: Follow-up of creatinine clearance values in patients with systemic lupus erythematosus without a history of kidney injury. Material and methods: We observed the creatinine, creatinine clearance, GFR, and dipstick protein values of 7 women (mean age 42.71 years) with systemic lupus erythematosus. Patients with active lupus were tested monthly over a period of 13 months. Creatinine clearance was estimated with the Cockcroft-Gault equation in ml/sec, and GFR was estimated with the MDRD (Modification of Diet in Renal Disease) formula in ml/min/1.73 m2. Proteinuria was defined as present when dipstick protein > 1+. Results: In all patients without a history of kidney injury, we found elevated creatinine clearance levels, but GFR remained within the reference range. Two of the patients were in remission, while the other five had clinically and immunologically active lupus. Three of the patients had a permanent presence of elevated creatinine clearance levels and proteinuria, and two had periodically elevated creatinine clearance without proteinuria. These results show that kidney disturbances may be caused by the vascular changes typical of SLE. Glomerular hyperfiltration can be a result of focal segmental glomerulosclerosis caused by a reduction in renal mass. Lupus nephropathy is probably preceded not only by glomerular vascular changes but also by tubular vascular changes. Using only the GFR is not sufficient to detect these primary functional disturbances. Conclusion: For early detection of kidney injury in patients with SLE, we determined that follow-up of creatinine clearance values could be helpful.
Keywords: systemic Lupus erythematosus, kidney injury, elevated creatinine clearance level, normal glomerular filtration rate
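Both estimating equations named in the methods are standard and can be written down explicitly. The sketch below shows one common form of each; the patient values are hypothetical, and the exact variant used by the authors (e.g., the 186- versus 175-coefficient MDRD form) is an assumption here.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Creatinine clearance in mL/min (divide by 60 to report in mL/s as in the study)."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_gfr(age_years, serum_creatinine_mg_dl, female, black=False):
    """Estimated GFR in mL/min/1.73 m^2 (IDMS-traceable 4-variable MDRD equation)."""
    gfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Hypothetical patient values for illustration only
crcl = cockcroft_gault(age_years=43, weight_kg=65, serum_creatinine_mg_dl=0.7, female=True)
gfr = mdrd_gfr(age_years=43, serum_creatinine_mg_dl=0.7, female=True)
print(f"CrCl = {crcl:.0f} mL/min ({crcl/60:.2f} mL/s), eGFR = {gfr:.0f} mL/min/1.73 m2")
```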
Procedia PDF Downloads 269
1647 Experimental Investigation of Nucleate Pool Boiling Heat Transfer Characteristics on Copper Surface with Laser-Textured Stepped Microstructures
Authors: Luvindran Sugumaran, Mohd Nashrul Mohd Zubir, Kazi Md Salim Newaz, Tuan Zaharinie Tuan Zahari, Suazlan Mt Aznam, Aiman Mohd Halil
Abstract:
Due to the rapid advancement of integrated circuits and the increasing trend towards miniaturizing electronic devices, the amount of heat produced by electronic devices has consistently exceeded the maximum limit for heat dissipation. The two-phase cooling technique based on phase-change pool boiling heat transfer has therefore received a lot of attention because of its potential to fully utilize the latent heat of the fluid and produce a highly effective heat dissipation capacity while keeping the equipment's operating temperature within an acceptable range. Numerous strategies are available for the modification of heating surfaces, but finding the best, simplest, and most dependable one remains a challenge. Lately, surface texturing via laser ablation has been used in a variety of investigations, demonstrating significant potential for enhancing pool boiling heat transfer performance. In this research, the nucleate pool boiling heat transfer performance of laser-textured copper surfaces with different patterns was investigated, with a bare copper surface serving as a reference for comparison. It was observed that the heat transfer coefficient increased with increasing surface area ratio and with the peak-to-valley height ratio of the microstructure. The laser-machined grain structure produced extra nucleation sites, which ultimately caused the improved pool boiling performance. Due to the increase in nucleation site density and surface area, enhanced nucleate boiling served as the primary heat transfer mechanism. The pool boiling performance of the laser-textured copper surfaces was superior to that of the bare copper surface in all respects.
Keywords: heat transfer coefficient, laser texturing, micro structured surface, pool boiling
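The heat transfer coefficient referred to above is conventionally obtained from the imposed heat flux and the wall superheat. A minimal worked example is sketched below; the heat flux and temperatures are illustrative values, not measurements from this study.

```python
# Heat transfer coefficient from one point on a measured boiling curve:
# h = q'' / (T_wall - T_sat), where q'' is the imposed heat flux and
# (T_wall - T_sat) is the wall superheat. Values below are illustrative only.
heat_flux_w_m2 = 500e3          # 500 kW/m^2 imposed on the copper surface
t_wall_c = 112.0                # measured surface temperature, deg C
t_sat_c = 100.0                 # saturation temperature of water at 1 atm, deg C

superheat = t_wall_c - t_sat_c
h = heat_flux_w_m2 / superheat  # W/(m^2 K)
print(f"Wall superheat = {superheat:.1f} K, h = {h/1000:.1f} kW/(m^2 K)")
```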
Procedia PDF Downloads 87
1646 Model of Application of Blockchain Technology in Public Finances
Authors: M. Vlahovic
Abstract:
This paper presents a model of public finances that combines three concepts: participatory budgeting, crowdfunding, and blockchain technology. Participatory budgeting is defined as a process in which community members decide how to spend a part of the community's budget. Crowdfunding is the practice of funding a project by collecting small monetary contributions from a large number of people via an Internet platform. Blockchain technology is a distributed ledger that enables efficient and reliable transactions that are secure and transparent. In this hypothetical model, the government or authorities at the local/regional level would set up a platform where they propose public projects to citizens. Citizens would browse through the projects and support or vote for those they consider justified and necessary. In return, they would be entitled to tax relief in the amount of their monetary contribution. Since blockchain technology enables the tracking of transactions, it can be used to mitigate corruption, money laundering, and the lack of transparency in public finances. Models of its application have already been created for e-voting, health records, and land registries. By presenting a model of the application of blockchain technology in public finances, this paper considers the potential of blockchain technology to disrupt governments and make processes more democratic, secure, transparent, and efficient. The framework for this paper consists of multiple streams of research, including key concepts of direct democracy, public finance (especially the voluntary theory of public finance), and information and communication technology, especially blockchain technology and crowdfunding. The framework defines the rules of the game, the basic conditions for implementation of the model, its benefits, potential problems, and development perspectives. As an oversimplified map of a new form of public finances, the proposed model identifies the primary factors that influence the feasibility of implementation and that could be tracked, measured, and controlled in experiments with the model.
Keywords: blockchain technology, distributed ledger, participatory budgeting, crowdfunding, direct democracy, internet platform, e-government, public finance
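To make the distributed-ledger element of the model concrete, a very small append-only chain of citizen contributions can be sketched as follows. This is an illustrative data structure only; the class and field names (Ledger, add_contribution, tax_relief) are assumptions, and a real deployment would involve consensus, identity, and privacy layers not shown here.

```python
import hashlib
import json
import time

def sha256(payload: dict) -> str:
    """Deterministic hash of a JSON-serialisable record."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Append-only chain of citizen contributions to public projects."""
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis",
                       "timestamp": 0.0}]

    def add_contribution(self, citizen_id: str, project_id: str, amount: float):
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1,
                 "prev_hash": sha256(prev),          # links the new record to the previous one
                 "data": {"citizen": citizen_id, "project": project_id,
                          "amount": amount, "tax_relief": amount},
                 "timestamp": time.time()}
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        # Every record must still hash to the value stored in its successor
        return all(self.chain[i]["prev_hash"] == sha256(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add_contribution("citizen-042", "bike-lane-renewal", 50.0)
print(ledger.is_valid())   # True unless an earlier record is tampered with
```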
Procedia PDF Downloads 148
1645 Explaining Irregularity in Music by Entropy and Information Content
Authors: Lorena Mihelac, Janez Povh
Abstract:
In 2017, we conducted a research study using data consisting of 160 musical excerpts from different musical styles to analyze the impact of the entropy of the harmony on the acceptability of music. In measuring the entropy of harmony, we were interested in unigrams (individual chords in the harmonic progression) and bigrams (the connection of two adjacent chords). In that study, 53 of the 160 musical excerpts were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was calculated as low. We explained this by particularities of the chord progression that affect the listener's feeling of complexity and acceptability. We evaluated the same data again with new participants in 2018 and with the same participants for a third time in 2019. These three evaluations showed that the same 53 musical excerpts, found to be difficult and complex in the study conducted in 2017, again elicited a high feeling of complexity. It was proposed that the content of these musical excerpts, defined as "irregular," does not meet the listener's expectancy or basic perceptual principles, creating a stronger feeling of difficulty and complexity. As the "irregularities" in these 53 musical excerpts seem to be perceived without the participants being aware of them, affecting pleasantness and the feeling of complexity, they were defined as "subliminal irregularities" and the 53 musical excerpts as "irregular." In our recent study (2019) of the same data, we proposed a new measure of the complexity of harmony, "regularity," based on the irregularities in the harmonic progression and other plausible particularities of the musical structure found in previous studies. We also proposed a list of 10 particularities that we assumed affect the participants' perception of complexity in harmony. These ten particularities are tested in this paper by extending the analysis of the 53 irregular musical excerpts from harmony to melody. In examining melody, we used the computational model "Information Dynamics of Music" (IDyOM) and two information-theoretic measures: entropy - the uncertainty of the prediction before the next event is heard - and information content - the unexpectedness of an event in a sequence. To describe the features of melody in these musical examples, we used four different viewpoints: pitch, interval, duration, and scale degree. The results showed that the texture of the melody (e.g., multiple voices, homorhythmic structure) and the structure of the melody (e.g., huge interval leaps, syncopated rhythm, implied harmony in compound melodies) in these musical excerpts affect the participants' perception of complexity. High information content values were found in compound melodies in which implied harmonies seem to have suggested additional harmonies, affecting the participants' perception of the chord progression in harmony by creating a sense of an ambiguous musical structure.
Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM
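For clarity, the unigram and bigram entropy measures described above can be computed directly from a chord sequence. The sketch below uses a short, invented Roman-numeral progression; it illustrates the calculation only and is not one of the 160 excerpts analysed in the study.

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (bits per symbol) of a sequence of events."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical harmonic progression (Roman-numeral chords); not one of the study's excerpts.
progression = ["I", "IV", "V", "I", "vi", "IV", "V", "I"]

unigram_entropy = shannon_entropy(progression)
bigrams = list(zip(progression[:-1], progression[1:]))   # pairs of adjacent chords
bigram_entropy = shannon_entropy(bigrams)

# Information content of a single event e is -log2 P(e); low-probability chords are "unexpected".
print(f"unigram entropy = {unigram_entropy:.2f} bits, "
      f"bigram entropy = {bigram_entropy:.2f} bits")
```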
Procedia PDF Downloads 131
1644 Factors Affecting Visual Environment in Mine Lighting
Authors: N. Lakshmipathy, Ch. S. N. Murthy, M. Aruna
Abstract:
The design of lighting systems for surface mines is not an easy task because of the unique environment and work procedures encountered in mines. The primary objective of this paper is to identify the major problems encountered in mine lighting applications and to provide guidance for their solution. In surface mining, the reflectance of surrounding surfaces is one of the important factors that improve vision during night hours, but due to the typical nature of work in mines it is very difficult to fulfill this requirement, and the orientation of light at the work site is also a challenging task. For this reason, machine operators and other workers in a mine need to be able to orient themselves in a difficult visual environment. Haul roads keep changing in step with mining activity, and other critical areas such as dump yards and stack yards also change over time, making them difficult to illuminate. Mining is a hazardous occupation, with workers exposed to adverse conditions; apart from the need for hard physical labor, there is exposure to stress and environmental pollutants like dust, noise, heat, vibration, poor illumination, and radiation. Visibility is restricted when operating load-haul-dump and heavy earth moving machinery (HEMM) vehicles, resulting in a number of serious accidents. One of the leading causes of these accidents is the inability of the equipment operator to clearly see people, objects, or hazards around the machine. Results indicate that blind spots are caused primarily by posts, the back of the operator's cab, and lights and light brackets. Carefully designed and implemented lighting systems provide mine workers with improved visibility and contribute to improved safety, productivity, and morale. Properly designed lighting systems can improve visibility and safety while working in opencast mines.
Keywords: contrast, efficacy, illuminance, illumination, light, luminaire, luminance, reflectance, visibility
Procedia PDF Downloads 358
1643 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss
Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin
Abstract:
Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and the obstruction of links often lead to the loss of link capacity in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of link capacity loss on the reliability and transport service quality of a URT network, passengers are divided into three categories. Passengers in category 1 are less affected by the loss of link capacity; their travel is reliable since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected; their travel is not reliable since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT at all because the passenger flow on their travel paths exceeds the available capacity, so their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers' travel time, their number of transfers, and whether seats are available. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of link capacity loss on transport service quality is measured by passengers' relative average generalized travel cost with and without capacity loss. The proportion of passengers affected by a link and the betweenness of links are used to determine the important links in the URT network. A stochastic user equilibrium distribution model based on an improved logit model is used to determine passengers' categories and to calculate their generalized travel cost in case of link capacity loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the Wuhan Metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link can effectively identify important links, which have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and their passenger flow is high. With an increasing number of failed links and a growing proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases. When the number of failed links and the proportion of capacity loss increase to a certain level, the decline in transport service quality is weakened.
Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links
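As an illustration of the assignment machinery described above, a minimal logit path-choice split combined with the method of successive weighted averages can be sketched as follows. The dispersion parameter, cost functions, and two-path network are assumptions for demonstration and are much simpler than the improved logit SUE model actually applied to the Wuhan Metro case.

```python
import numpy as np

def logit_split(gen_costs, theta=0.1):
    """Multinomial-logit path choice probabilities from generalized travel costs."""
    utility = -theta * np.asarray(gen_costs, dtype=float)
    expu = np.exp(utility - utility.max())      # numerically stabilised softmax
    return expu / expu.sum()

def msa_assignment(demand, cost_fn, n_paths, iterations=50):
    """Method of successive (weighted) averages for a stochastic user equilibrium.

    cost_fn(flows) must return the generalized cost of each path given current flows.
    """
    flows = np.full(n_paths, demand / n_paths)          # start from an even split
    for k in range(1, iterations + 1):
        target = demand * logit_split(cost_fn(flows))   # auxiliary all-or-nothing-style flows
        flows += (target - flows) / k                   # step size 1/k
    return flows

# Toy example: two paths between one OD pair, costs rising with congestion (illustrative only)
cost_fn = lambda f: np.array([20 + 0.02 * f[0], 25 + 0.01 * f[1]])
print(msa_assignment(demand=1000, cost_fn=cost_fn, n_paths=2))
```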
Procedia PDF Downloads 126
1642 Photophysical Study of Pyrene Butyric Acid in Aqueous Ionic Liquid
Authors: Pratap K. Chhotaray, Jitendriya Swain, Ashok Mishra, Ramesh L. Gardas
Abstract:
Ionic liquids (ILs) are molten salts that consist predominantly of ions and are liquid below 100°C. The unparalleled and growing interest in ILs is based upon their nearly unlimited design flexibility. The use of ILs as co-solvents in binary as well as ternary mixtures with molecular solvents multiplies their utility. Since polarity is one of the most widely applied solvent concepts, providing a simple and straightforward means of characterizing and ranking solvent media, its study in binary mixtures of ILs is crucial for their widespread application and development. The primary approach to assessing solution-phase intermolecular interactions, which generally occur on the picosecond to nanosecond time scales, is to exploit the optical response of a photophysical probe. Pyrene butyric acid (PBA) is used as the fluorescence probe due to its high quantum yield, long lifetime, and the strong solvent-polarity dependence of its fluorescence spectra. Propylammonium formate (PAF) is the IL used for this study. Both the UV-absorbance spectra and the steady-state fluorescence intensity of PBA at different concentrations of aqueous PAF reveal that, with an increase in PAF concentration, both the absorbance and fluorescence intensity increase, indicating the progressive solubilisation of PBA. At around 50% IL concentration, all of the PBA molecules are solubilised, as no further changes in absorbance and fluorescence intensity are observed. Furthermore, the ratio II/IV, where band II corresponds to the transition from S1 (ν = 0) to S0 (ν = 0) and band IV corresponds to the transition from S1 (ν = 0) to S0 (ν = 2) of PBA, indicates that the addition of water to PAF increases the polarity of the medium. Time-domain lifetime studies show an increase in the lifetime of PBA towards higher concentrations of PAF, which can be attributed to the decrease in the non-radiative rate constant at higher PAF concentrations, where the viscosity is higher. The monoexponential decay suggests a homogeneous solvation environment, whereas the uneven full width at half maximum (FWHM) indicates that some heterogeneity may exist around the fluorophores even in the water-IL mixed solvents.
Keywords: fluorescence, ionic liquid, lifetime, polarity, pyrene butyric acid
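The monoexponential lifetime analysis mentioned above amounts to fitting I(t) = A·exp(-t/τ) + c to the measured decay. A hedged sketch of such a fit on a synthetic trace is shown below; the time axis, noise level, and the ~130 ns lifetime are placeholders, not the measured PBA values.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, baseline):
    """Single-exponential fluorescence decay: I(t) = A * exp(-t / tau) + c."""
    return amplitude * np.exp(-t / tau) + baseline

# Synthetic decay trace standing in for a time-domain measurement
rng = np.random.default_rng(2)
t_ns = np.linspace(0, 600, 300)                      # time axis in nanoseconds
true_tau = 130.0                                     # placeholder lifetime, not a measured value
signal = mono_exp(t_ns, 1.0, true_tau, 0.02) + rng.normal(0, 0.01, t_ns.size)

# Least-squares fit; p0 gives rough starting guesses for (A, tau, c)
popt, pcov = curve_fit(mono_exp, t_ns, signal, p0=(1.0, 100.0, 0.0))
amplitude, tau, baseline = popt
print(f"fitted lifetime tau = {tau:.1f} ns")
```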
Procedia PDF Downloads 457
1641 Agrarian Transitions and Rural Social Relations in Jharkhand, India
Authors: Avinash
Abstract:
Rural Jharkhand has attracted little attention in the field of agrarian studies in India, despite more than eighty percent of its rural population being directly dependent on agriculture as their primary source of livelihood. The limited studies on agrarian issues in Jharkhand have focused predominantly on the subsistence nature of agriculture and low crop productivity. There has also not been much research on agrarian social relations between 'tribe' and 'non-tribe' communities in the region. This paper is an attempt to understand changing agrarian social relations between tribal and non-tribal communities by relating them to the different kinds of agrarian transitions taking place in two districts of Jharkhand - Palamu and Khunti. In the Palamu region, agrarian relations are dominated by the presence and significant population size of Hindu high-caste landowners, whereas in the Khunti region, agrarian relations are characterized by the population size and dominance of tribes and lower-caste landowner-cum-cultivators. The agrarian relations between 'upper castes' and 'tribes' in these regions are primarily related to agricultural daily wage labour, while the agrarian social relations between Dalits and tribal people take the form of a 'communal system of labour exchange' and 'household-based labour'. In addition, the ethnographic study of the region depicts steady agrarian transitions (especially the shift from indigenous to 'High Yielding Variety' (HYV) paddy seeds and growing vegetable cultivation) in which 'Non-Governmental Organizations' (NGOs) and agricultural input manufacturers and suppliers play a critical role as intermediaries. While agricultural productivity remains low, both regions are witnessing slow but gradual agrarian transitions. Rural-urban linkages in the form of seasonal labour migration are creating capital and technical inflows that are transforming agricultural activities. This study describes and interprets the above changes through the lens of 'regional rurality'.
Keywords: agrarian transitions, rural Jharkhand, regional rurality, tribe and non-tribe
Procedia PDF Downloads 183
1640 The Efficacy of Pre-Hospital Packed Red Blood Cells in the Treatment of Severe Trauma: A Retrospective, Matched, Cohort Study
Authors: Ryan Adams
Abstract:
Introduction: Major trauma is the leading cause of death in 15-45 year olds and carries significant human, social, and economic costs. Resuscitation is a cornerstone of trauma management, especially in the pre-hospital environment, and packed red blood cells (pRBC) are being used increasingly with the advent of permissive hypotension. The evidence in this area is lacking, and further research is required to determine its efficacy. Aim: The aim of this retrospective, matched cohort study was to determine whether major trauma patients who received pre-hospital pRBC differ in their initial emergency department cardiovascular status when compared with injury-profile matched controls. Methods: The trauma databases of the Royal Brisbane and Women's Hospital, Royal Children's Hospital (Herston) and Queensland Ambulance Service were accessed, and data on major trauma patients (ISS > 12) who received pre-hospital pRBC from January 2011 to August 2014 were collected. Patients were then matched by injury profile against control patients who had not received pRBC. The primary outcome was cardiovascular status, defined by the shock index and the Revised Trauma Score (RTS). Results: Data for 25 patients who received pre-hospital pRBC were accessed and their injury profiles matched against suitable controls. On admission to the emergency department, a statistically significant difference in shock index was seen between groups (blood = 1.42 and control = 0.97, p-value = 0.0449). However, the same was not seen with the RTS (blood = 4.15 and control = 5.56, p-value = 0.291). Discussion: A worsening shock index and Revised Trauma Score were associated with pre-hospital administration of pRBC. However, due to the small sample size, limited matching protocol, and associated confounding factors, it is difficult to draw any solid conclusions. Further studies with larger patient numbers are required to enable adequate conclusions to be drawn on the efficacy of pre-hospital packed red blood cell transfusion.
Keywords: pre-hospital, packed red blood cells, severe trauma, emergency medicine
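Both outcome measures are simple functions of routine observations. The sketch below shows the shock index and one commonly cited weighting of the Revised Trauma Score; the coded values and vital signs are illustrative, and the exact RTS coding used in the study is an assumption here.

```python
def shock_index(heart_rate_bpm, systolic_bp_mmhg):
    """Shock index = heart rate / systolic blood pressure."""
    return heart_rate_bpm / systolic_bp_mmhg

def revised_trauma_score(gcs_code, sbp_code, rr_code):
    """Weighted Revised Trauma Score from coded (0-4) GCS, systolic BP and respiratory rate.

    Commonly cited weights: 0.9368 * GCS + 0.7326 * SBP + 0.2908 * RR.
    """
    return 0.9368 * gcs_code + 0.7326 * sbp_code + 0.2908 * rr_code

# Illustrative patient: HR 120, SBP 85 mmHg, coded values GCS = 3, SBP = 3, RR = 4
print(f"shock index = {shock_index(120, 85):.2f}")
print(f"RTS = {revised_trauma_score(3, 3, 4):.2f}")
```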
Procedia PDF Downloads 391
1639 Magnitude of Meconium Stained Amniotic Fluid and Associated Factors among Women Who Gave Birth in North Shoa Zone Hospital’s Amhara Region Ethiopia 2022
Authors: Mitiku Tefera
Abstract:
Background: Meconium-stained amniotic fluid is one of the primary causes of birth asphyxia. Each year, over five million neonatal deaths occur worldwide due to meconium-stained amniotic fluid, with 90% of these deaths attributed to birth asphyxia. In Ethiopia, meconium-stained amniotic fluid is under-investigated, particularly in the North Shoa Zone of the Amhara region. Objective: The aim of this study was to assess the magnitude of meconium-stained amniotic fluid and associated factors among women who gave birth in North Shoa Zone hospitals, Amhara Region, Ethiopia, in 2022. Methods: An institution-based cross-sectional study was conducted among 628 women who gave birth at North Shoa Zone hospitals, Amhara, Ethiopia, from 08 June to 08 August 2022. Two-stage cluster sampling was used to recruit study participants. The data were collected using a structured interviewer-administered questionnaire and chart review, entered into Epi-Data Version 4.6, and exported to SPSS Version 25. Logistic regression was employed, and a p-value < 0.05 was considered significant. Result: The magnitude of meconium-stained amniotic fluid was 30.3%. Women who presented with a normal hematocrit level were 83% less likely to develop meconium-stained amniotic fluid. A mid-upper arm circumference of less than 22.9 cm (AOR = 1.9; 95% CI: 1.18-3.20), obstructed labor (AOR = 3.6; 95% CI: 1.48-8.83), prolonged labor ≥ 15 hr (AOR = 7.5; 95% CI: 7.68-13.3), premature rupture of the membranes (AOR = 1.7; 95% CI: 3.22-7.40), fetal tachycardia (AOR = 6.2; 95% CI: 2.41-16.3), and bradycardia (AOR = 3.1; 95% CI: 1.93-5.28) were significantly associated with meconium-stained amniotic fluid. Conclusion: The magnitude of meconium-stained amniotic fluid was high. In this study, a MUAC value < 22.9 cm, obstructed and prolonged labor, PROM, bradycardia, and tachycardia were factors associated with meconium-stained amniotic fluid. A follow-up study and pooling of similar articles are recommended for better evidence, along with enhancing intrapartum services and strengthening early detection of meconium-stained amniotic fluid for the health of the mother and baby.
Keywords: women, meconium-stained amniotic fluid, magnitude, Ethiopia
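The adjusted odds ratios and confidence intervals reported above follow from the fitted logistic-regression coefficients via AOR = exp(β) and CI = exp(β ± 1.96·SE). The sketch below shows only this arithmetic; the coefficient and standard error are invented for illustration and are not the study's estimates.

```python
import math

def adjusted_odds_ratio(beta, se, z=1.96):
    """Adjusted odds ratio and 95% CI from a logistic-regression coefficient and its SE."""
    aor = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return aor, lower, upper

# Illustrative coefficient and standard error (assumptions, not the study's estimates)
aor, lo, hi = adjusted_odds_ratio(beta=2.01, se=0.28)
print(f"AOR = {aor:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```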
Procedia PDF Downloads 127
1638 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses
Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh
Abstract:
Real-time non-invasive brain-computer interfaces have a significant role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed. Particular attention is paid to the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed for designing an EEG-based real-time brain-computer interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer, and a set of external stimulators. Primary signal analysis and processing of EEG acquired in real time shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based, self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as artificial neural networks. The final system would result in an inexpensive, portable, and more intuitive brain-computer interface for real-time scenarios, able to control prosthetic devices by translating different brain states into operative control signals.
Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications
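Although the proposed pipeline is MATLAB-based (EEGLab/BCILab), the spectral-feature step it describes can be illustrated with a short sketch; Python is used here for consistency with the other examples. The 128 Hz sampling rate, band edges, and synthetic signal are assumptions, not recordings from the proposed system.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Mean power spectral density of one EEG channel in a frequency band (Welch estimate)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Synthetic 10 s single-channel trace; a 128 Hz sampling rate is assumed for the headset.
fs = 128
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz "alpha" + noise

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(eeg, fs, limits) for name, limits in bands.items()}
print(features)  # spectral feature vector of the kind that could feed an ANN classifier
```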
Procedia PDF Downloads 315
1637 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limitation on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict the morphology of cellular components for virtual-cell generation based on fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels in unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell-membrane z-stack (20 images) and the corresponding nuclei label, the algorithm showed qualitatively good predictions on the training set. It was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including a clear lining and shape of the membrane that clearly shows the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
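The normalization step described in the methods (scaling each image of a z-stack to a mean pixel intensity of 0.5) can be sketched in a few lines. The function name and the multiplicative rescaling used below are assumptions about how such a pipeline might do this; the stack itself is synthetic.

```python
import numpy as np

def normalize_zstack(zstack, target_mean=0.5):
    """Scale each slice of a confocal z-stack so its mean pixel intensity equals target_mean.

    zstack : array of shape (n_slices, height, width), any numeric dtype.
    """
    zstack = zstack.astype(np.float64)
    means = zstack.mean(axis=(1, 2), keepdims=True)   # per-slice mean intensity
    return zstack * (target_mean / means)

# Synthetic 20-slice stack standing in for a registered membrane channel
rng = np.random.default_rng(4)
stack = rng.integers(0, 4096, size=(20, 256, 256))     # 12-bit-like intensities
normalized = normalize_zstack(stack)
print(normalized.mean(axis=(1, 2))[:3])                # each slice now has mean ~0.5
```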
Procedia PDF Downloads 203
1636 Leaching of Metal Cations from Basic Oxygen Furnace (BOF) Steelmaking Slag Immersed in Water
Authors: Umashankar Morya, Somnath Basu
Abstract:
Metalloids like arsenic are often present as contaminants in industrial effluents. Removal of these contaminants is essential before the wastewater can be safely discharged into the environment; otherwise, the pollutants tend to percolate into aquifers over time and contaminate drinking water sources. Several adsorbents, including metal powders, carbon nanotubes, and zeolites, are being used for this purpose with varying degrees of success. However, most of these solutions are not only costly but also not always readily available, which restricts their use, especially among financially weaker communities. Slag generated globally from primary steelmaking operations exceeds 200 billion kg every year. Some of it is utilized for applications like road construction, filler in reinforced concrete, and railway track ballast, or is recycled into iron ore agglomeration processes. However, these uses usually involve low value addition, and a significant amount of the slag still ends up in landfills. At the same time, there is a strong possibility that the constituents of steelmaking slag may immobilize metalloid contaminants present in wastewater through a combination of adsorption and precipitation of insoluble products. Preliminary experiments have already indicated that exposure to basic oxygen steelmaking slag does reduce pollutant concentrations in wastewater. In addition, the slag is relatively inexpensive and available in large quantities in several countries across the world. Investigations into the mechanism of interactions at the water-solid interface have been in progress for some time. However, there are concerns about the possibility of metal ions leaching from the slag particles in concentrations greater than those existing in the water bodies where the “treated” wastewater would eventually be discharged. The effect of such leached ions on aquatic flora and fauna is as yet uncertain. This has prompted the present investigation, which focuses on the leaching of metal ions from steelmaking slag particles in contact with wastewater and the influence of these ions on the removal of contaminant species. Experiments were carried out to quantify the leaching behavior of different ionic species upon exposure of the slag particles to simulated wastewater, both with and without specific metalloid contaminants.
Keywords: slag, water, metalloid, heavy metal, wastewater
Procedia PDF Downloads 72
1635 “Post-Industrial” Journalism as a Creative Industry
Authors: Lynette Sheridan Burns, Benjamin J. Matthews
Abstract:
The context of post-industrial journalism is one in which the material circumstances of mechanical publication have been displaced by digital technologies, increasing the distance between the orthodoxy of the newsroom and the culture of journalistic writing. Content is, with growing frequency, created for delivery via the internet, publication on web-based ‘platforms’ and consumption on screen media. In this environment, the question is not ‘who is a journalist?’ but ‘what is journalism?’ today. The changes bring into sharp relief new distinctions between journalistic work and journalistic labor, providing a key insight into the current transition between the industrial journalism of the 20th century and the post-industrial journalism of the present. In the 20th century, the work of journalists and journalistic labor went hand in hand, as most journalists were employees of news organizations, whilst in the 21st century evidence of a decoupling of ‘acts of journalism’ (work) and journalistic employment (labor) is beginning to appear. This 'decoupling' of the work and labor that underpin journalism practice is far-reaching in its implications, not least for institutional structures. Under these conditions we are witnessing the emergence of expanded ‘entrepreneurial’ journalism, based on smaller, more independent and agile - if less stable - enterprise constructs that are a feature of creative industries. Entrepreneurial journalism is realized in a range of organizational forms, from social enterprise through profit-driven start-ups to hybrids of the two. In all instances, however, the primary motif of the organization is an ideological definition of journalism. An example is the Scoop Foundation for Public Interest Journalism in New Zealand, which owns and operates Scoop Publishing Limited, a not-for-profit company and social enterprise that publishes an independent news site claiming over 500,000 monthly users. Our paper demonstrates that this journalistic work meets the ideological definition of journalism, conducted within the creative industries using an innovative organizational structure that offers a new, viable post-industrial future for journalism.
Keywords: creative industries, digital communication, journalism, post industrial
Procedia PDF Downloads 280
1634 Pollution Associated with Combustion in Stoves Fueled by Firewood (Eucalyptus) and Pellets (Radiata Pine): Effect of UVA Irradiation
Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi
Abstract:
In several cities in Chile there is significant urban pollution, particularly in Santiago and in cities in the south, where biomass is used as fuel for heating and cooking in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a photochemical chamber (14 m³) was conditioned and set up, equipped with gas monitors (e.g., CO, NOx, O3) and PM monitors (e.g., DustTrak, DMPS, Harvard impactors). This volume could be exposed to UVA lamps, producing a spectrum similar to that generated by the sun. In this chamber, PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (Eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded that, to a first approximation, 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium and CO present characteristics of tracers. However, some of them show properties that interfere with this possibility. For example, levoglucosan is decomposed by radiation, while 9-nitroanthracene and 4-nitropyrene are both emitted and formed under radiation. In addition, 9-nitroanthracene has a vapor pressure that involves partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K+ is the compound that best meets the properties required of a tracer. The PM2.5 emission measured for the automatic pellet stove used in this thesis project was two orders of magnitude smaller than that registered for the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. However, the use of pellets is not without problems, since pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants), in particular 4-nitropyrene, a compound of high toxicity. Moreover, the primary and secondary particulate matter associated with pellet burning shows a shift toward smaller particle sizes, which leads to deeper penetration of the particles and their toxic components into the respiratory system.
Keywords: biomass burning, photochemical chamber, particulate matter, tracers
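A hedged sketch of how tracer stability under UVA can be assessed from chamber data: assuming hypothetical concentration-time measurements (not the study's data), a first-order decay constant is estimated from a log-linear fit. A species with a negligible decay constant, such as water-soluble K+, behaves as a conservative tracer, while a photolabile species such as levoglucosan does not.

```python
import numpy as np

# Hypothetical chamber measurements under UVA irradiation (minutes, µg/m³).
t_min = np.array([0, 30, 60, 90, 120, 180])
levoglucosan = np.array([12.0, 10.1, 8.6, 7.2, 6.1, 4.4])   # decays under UVA
potassium    = np.array([3.0, 3.0, 2.9, 3.0, 3.0, 2.9])      # roughly constant

def first_order_decay_constant(t, c):
    """Slope of ln(C) versus t gives -k for first-order photolytic decay."""
    k, _ = np.polyfit(t, np.log(c), 1)
    return -k

k_lev = first_order_decay_constant(t_min, levoglucosan)
k_k = first_order_decay_constant(t_min, potassium)
print(f"levoglucosan: k = {k_lev:.4f} 1/min (photolabile)")
print(f"K+:           k = {k_k:.4f} 1/min (conservative tracer)")
```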
Procedia PDF Downloads 192
1633 Using 3-Glycidoxypropyltrimethoxysilane Functionalized Silica Nanoparticles to Improve Flexural Properties of E-Glass/Epoxy Grid-Stiffened Composite Panels
Authors: Reza Eslami-Farsani, Hamed Khosravi, Saba Fayazzadeh
Abstract:
Lightweight and efficient structures aim to enhance the performance of components in various industries. Toward this end, composites are among the most widely used materials because of their durability, high strength and modulus, and low weight. One type of advanced composite is the grid-stiffened composite (GSC) structure, which has been extensively considered in the aerospace, automotive, and aircraft industries and is one of the top candidates for replacing some traditional components. Although there are a good number of published surveys on the design aspects and fabrication of GSC structures, to our knowledge little systematic work has been reported on modifying their constituent materials to improve their properties. Matrix modification using nanoparticles is an effective method to enhance the flexural properties of fibrous composites. In the present study, a silane coupling agent (3-glycidoxypropyltrimethoxysilane, 3-GPTS) was introduced onto the silica (SiO2) nanoparticle surface, and its effects on the three-point flexural response of isogrid E-glass/epoxy composites were assessed. Based on the Fourier transform infrared (FTIR) spectra, it was inferred that the 3-GPTS coupling agent was successfully grafted onto the surface of the SiO2 nanoparticles after modification. The flexural test revealed improvements of 16%, 14%, and 36% in stiffness, maximum load, and energy absorption, respectively, for the isogrid specimen filled with 3 wt.% 3-GPTS/SiO2 compared to the neat one. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. The 3-GPTS functionalization also had a positive effect on the flexural behavior of the multiscale isogrid composites. In conclusion, this study suggests that the addition of modified silica nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.
Keywords: isogrid-stiffened composite panels, silica nanoparticles, surface modification, flexural properties, energy absorption
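The flexural comparison rests on standard three-point bending relations. The sketch below computes flexural stress and flexural modulus from load-deflection data for a rectangular specimen; the span and cross-section values are assumed placeholders, since the panel geometry is not given in the abstract.

```python
# Standard three-point bending relations for a rectangular cross-section:
#   flexural stress   sigma = 3*F*L / (2*b*d**2)
#   flexural modulus  E_f   = L**3 * m / (4*b*d**3),  m = slope of load vs deflection
# Specimen dimensions below are illustrative placeholders, not the paper's values.

def flexural_stress(force_n: float, span_mm: float, width_mm: float, depth_mm: float) -> float:
    return 3.0 * force_n * span_mm / (2.0 * width_mm * depth_mm**2)  # MPa (N/mm²)

def flexural_modulus(slope_n_per_mm: float, span_mm: float, width_mm: float, depth_mm: float) -> float:
    return span_mm**3 * slope_n_per_mm / (4.0 * width_mm * depth_mm**3)  # MPa

if __name__ == "__main__":
    L, b, d = 100.0, 25.0, 4.0          # assumed span, width, depth in mm
    print(f"sigma_max = {flexural_stress(850.0, L, b, d):.1f} MPa")   # at an assumed peak load
    print(f"E_f       = {flexural_modulus(420.0, L, b, d):.0f} MPa")  # from an assumed slope
```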
Procedia PDF Downloads 248
1632 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution
Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit
Abstract:
Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and existing processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools that include specific thermodynamic models and intends to develop computational methodologies, such as molecular dynamics simulations, to gain insights into the interactions in such complex media, especially hydrogen bonding. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Predicting liquid-solid equilibrium properties in this case therefore requires sophisticated solution models that cannot be based solely on chemical group contributions, since mannitol and sorbitol share the same constitutive chemical groups. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes in aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties of each compound, shedding light on the interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics
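The hydrogen bond autocorrelation function mentioned in the abstract is commonly computed as C(t) = <h(0)·h(t)> / <h(0)>, where h is 1 when a given donor-acceptor pair is hydrogen-bonded and 0 otherwise. The sketch below evaluates this intermittent form from a 0/1 bond-existence matrix; in practice, the matrix would come from an MD trajectory and a geometric hydrogen-bond criterion, both of which are assumed (a synthetic random matrix stands in here).

```python
import numpy as np

def hbond_autocorrelation(h: np.ndarray, max_lag: int) -> np.ndarray:
    """Intermittent H-bond autocorrelation C(t) = <h(0) h(t)> / <h>.

    h: (n_frames, n_pairs) array of 0/1 flags indicating whether each
       donor-acceptor pair is hydrogen-bonded in each trajectory frame.
    """
    n_frames, _ = h.shape
    norm = h.mean()  # <h>
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = (h[: n_frames - lag] * h[lag:]).mean() / norm
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic random stand-in for MD output; real flags would come from a
    # trajectory plus a distance/angle hydrogen-bond criterion.
    h = (rng.random((5000, 200)) < 0.3).astype(float)
    c_t = hbond_autocorrelation(h, max_lag=50)
    # A slower decay of C(t) corresponds to longer-lived hydrogen bonds,
    # the quantity used to contrast sorbitol and mannitol hydration.
    print(c_t[:5])
```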
Procedia PDF Downloads 41
1631 Crop Losses, Produce Storage and Food Security, the Nexus: Attaining Sustainable Maize Production in Nigeria
Authors: Charles Iledun Oyewole, Harira Shuaib
Abstract:
While fulfilling the food security needs of an increasing population like Nigeria's remains a major global concern, more than one-third of the crop harvested is lost or wasted during harvesting or in postharvest operations. Reducing harvest and postharvest losses, especially in developing countries, could be a sustainable way to increase food availability, eliminate hunger and improve farmers' livelihoods. Nigeria is one of the countries in sub-Saharan Africa with insufficient food production and a high food import bill, which has had debilitating effects on the country's economy. One of the goals of Nigeria's agricultural development policy is to ensure that the nation produces enough food and becomes less dependent on importation, so as to ensure adequate and affordable food for all. Maize could fill the food gap in Nigeria's effort to beat hunger and food insecurity. Maize is the most important cereal after rice, and its production contributes immensely to food availability on the tables of many Nigerians. Maize grains constitute the primary source of food for a large percentage of the Nigerian populace; a considerable waste of this valuable food before and after harvest therefore constitutes a major agricultural bottleneck, and the reduction of pre- and post-harvest losses is now a common food security strategy. In surveys conducted, as much as 60% of maize output can be lost in the field and during storage due to technical inefficiency. Field losses due to rodent damage alone can account for between 10% and 60% of grain losses, depending on the location. While the use of scientific storage methods can reduce storage losses to below 2%, timely harvesting can check field losses resulting from rodent damage or pest infestation. A push for increased crop production must be complemented by available and affordable post-harvest technologies that reduce losses in farmers' fields as well as in storage.
Keywords: government policy, maize, population increase, storage, sustainable food production, yield, yield losses
Procedia PDF Downloads 135
1630 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data, such as the frequency of use of the machine and user preferences, and supply critical data for diagnostic processes and fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating temperature range is between -70°C and 850°C, while home appliance applications require only 23°C to 500°C. To ensure the operation of commercial sensors over this wide range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistance value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography step was performed with a direct laser writer, and the lift-off process was performed after e-beam evaporation of a 10 nm titanium layer and a 280 nm platinum layer. Standard four-point probe sheet resistance measurements were made at room temperature, and resistance was measured with a probe station before and after annealing at 600°C in a rapid thermal processing machine. The temperature dependence between 25°C and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
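A hedged sketch of the design arithmetic behind a thin-film platinum resistive element: the nominal resistance follows R = ρ·L/(W·t) for a track of total length L, width W and film thickness t, and the temperature response can be approximated with the Callendar-Van Dusen relation R(T) = R0(1 + A·T + B·T²) for T ≥ 0°C. Bulk-platinum resistivity and the IEC 60751 coefficients are used as generic reference values; thin films deviate from them, and the dimensions below are illustrative, not the paper's design.

```python
import math  # not needed for the formulas below, kept for easy extension

# Nominal resistance of a thin-film Pt track and its temperature response.
# Reference values: bulk Pt resistivity ~1.06e-7 ohm·m; Callendar-Van Dusen
# coefficients A, B from IEC 60751. Thin films deviate from bulk values, so
# these numbers give only a rough design estimate.

RHO_PT = 1.06e-7          # ohm·m, bulk platinum near room temperature
A = 3.9083e-3             # 1/°C
B = -5.775e-7             # 1/°C^2

def track_resistance(length_m: float, width_m: float, thickness_m: float) -> float:
    """R = rho * L / (W * t) for a rectangular thin-film track."""
    return RHO_PT * length_m / (width_m * thickness_m)

def resistance_at(temp_c: float, r0: float) -> float:
    """Callendar-Van Dusen relation, valid for T >= 0°C."""
    return r0 * (1.0 + A * temp_c + B * temp_c**2)

if __name__ == "__main__":
    # Illustrative meander: 20 mm total length, 20 µm wide, 280 nm thick Pt.
    r0 = track_resistance(20e-3, 20e-6, 280e-9)
    print(f"R(0°C)   ≈ {r0:.0f} ohm")
    print(f"R(300°C) ≈ {resistance_at(300.0, r0):.0f} ohm")
```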
Procedia PDF Downloads 71
1629 Examining E-learning Capability in Chinese Higher Education: A Case Study of Hong Kong
Authors: Elson Szeto
Abstract:
Over the past 15 years, digital technology has ubiquitously penetrated societies around the world. New values of e-learning are emerging in the preparation of future talents, and e-learning is a key driver of widening participation and knowledge transfer in Chinese higher education. In Hong Kong, a vibrant Chinese society in Asia, the new generation of university students, perhaps digital natives, have been learning with e-learning since their basic education. They can acquire new knowledge through different forms of e-learning as a generic competence, and they carry this competence into their higher education studies. This project reviews the Government's policy of Information Technology in Education, which has largely been put forward since 1998. So far, primary and secondary education have embraced the advantages of e-learning to advance the learning of different subject knowledge, yet e-learning capacity in higher education has not been fully examined in Hong Kong. The study reported in this paper is a pilot investigation into e-learning capacity in Chinese higher education in the region. By conducting a qualitative case study of Hong Kong, the investigation focuses on (1) the institutional ICT settings in general; (2) the pedagogic responses to e-learning in particular; and (3) the university students' satisfaction with e-learning. It is imperative to revisit e-learning capacity for promoting effective learning amongst university students, supporting new knowledge acquisition and embracing new opportunities in the 21st century. As a pilot case study, data will be collected from individual interviews with the e-learning management team members of a university, teachers who use e-learning for teaching, and students who attend courses comprising e-learning components. The findings show the e-learning capacity of the university and the key components of leveraging e-learning capability as a university-wide learning setting. The findings will inform institutions' senior management, enabling them to effectively enhance institutional e-learning capacity for effective learning and teaching and new knowledge acquisition. Policymakers will also become aware of the new potential of e-learning for the preparation of future talents in this society at large.
Keywords: capability, e-learning, higher education, student learning
Procedia PDF Downloads 273
1628 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is now widely used to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature within the material before loading is not uniform, yet simulations usually impose a constant temperature on the assumption that the temperature homogenizes after some holding time. Therefore, to stay close to the experiment, the real temperature distribution through the specimen is needed before mechanical loading. We present here a robust algorithm that calculates the temperature gradient within the specimen, thus representing the real temperature distribution before deformation. Indeed, most numerical simulations consider a uniform temperature field, which is not realistic because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, occur there and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to improve the mechanical properties of the final product, so the identification of the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique to predict its onset remains challenging. In this perspective, we propose, in addition to the algorithm that provides the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines, for comparison with a simulation in which an isothermal temperature is imposed. An artificial neural network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
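For readers unfamiliar with how the onset of dynamic recrystallization is usually expressed analytically, the sketch below evaluates a common literature criterion (not necessarily the authors' exact model): the Zener-Hollomon parameter Z = ε̇·exp(Q/RT) sets the peak strain through a power law ε_p = a·Z^m, and the critical strain for DRX initiation is taken as a fraction of it, ε_c ≈ k·ε_p. All material constants below are assumed placeholder values for a medium-carbon steel.

```python
import math

R_GAS = 8.314          # J/(mol·K)

# Placeholder material constants for a typical hot-worked steel (illustrative only).
Q_ACT = 300e3          # apparent activation energy, J/mol
A_P, M_P = 5e-3, 0.15  # peak-strain power law: eps_p = A_P * Z**M_P
K_C = 0.8              # eps_c ~ 0.8 * eps_p is a common assumption

def zener_hollomon(strain_rate: float, temp_k: float) -> float:
    """Z = eps_dot * exp(Q / (R*T))."""
    return strain_rate * math.exp(Q_ACT / (R_GAS * temp_k))

def critical_strain(strain_rate: float, temp_k: float) -> float:
    """Critical strain for the initiation of dynamic recrystallization."""
    eps_p = A_P * zener_hollomon(strain_rate, temp_k) ** M_P
    return K_C * eps_p

if __name__ == "__main__":
    # A hotter or slower deformation initiates DRX at a lower strain.
    for temp_k, rate in [(1323.15, 0.1), (1423.15, 0.1), (1323.15, 1.0)]:
        print(f"T = {temp_k - 273.15:.0f}°C, rate = {rate} 1/s -> eps_c ≈ "
              f"{critical_strain(rate, temp_k):.3f}")
```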
Procedia PDF Downloads 79