Search results for: Newcomb’s problem
626 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), risk factors for coronary heart disease include age (> 65 years), sex (a 2:1 male-to-female ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise and more. We collected a dataset of 421 patients from a hospital in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic patient data, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease and exercise habits, were collected and used as input variables. The output variable of the prediction model is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into an 80% training set and a 20% test set. Four machine learning algorithms, namely logistic regression, stepwise logistic regression, neural network and decision tree, were used to generate predictions. We used area under the curve (AUC) and accuracy (Acc.) to compare the four models. The best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with AUC / accuracy of 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression had a sensitivity of 18.2% and a specificity of 92.3%; the decision tree had a sensitivity of 13.6% and a specificity of 100%; logistic regression had a sensitivity of 27.3% and a specificity of 89.2%. Based on these results, we hope to improve accuracy in the future by tuning model parameters or other methods, and to address the low sensitivity by adjusting the imbalanced proportion of positive and negative data.
Keywords: decision support, computed tomography, coronary artery, machine learning
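As a purely illustrative sketch — the counts below are hypothetical, not the study's data or pipeline — the reported accuracy, sensitivity and specificity can be related to confusion-matrix counts as follows:

```python
# Illustrative only: relates reported metrics (accuracy, sensitivity,
# specificity) to confusion-matrix counts. All counts are hypothetical.

def confusion_metrics(tp, fn, tn, fp):
    """Return (accuracy, sensitivity, specificity) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fn + tn + fp)
    sens = tp / (tp + fn)   # recall on the stenosis-positive class
    spec = tn / (tn + fp)   # recall on the negative class
    return acc, sens, spec

# Hypothetical imbalanced test set: 22 positives, 65 negatives.
acc, sens, spec = confusion_metrics(tp=6, fn=16, tn=59, fp=6)
print(round(acc, 2), round(sens, 3), round(spec, 3))
```

With only 22 positives against 65 negatives, counts of this order reproduce the pattern seen in the abstract: high specificity but low sensitivity, which is exactly the class-imbalance problem the authors propose to address.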
Procedia PDF Downloads 229
625 Flipped Classroom in a European Public Health Program: The Need for Students' Self-Directness
Authors: Nynke de Jong, Inge G. P. Duimel-Peeters
Abstract:
The flipped classroom, an instructional strategy and a type of blended learning that reverses the traditional learning environment by delivering instructional content, off- and online, inside and outside the classroom, has been implemented in a 4-week module focusing on ageing in Europe at Maastricht University. The main aim in organizing this module was to implement flipped-classroom principles in order to create meaningful learning opportunities, with educational technologies used to deliver content outside the classroom. The technologies used in this module were an online interactive real-time lecture from England, two interactive face-to-face lectures with visual supports, one group session including role plays, and team-based learning meetings. The 2015-2016 cohort, which used these educational technologies, was compared on module evaluation items such as the organization and instructiveness of the module with the 2014-2015 cohort, which studied the same content but followed the problem-based learning strategy that is the educational basis of Maastricht University. The 2015-2016 cohort, with its specific organization, was also evaluated in more depth on outcomes such as (1) the duration of the lecture as experienced by students, (2) the experienced content of the lecture, (3) the experienced extent of interaction, and (4) the format of lecturing. It was important to know how students reflected on duration and content in light of their prior knowledge, in order to distinguish whether the module was sufficiently challenging or did not fit the course. For the evaluation, a structured online questionnaire was used, in which the above-mentioned topics were scored on a 4-point Likert scale.
At the end, there was room for narrative feedback so that respondents could describe in more detail, if they wished, what they found good or lacking in the module's content and its organization. The response rate of the evaluation was lower than expected (54%); however, given the written feedback and exam scores, we believe the evaluation gives a good and reliable overview that encourages further work. The response rate may be explained by the fact that resit students were included as well, and that there may be too much evaluation at some time points in the program. Overall, students were enthusiastic about the organization and content of the module, but the level of self-directed behavior necessary for this kind of educational strategy was too low. Students need more training in self-directedness; therefore, the module will be simplified in 2016-2017, with fewer and clearer topics and extra guidance (a step-by-step procedure). More specific information regarding the technologies used will be presented at the congress, as well as the outcomes (minimum and maximum rankings, mean and standard deviation).
Keywords: blended learning, flipped classroom, public health, self-directness
Procedia PDF Downloads 219
624 ISIS Women Recruitment in Spain and De-Radicalization Programs in Prisons
Authors: Inmaculada Yuste Martinez
Abstract:
Since July 5, 2014, when Abu Bakr al-Baghdadi, leader of the Islamic State since 2010, climbed the pulpit of the Great Mosque of al-Nuri in Mosul and proclaimed the Caliphate, the number of fighters who have travelled to Syria to join the Caliphate has increased as never before. Although the phenomenon of foreign fighters is not new, having occurred in the Spanish Civil War, among Irish Republicans and in the Balkan conflicts, among others, it is highly relevant that in this case it has reached figures unknown in Europe until now. The approval of Security Council Resolution 2178 (2014) on foreign terrorist fighters placed the subject in a priority position on the international agenda. The available data allow us to affirm that women, including minors and young adults, have increasingly assumed operative functions in jihadist terrorism and in the activities linked to the development of attacks in the European Union. In the case of Spain, one in four of those detained in 2016 were women, a significant increase compared to 2015. This contrasts with the fact that until 2014 no woman had been prosecuted in Spain for terrorist activities of a jihadist nature. When we talk about the prevention of radicalization and about counterterrorism, it is fundamental that we do not underestimate the potential threat that Western women joining the global jihadist movement can pose to the security of countries like Spain. This work aims to examine in depth the radicalization processes and profiles of these women within the female inmate population. It also focuses on the importance of creating de-radicalization programs for these inmates, since women are a crucial element in radicalization processes. Special focus is placed on the young radicalized female inmate population, as this target group is the most recoverable and the one for which intervention would be most fruitful.
De-radicalization programs must also be designed to fit their profiles and circumstances; sensitive environments will be prisons and juvenile centers, settings that until now had been unrelated to this problem and that are already hosting the first women convicted in Spanish territory. Qualitative research with an empirical and analytical method has been carried out in this work, focused on the Spanish cases of young women, on the imaginary that the Islamic State uses in its radicalization processes for this target group, and on how that imaginary does not match women's real role in the jihad, as opposed to other movements in which women do have a real and active role in armed conflict, such as the YPJ as part of the armed wing of the Democratic Union Party of Syria.
Keywords: caliphate, de-radicalization, foreign fighter, gender perspective, ISIS, jihadism, recruitment
Procedia PDF Downloads 171
623 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to the low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease and attrition rates among engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. Calculus 1 at Universidad ORT Uruguay aims to develop several competencies in Software Engineering students, such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE guidelines). Every semester we reflect on our practice and try to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were simultaneously enrolled in C1. Traditional lectures in this bridge course led students merely to transcribe notes from the blackboard. Last semester we instead proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found.
The weekly hours of mathematics were excessive for students and, as the course was non-compulsory, attendance decreased over time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate these preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 290
622 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research
Authors: V. Bezverbny
Abstract:
The relevance of the project stems from the steady growth of deviant behavior statistics among Moscow citizens. In recent years the socioeconomic well-being, wealth and life expectancy of Moscow residents have been improving steadily, yet crime and drug addiction have also grown seriously. Another serious problem in Moscow has been the economic stratification of the population: the cost of otherwise identical residential areas differs by a factor of 2.5. The project is aimed at comprehensive research and the development of a methodology for evaluating the main factors and reasons behind the growth of deviant behavior in Moscow. The main project objective is to find links between urban environment quality and the dynamics of citizens' deviant behavior at the regional and municipal levels, using statistical research methods and GIS modeling. The research conducted made it possible: 1) to evaluate the dynamics of deviant behavior in Moscow's different administrative districts; 2) to describe the reasons for the increase in crime, drug addiction, alcoholism and suicide among the city population; 3) to develop a classification of city districts based on crime rate; 4) to create a statistical database containing the main indicators of deviant behavior among the Moscow population in 2010-2015, including information on crime level, alcoholism, drug addiction and suicides; 5) to present statistical indicators characterizing the dynamics of deviant behavior as the city territory expands; 6) to analyze the main sociological theories and factors of deviant behavior in order to specify the types of deviation; 7) to consider the main theoretical statements of urban sociology devoted to the reasons for deviant behavior in megalopolis conditions. To explore the differentiation of deviant behavior factors, a questionnaire was developed and a sociological survey involving more than 1,000 people from different districts of the city was conducted.
The sociological survey made it possible to study the socio-economic and psychological factors of deviant behavior. It also included Moscow residents' open-ended answers regarding the most pressing problems in their districts and the reasons why they might wish to leave. The results of the survey lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, large numbers of illegal migrants and homeless people, the proximity of large transport hubs and stations, ineffective police work, the availability of alcohol and accessibility of drugs, a low level of psychological comfort for Moscow citizens, and a large number of construction projects.
Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification
Procedia PDF Downloads 193
621 Solar Power Generation in a Mining Town: A Case Study for Australia
Authors: Ryan Chalk, G. M. Shafiullah
Abstract:
Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature, and the mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by supplying emission-free energy which can supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site, considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and to produce results for integrating PV. Large-scale PV penetration in the network introduces technical challenges, including voltage deviation, increased harmonic distortion, increased available fault current and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, purpose-built inverters.
Results show that the use of inverters with harmonic filtering reduces harmonic injections to a level acceptable under Australian standards. Furthermore, configuring inverters to supply both active and reactive power helps mitigate low power factor problems. The use of FACTS devices (SVC and STATCOM) also reduces harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality
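The power-factor mitigation described above can be illustrated with a generic textbook calculation — not the study's PowerFactory model, and all load values are hypothetical — sizing the reactive power an inverter or STATCOM must inject to correct a feeder's power factor:

```python
import math

# Illustrative sketch (hypothetical values, not the case-study network):
# reactive power needed to move a load from one power factor to another.
def reactive_support_kvar(p_kw, pf_initial, pf_target):
    """kvar of compensation needed to raise p_kw from pf_initial to pf_target."""
    q_initial = p_kw * math.tan(math.acos(pf_initial))  # kvar drawn now
    q_target = p_kw * math.tan(math.acos(pf_target))    # kvar acceptable
    return q_initial - q_target

# Hypothetical 2 MW town load corrected from 0.80 to 0.95 lagging:
print(round(reactive_support_kvar(2000, 0.80, 0.95), 1))
```

The same relation underlies both inverter reactive-power dispatch and FACTS device sizing: only the source of the compensating kvar differs.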
Procedia PDF Downloads 166
620 Marginalized Children's Drawings Speak for Themselves: Self Advocacy for Protecting Their Rights
Authors: Bhavneet Bharti, Prahbhjot Malhi, Vandana Thakur
Abstract:
Introduction: Children of urban migrant laborers have great difficulty accessing government programs that are otherwise routinely available in rural settings, including programs for child care, nutrition, health and education. There are major communication fault-lines preventing advocacy for these marginalized children. The overarching aim of this study was to investigate the role of an innovative strategy, children's drawings, in supporting communication between children, social workers, pediatricians and other child advocates so as to fulfil the children's fundamental rights. Materials and Methods: The data was collected over a period of one year, April 2015 to April 2016, during routine visits by members of the Social Pediatrics team, including a social worker, pediatricians and an artist, to a makeshift colony of migrant laborers. Once a week a drawing session was organized in which the children, including adolescents, were asked to make any drawing they wished and to provide a narrative afterwards. Between 5 and 30 children attended these weekly sessions over the year. All the drawings were then classified into themes and exhibited on 16 April 2016 in the Government College of Art Museum; the forum was used to advocate the rights of these underprivileged children to the Secretary of Social Welfare. Results: The mean (SD) age of the children in the present observational study was 8.5 (2.5) years, and 60% were boys. The majority of children produced themes that were local and contextualized to their daily needs, threats and festivals, clearly underscoring their fundamental right to basic services and to equality of opportunity to achieve their full development. Drawings of a tap with flowing water and of queues of people collecting water from hand pumps reflect the local problem of water availability for these children. Young children speaking of fear of rape and murder when narrating their drawings indicate the looming threat of potential abuse and neglect.
Besides reality-driven drawings, children also echoed supernatural beliefs, dangers and festivities in their work. Anyone who watched these children at work with art materials could see their intense level of absorption, clearly indicating the enjoyment they received and making it a meaningful activity. Indeed, this self-advocacy through an art exhibition led to the successful establishment of a mobile Anganwadi (a social safety net programme of the government) in their area of stay. Conclusions: This observational study is an example of how children were able to advocate for themselves to protect their rights. Of particular importance, these drawings show how psychologists and other child advocates can ensure, in a child-centered manner, that the voice of children is heard and represented in all assessments of their well-being and future care options.
Keywords: child advocacy, children drawings, child rights, marginalized children
Procedia PDF Downloads 177
619 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique
Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi
Abstract:
The sand casting process involves pattern making, mould making, metal pouring and shake-out. Every step in the sand moulding process is critical for the production of good-quality castings. However, waste generated during the sand moulding operation and lack of quality cause performance inefficiencies and a lack of competitiveness in South African foundries. Defects produced in the sand moulding process become visible only in the final product (the casting), which results in increased scrap, reduced sales and increased costs in the foundry. The purpose of this research is to propose a Six Sigma (DMAIC: Define, Measure, Analyze, Improve and Control) intervention in sand moulding foundries in order to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six Sigma is a data-driven method of process improvement that aims to eliminate variation in business processes using statistical control methods. Six Sigma focuses on business performance improvement through quality initiatives using Ishikawa's seven basic tools of quality. The objectives of Six Sigma are to eliminate features that affect productivity, profit and the meeting of customers' demands, and it has become one of the most important techniques for attaining competitive advantage. For sand casting foundries in South Africa, competitive advantage means improved plant maintenance processes, improved product quality and proper utilization of resources, especially scarce resources. Using the Six Sigma technique, defects such as sand inclusions, flashes and sand burn-on were identified as resulting from sand moulding process inefficiencies. The causes were found to be wrong mould design, due to the pattern used, and poor ramming of the moulding sand in the foundry.
Six Sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters were achieved so as to ensure quality improvement in the foundry. The process capability of the sand moulding process was measured to understand current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity and competitive advantage.
Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)
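The process capability measurement mentioned above rests on the standard Cp/Cpk indices; a minimal sketch follows, with hypothetical sand-property values (e.g. a green compressive strength spec) rather than actual foundry data:

```python
# Illustrative "Measure"-phase sketch: process capability indices for a
# normally distributed sand characteristic against lower/upper spec limits.
# The mean, spread and limits below are hypothetical, not foundry data.

def process_capability(mean, std, lsl, usl):
    """Return (Cp, Cpk): potential vs. actual capability of the process."""
    cp = (usl - lsl) / (6 * std)                       # ignores centering
    cpk = min(usl - mean, mean - lsl) / (3 * std)      # penalizes off-center mean
    return cp, cpk

cp, cpk = process_capability(mean=14.2, std=0.9, lsl=12.0, usl=18.0)
print(round(cp, 2), round(cpk, 2))
```

A Cpk well below Cp, as in this hypothetical case, signals that the process is off-center within its specification band — the kind of finding that drives the "Improve" phase of DMAIC.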
Procedia PDF Downloads 195
618 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space can be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or to estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system, on the premise that a learned model can help find new failure scenarios, making better use of simulations. Even so, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples taken in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work is the development of a bidirectional multi-fidelity AST framework: an algorithm that uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
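A minimal sketch of the "knows what it knows" idea the framework relies on — a learner that returns a prediction only once it has enough evidence, and otherwise answers "unknown" so the query can be routed to a different-fidelity simulator. The grid world, class names and sample threshold below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative KWIK-style transition learner (hypothetical names/threshold):
# predicts a next state only when it is confident, else answers UNKNOWN so
# the caller can escalate the query to another simulation fidelity.

UNKNOWN = None

class KWIKTransitionLearner:
    def __init__(self, known_threshold=3):
        self.counts = {}   # (state, action) -> {next_state: observation count}
        self.known_threshold = known_threshold

    def observe(self, state, action, next_state):
        """Record one simulated transition."""
        outcomes = self.counts.setdefault((state, action), {})
        outcomes[next_state] = outcomes.get(next_state, 0) + 1

    def predict(self, state, action):
        """Most frequent observed next state, or UNKNOWN if under-sampled."""
        outcomes = self.counts.get((state, action), {})
        if sum(outcomes.values()) < self.known_threshold:
            return UNKNOWN          # caller escalates to another fidelity
        return max(outcomes, key=outcomes.get)

learner = KWIKTransitionLearner()
print(learner.predict((0, 0), "right"))   # under-sampled: prints None
for _ in range(3):
    learner.observe((0, 0), "right", (0, 1))
print(learner.predict((0, 0), "right"))   # prints (0, 1)
```

The "I don't know" answer is what makes bidirectional information passing possible: low-fidelity samples fill in the model cheaply, and only the state-action pairs the learner still marks UNKNOWN need expensive high-fidelity queries.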
Procedia PDF Downloads 157
617 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharide) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which releases reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin and acetyl groups and the crystallinity of cellulose, contribute to its recalcitrance, leading to poor sugar yields upon enzymatic hydrolysis. A pre-treatment method is therefore generally applied before enzymatic treatment; it removes the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, and a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments was also applied to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugar, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored.
Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results obtained were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment proved better in terms of total reducing sugar yield and low formation of inhibitory compounds, which may be because this mode of pre-treatment combines several mild treatment methods rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
616 No-Par Shares Working in European LLCs
Authors: Agnieszka P. Regiec
Abstract:
Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For some years, limited liability company (LLC) regulations in European countries have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy in intra-corporate governance. The reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; the American legal system is therefore chosen as the point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of legal science not only because the capital structure of a corporation has a significant impact on shareholders' rights, but also because it shapes the relationship between the company and its creditors. A multi-level comparative approach to the problem allows a wide range of possible outcomes stemming from the reforms to be presented. The dogmatic method was applied: the analysis was based on statutes, secondary sources and judicial awards, and both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped: a new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced, and the minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed, and in 2009 the minimum share capital of the société par actions simplifiée was also removed: no minimum share capital is required by statute, although the company has to indicate a share capital, without the legislator imposing a minimum value. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as the answer to the need for a higher degree of shareholder autonomy; shares with nominal value were, however, preserved. In Finland, the novelization of the yksityinen osakeyhtiö took place in 2006 and introduced no-par shares; despite the fact that the statute allows shares without face value, it still requires a minimum share capital of 2,500 euro. In Poland, a proposal for the restructuring of the capital structure of the LLC has been introduced, providing, among other things, for reduction of the minimum capital to 1 PLN or its complete elimination, allowing no-par shares to be issued. In conclusion: the American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as their American counterparts; and the existence of share capital in Poland remains crucial.
Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test
Procedia PDF Downloads 183
615 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for convergence together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition, and this can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals whose decisions affect one another; effective coordination among these decision-makers is critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and to guarantee the effectiveness of the selected mission concept thanks to its robustness and valuable changeability. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed. Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
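The search for Pareto equilibria among stakeholders described in this abstract can be illustrated with a minimal non-domination filter over candidate designs; the two-stakeholder utility values below are invented for illustration and are not drawn from the study.

```python
def pareto_front(candidates):
    """Return the non-dominated candidates, maximising every utility.

    A candidate is dominated if some other candidate is at least as good
    in every utility and strictly better in at least one.
    """
    front = []
    for i, a in enumerate(candidates):
        dominated = any(
            all(b[k] >= a[k] for k in range(len(a))) and
            any(b[k] > a[k] for k in range(len(a)))
            for j, b in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (stakeholder-1 utility, stakeholder-2 utility) pairs.
designs = [(0.9, 0.2), (0.6, 0.6), (0.4, 0.9), (0.5, 0.5), (0.3, 0.3)]
print(pareto_front(designs))  # → [(0.9, 0.2), (0.6, 0.6), (0.4, 0.9)]
```

In the methodology itself, an evolutionary algorithm would generate and score the candidates; the filter above only shows what "non-dominated trade opportunity" means.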
Procedia PDF Downloads 136
614 Effect of Reminiscence Therapy on the Sleep Quality of the Elderly Living in Nursing Homes
Authors: Güler Duru Aşiret
Abstract:
Introduction: Poor sleep quality is a common problem among older people living in nursing homes. Our study aimed at assessing the effect of individual reminiscence therapy on the sleep quality of the elderly living in nursing homes. Methods: The study had 22 people in the intervention group and 24 people in the control group. The intervention group had reminiscence therapy once a week for 12 weeks, in the form of individual sessions of 25-30 minutes. We first determined the dates suitable for the intervention group and the researcher, and planned the date and time of the individual reminiscence sessions over the 12 weeks. While preparing this schedule, we considered the subjects' schedules for regular visits to health facilities and the arrival of their visitors. At this stage, the researcher informed the participants that their regular attendance at sessions would affect the intervention outcome. One topic was discussed every week. Weekly topics included: introduction in the first week; then, in the following weeks respectively, childhood and family life, school days, starting work and work life (a day at home for housewives), a fun day out of home, marriage (friendship for the singles), plants and animals they loved, babies and children, food and cooking, holidays and travelling, special days and celebrations, and assessment and closure. The control group had no intervention. Study data were collected using an introductory information form and the Pittsburgh Sleep Quality Index (PSQI). Results: Participants' average age was 76.02 ± 7.31 years. 58.7% of them were male and 84.8% were single. All of them had at least one chronic disease. 76.1% did not need help performing their daily life activities. The length of stay in the institution was 6.32 ± 3.85 years. There was no difference between the groups in terms of descriptive characteristics.
While there was no statistically significant difference between the pretest PSQI median scores of the two groups (p > 0.05), the PSQI median score showed a statistically significant decrease after 12 weeks of reminiscence therapy (p < 0.05). There was no statistically significant change in the median scores of the subcomponents of sleep latency, sleep duration, sleep efficiency, sleep disturbance and use of sleep medication before and after reminiscence therapy. After the 12-week reminiscence therapy, there was a statistically significant change in the median score for the PSQI subcomponent of subjective sleep quality (p < 0.05). Conclusion: Our study found that reminiscence therapy increased the sleep quality of the elderly living in nursing homes. Acknowledgment: This study (project no 2017-037) was supported by the Scientific Research Projects Coordination Unit of Aksaray University. We thank the elderly subjects for their kind participation. Keywords: nursing, older people, reminiscence therapy, sleep
Procedia PDF Downloads 131
613 Disclosure on Adherence of the King Code's Audit Committee Guidance: Cluster Analyses to Determine Strengths and Weaknesses
Authors: Philna Coetzee, Clara Msiza
Abstract:
In modern society, audit committees are seen as the custodians of accountability and the conscience of management and the board. But who holds the audit committee accountable for their actions or non-actions, and how do we know what they are supposed to be doing and what they are actually doing? The purpose of this article is to provide greater insight into the latter part of this problem, namely, to determine what the best practices for audit committees are and what the disclosed realities are. In countries where governance is well established, the roles and responsibilities of the audit committee are mostly clearly guided by legislation and/or guidance documents, with countries increasingly providing guidance on this topic. Given the high costs involved in adhering to governance guidelines, the public (for public organisations) and shareholders (for private organisations) expect to see the value of their 'investment'. For audit committees, the dividends on the investment should be reflected in fewer fraudulent activities, less corruption, higher efficiency and effectiveness, improved social and environmental impact, and increased profits, to name a few. If this is not the case (as reflected in the number of fraudulent activities in both the private and the public sector), stakeholders have the right to ask: where was the audit committee? Therefore, the objective of this article is to contribute to the body of knowledge by comparing the adherence of audit committees to the best practice guidelines stipulated in the King Report across publicly listed companies, national and provincial government departments, state-owned enterprises and local municipalities. After constructs were formed, based on the literature, factor analyses were conducted to reduce the number of variables in each construct.
Thereafter, cluster analysis, an exploratory technique that classifies a set of objects so that more similar objects are grouped together, was conducted. The SPSS TwoStep Clustering Component was used, as it is capable of handling both continuous and categorical variables. In the first step, a pre-clustering procedure groups the objects into small sub-clusters, after which these sub-clusters are clustered into the desired number of clusters. The cluster analyses were conducted for each construct, and the outcome measure, namely the audit opinion as listed in the external audit report, was included. Analysing the information of 228 organisations, the results indicate a clear distinction between the four spheres of business included in the analyses, pointing to certain strengths and weaknesses within each sphere. The results may give the overseers of audit committees insight into where a specific sector's strengths and weaknesses lie, and audit committee chairs will be able to improve the areas where their committee is lagging behind. The strengthening of audit committees should result in improved accountability of boards, leading to less fraud and corruption. Keywords: audit committee disclosure, cluster analyses, governance best practices, strengths and weaknesses
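The two-step procedure described above — pre-cluster the objects into small sub-clusters, then merge sub-clusters into the desired number of clusters — can be sketched on a single numeric variable. This is a deliberately simplified stand-in for the SPSS TwoStep component, with made-up scores and an arbitrary tolerance parameter:

```python
def precluster(values, tol):
    """Step 1: greedily group sorted values into small sub-clusters
    whose consecutive members differ by at most tol."""
    subs = []
    for v in sorted(values):
        if subs and v - subs[-1][-1] <= tol:
            subs[-1].append(v)
        else:
            subs.append([v])
    return subs

def merge(subs, k):
    """Step 2: merge adjacent sub-clusters with the closest means
    until only k clusters remain (values are kept sorted)."""
    def centre(c):
        return sum(c) / len(c)
    clusters = [list(s) for s in subs]
    while len(clusters) > k:
        i = min(range(len(clusters) - 1),
                key=lambda i: centre(clusters[i + 1]) - centre(clusters[i]))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
    return clusters

scores = [1.0, 1.2, 1.1, 5.0, 5.3, 9.8, 10.1]
print(merge(precluster(scores, 0.5), 3))
# → [[1.0, 1.1, 1.2], [5.0, 5.3], [9.8, 10.1]]
```

The real component additionally handles categorical variables via a log-likelihood distance; the sketch only conveys the coarse-then-fine clustering structure.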
Procedia PDF Downloads 167
612 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation.
The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution improves on the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method. Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
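As a toy illustration of one-class anomaly detection on modal frequencies — a simple mean ± 3σ envelope of the kind a coarse boundary-estimation step might use, not the paper's OCCNN2 — consider the following sketch; the frequency values are invented for illustration:

```python
from statistics import mean, stdev

def fit_boundary(train):
    """Learn a coarse one-class boundary (mean ± 3 sigma) from
    frequencies observed under normal operating conditions."""
    m, s = mean(train), stdev(train)
    return (m - 3 * s, m + 3 * s)

def detect(bounds, x):
    """True if x falls outside the learned normal-condition boundary."""
    lo, hi = bounds
    return not (lo <= x <= hi)

# Hypothetical tracked fundamental frequencies (Hz), normal conditions.
normal = [3.85, 3.9, 3.88, 3.92, 3.87, 3.91, 3.89, 3.86]
bounds = fit_boundary(normal)

# Test points: two normal, two simulating a damaged-condition shift.
flags = [detect(bounds, x) for x in [3.9, 3.88, 3.2, 4.6]]
print(flags)  # → [False, False, True, True]
```

A fine estimation step, as in the paper, would then refine this crude envelope with a trained classifier.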
Procedia PDF Downloads 123
611 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India
Authors: Alok Saini, Rajkumar Ghosh
Abstract:
Water is nature's most important resource for the survival of all living things, yet freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km.) will harvest about 91110 ML of rainwater by 2051 (assuming the present annual rainfall remains constant), whereas only 17.71 ML of rainwater was harvested, from just 53 buildings, in the year 2021. The water level in Gomti Nagar will rise by 13 cm from such groundwater recharge. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and lower annual rainfall, groundwater recharge is less than groundwater abstraction. In the current scenario, only 0.07% of rainwater recharges groundwater through RTRWHs in Gomti Nagar; if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table. Gomti Nagar is situated in 'Zone–A' (water distribution area), and groundwater is the primary source of freshwater supply. The difference between groundwater abstraction and recharge in Gomti Nagar will reach 735570 ML in 30 years. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater through RTRWHs annually, while the most recent monsoonal recharge was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and for residential kitchens and gardens (home-grown fruit and vegetables). According to the bylaws, RTRWH installation is required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and households harvesting rainwater through RTRWHs can be considered water self-sufficient.
Rooftop Rainwater Harvesting Systems (RTRWHs) are an inexpensive, eco-friendly, and highly sustainable alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, provided all buildings install RTRWHs and harvest rainwater for groundwater recharging. As a result, the current study serves as an impact assessment of RTRWH implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq. km.). The study suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households, so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to reverse the declining water level trend and balance the groundwater table in this area; continued over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks. Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system (RTRWHs), water table, water level
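The 2051 projection follows directly from the annual harvest figure quoted in the abstract; a quick arithmetic check (both inputs are the abstract's own figures):

```python
# Annual harvest if all buildings in Gomti Nagar installed RTRWHs (from the abstract).
annual_harvest_ml = 3037   # ML per year
years = 30                 # 2021 -> 2051 horizon

total_ml = annual_harvest_ml * years
print(total_ml)  # → 91110, matching the ~91110 ML projected to 2051
```
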
Procedia PDF Downloads 97
610 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites
Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan
Abstract:
All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution that causes a change in momentum, which can manifest in spinning and non-spinning satellites in different manners. This problem can cause orbital decay in satellites, which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when a panel is opened, that particular side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the onboard computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees.
For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. Another case is when one of the corners of the CubeSat faces the Sun, so that more than one side has a considerable amount of sunlight incident on it; the code will then compute the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling, which is the most suitable approach for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive option is to make the whole chassis act as a heat sink: the entire chassis is made out of heat pipes, and the heat source is connected to the chassis with a thermal strap that transfers the heat into it. Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite
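The open/close decision the onboard code makes can be sketched as a dot-product test between the Sun direction and each panel's outward normal; this is an illustrative simplification of the actual control logic, with hypothetical panel names, body-frame vectors, and a zero threshold:

```python
def panel_commands(sun, normals, threshold=0.0):
    """Open (deploy as radiator) panels facing away from the Sun;
    keep Sun-facing panels closed so their outer solar cells generate power.

    sun: unit vector toward the Sun in the body frame.
    normals: outward unit normal of each panel, keyed by panel name.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return {name: ("open" if dot(sun, n) < threshold else "closed")
            for name, n in normals.items()}

# Hypothetical configuration: Sun along the body +X axis.
sun = (1.0, 0.0, 0.0)
normals = {"+X": (1, 0, 0), "-X": (-1, 0, 0),
           "+Y": (0, 1, 0), "-Y": (0, -1, 0)}
print(panel_commands(sun, normals))
# → {'+X': 'closed', '-X': 'open', '+Y': 'closed', '-Y': 'closed'}
```

The flight code described in the abstract additionally chooses a continuous opening angle per panel; the binary sketch only shows the underlying geometric test.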
Procedia PDF Downloads 100
609 Study of Polish and Ukrainian Volunteers Helping War Refugees. Psychological and Motivational Conditions of Coping with Stress of Volunteer Activity
Authors: Agata Chudzicka-Czupała, Nadiya Hapon, Liudmyla Karamushka, Marta żywiołek-Szeja
Abstract:
Objectives: The study concerns the determinants of coping with the stress of volunteer activity for refugees of the 2022 Russo-Ukrainian war. We examined the mental health reactions, selected psychological traits, and motivational functions of volunteers working in Poland and Ukraine in relation to their styles of coping with stress. The study was financed with funds from the Foundation for Polish Science in the framework of the FOR UKRAINE Programme. Material and Method: The study, conducted in 2022, was a quantitative, questionnaire-based survey, with data collected through an online survey. The volunteers were asked to assess their propensity to use different styles of coping with the stress connected with their activity for war refugees using The Brief Coping Orientation to Problems Experienced Inventory (Brief-COPE) questionnaire. Depression, anxiety, and stress were measured using the Depression, Anxiety, and Stress (DASS)-21 item scale. The selected psychological traits, psychological capital and hardiness, were assessed by The Psychological Capital Questionnaire and The Norwegian Revised Scale of Hardiness (DRS-15R), and The Volunteer Function Inventory (VFI) was then used. The significance of differences between the sample means was tested with Student's t-test, and multivariate linear regression was used to identify factors associated with coping styles separately for each national sample. Results: The sample consisted of 720 volunteers helping war refugees (435 people in Poland and 285 in Ukraine). The results of the regression analysis indicate variables that are significant predictors of the propensity to use particular coping styles (problem-focused, emotion-focused, and avoidant coping): levels of depression and stress, individual psychological characteristics, and motivational functions, which differ between Polish and Ukrainian volunteers.
Ukrainian volunteers are significantly more likely to use all three styles of coping with stress than Polish ones. The results also show significant differences in the severity of anxiety, stress and depression, in the selected psychological traits, and in the motivational functions that led volunteers to participate in activities for war refugees. Conclusions: The results show that depression and stress severity, as well as psychological capital, hardiness, and motivational factors, are connected with coping behavior. They indicate the need for increased attention to the well-being of volunteers acting under stressful conditions, and the necessity of guiding the selection of people for specific types of volunteer work. Keywords: anxiety, coping with stress styles, depression, hardiness, mental health, motivational functions, psychological capital, resilience, stress, war, volunteer, civil society
Procedia PDF Downloads 71
608 Simple and Effective Method of Lubrication and Wear Protection
Authors: Buddha Ratna Shrestha, Jimmy Faivre, Xavier Banquy
Abstract:
By precisely controlling the molecular interactions between anti-wear macromolecules and bottle-brush lubricating molecules in the solution state, we obtained a fluid with excellent lubricating and wear protection capabilities. The reason for this synergistic behavior lies in the subtle interaction forces between the fluid components, which allow the confined macromolecules to sustain high loads under shear without rupture. Our results provide rational guides to design such fluids for virtually any type of surface. The lowest friction coefficient obtained is 5*10-3, and the maximum pressure the fluid can sustain is 2.5 MPa, which is close to the physiological pressure. Lubricating and protecting surfaces against wear using liquid lubricants is a great technological challenge. Until now, wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface, while lubrication was provided by a lubricating fluid. Hence, we here searched for a simple, effective and applicable solution to this problem using the surface force apparatus (SFA). The SFA is a powerful technique with sub-angstrom resolution in distance and 10 nN/m resolution in interaction force while performing friction experiments; it thus gives direct insight into the interaction forces, materials and friction at the interface, and the exact contact area is always known.
Most importantly, this is a simple, effective and applicable method of lubrication and protection, since until now wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface. The frictional data obtained while sliding flat mica surfaces were compared and confirmed that a particular solution mixture surpassed all other combinations. Further, we would like to confirm that the lubrication and anti-wear protection remain the same by performing the friction experiments on synthetic cartilage. Keywords: bottle brush polymer, hyaluronic acid, lubrication, tribology
Procedia PDF Downloads 264
607 Effect of Tooth Bleaching Agents on Enamel Demineralisation
Authors: Najlaa Yousef Qusti, Steven J. Brookes, Paul A. Brunton
Abstract:
Background: Tooth discoloration can be an aesthetic problem, and tooth whitening using carbamide peroxide bleaching agents is a popular treatment option. However, there are concerns about possible adverse effects such as demineralisation of the bleached enamel, and the cause of this demineralisation is unclear. Introduction: Teeth can become stained or discoloured over time, and tooth whitening is an aesthetic solution for tooth discoloration. Bleaching solutions of 10% carbamide peroxide (CP) have become the standard agent used in dentist-prescribed and home-applied 'vital bleaching' techniques. These materials release hydrogen peroxide (H₂O₂), the active whitening agent. However, there is controversy in the literature regarding the effect of bleaching agents on enamel integrity and enamel mineral content. The purpose of this study was to establish whether carbamide peroxide bleaching agents affect the acid solubility of enamel (i.e., make teeth more prone to demineralisation). Materials and Methods: Twelve human premolar teeth were sectioned longitudinally along the midline and varnished to leave the natural enamel surface exposed. The baseline behaviour of each tooth half with respect to demineralisation in acid was established by sequential exposure to 4 vials containing 1 ml of 10 mM acetic acid (1 minute/vial). This was followed by exposure to 10% CP for 8 hours. After washing in distilled water, the tooth half was sequentially exposed to 4 further vials containing acid to test whether the acid susceptibility of the enamel had been affected. The corresponding tooth half acted as a control and was exposed to distilled water instead of CP. Mineral loss was determined by measuring the [Ca²⁺] and [PO₄³⁻] released into each vial using a calcium ion-selective electrode and the phosphomolybdenum blue method, respectively. The effect of bleaching on the tooth surfaces was also examined using SEM.
Results: Exposure to carbamide peroxide did not significantly alter the susceptibility of enamel to acid attack, and SEM of the enamel surface revealed only a slight alteration in surface appearance. SEM images of the control enamel surface showed a flat enamel surface with some shallow pits, whereas the bleached enamel showed increased surface porosity and some areas of mild erosion. Conclusions: Exposure to H₂O₂ equivalent to 10% CP does not significantly increase the subsequent acid susceptibility of enamel as determined by Ca²⁺ release from the enamel surface. The effects of bleaching on mineral loss were indistinguishable from those of distilled water in the experimental system used, although some surface differences were observed by SEM. The phosphomolybdenum blue method for phosphate is compromised by peroxide bleaching agents due to their oxidising properties; the Ca²⁺ electrode, however, is unaffected by oxidising agents and can be used to determine mineral loss in the presence of peroxides. Keywords: bleaching, carbamide peroxide, demineralisation, teeth whitening
Procedia PDF Downloads 127
606 The Impact of Universal Design for Learning Implementation on Teaching Practices for Students with Intellectual Disabilities in the Kingdom of Saudi Arabia
Authors: Adnan Alhazmi
Abstract:
Background: UDL can be understood as a framework that holds the potential to elaborate alternatives and platforms for students with intellectual disabilities within general education settings, and it aims at offering flexible pathways that can support all students in mastering the goals of learning. This system of learning addresses the problem of learner variability by delineating the diverse ways in which individuals understand, conceive, express and deal with information. Goal: The aim of the proposed research is to examine the impact of the implementation of UDL on teaching practices for students with intellectual disabilities in Saudi Arabian schools. Method: This research used a combination of quantitative and qualitative designs. Survey questionnaires were used to gather the data under the analytical descriptive method, and the qualitative interpretive approach was applied, using interviews to gather a detailed understanding of the research aim; for this purpose, semi-structured interviews were conducted. Thus, primary data were gathered with the help of the survey and interviews to examine the impact of UDL implementation on teaching practices for intellectually disabled students in Saudi Arabian schools. The survey examined the prevailing teaching practices for students with intellectual disabilities in Saudi Arabia and evaluated whether teaching experience influences current practices. The surveys were distributed to 50 teachers who teach students with intellectual disabilities. The interviews, conducted with 10 teachers teaching the same subject, explored the barriers to implementing UDL in Saudi Arabia and provided suggested guidelines for its implementation.
Findings: A key finding highlighted in this study is that the UDL framework serves as a crucial guide for teachers within inclusive settings to undertake meaningful planning for individuals with intellectual disabilities so that they are able to access, participate, and grow within the general education curriculum. Other findings highlighted the need to prepare educators and all faculty members to understand the purpose of and need for inclusion and the UDL framework, so that better information about academic and social expectations for individuals with intellectual disabilities can be delivered. Conclusion: On the basis of the preliminary study undertaken on the subject, it can be suggested that UDL is an effective support for the meaningful inclusion of students with intellectual disability (ID) in general educational settings, and that it can serve as an instructional design framework for designing curricula for students with intellectual disabilities. Keywords: intellectual disability, inclusion, universal design for learning, teaching practice
Procedia PDF Downloads 139
605 Effect of E-Governance and E-Learning Platform on Access to University Education by Public Servants in Nigeria
Authors: Nwamaka Patricia Ibeme, Musa Zakari
Abstract:
E-learning is made more effective because it enables students to easily interact, share, and collaborate across time and space with the help of an e-governance platform. Zoom and Microsoft Teams classrooms can invite students from all around the world to join a conversation on a certain subject simultaneously. E-governance may also support work on problem-solving skills, as well as brainstorming and developing ideas; as a result of the shared experiences and knowledge, students are able to express themselves and reflect on their own learning. E-governance facilities provide greater opportunity for students to build critical (higher-order) thinking abilities through constructive learning methods, and students' critical thinking abilities may improve with more time spent in an online classroom. Students' inventiveness can be enhanced through computer-based instruction, as they discover multimedia tools and produce products in the styles readily available through games, compact discs, and television. In developed countries, the use of e-learning has increased both teaching and learning quality by combining student autonomy, capacity, and creativity over time. Teachers are catalysts for the integration of technology through Information and Communication Technology, and e-learning supports teaching by simplifying access to course content. Creating an Information and Communication Technology class will be much easier if educational institutions provide teachers with the assistance, equipment, and resources they need. The study adopted a survey research design. The populations of the study were students and staff, and a simple random sampling technique was adopted to select a representative sample. Both primary and secondary methods of data collection were used to obtain the data, and a chi-square statistical technique was used to analyse them.
Findings from the study revealed that e-learning has increased access to university education by public servants in Nigeria. Public servants in Nigeria have utilized e-learning and Online Distance Learning (ODL) programmes to enrol in various degree programmes. Findings also show that e-learning plays an important role in teaching because it is oriented toward the use of information and communication technologies that have become a part of everyday life and day-to-day business. E-learning complements traditional teaching methods and provides many advantages to society and citizens. The study recommends that e-learning tools and internet facilities be upgraded to overcome network challenges in the online facilitation and lecture delivery system.
Keywords: e-governance, e-learning, online distance learning, university education, public servants, Nigeria
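As an illustration of the chi-square analysis the study describes, the sketch below tests for an association between e-learning usage and access to a degree programme. The contingency counts are hypothetical, since the abstract does not report the underlying data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = used e-learning (yes/no), columns = gained access (yes/no).
observed = [[120, 30],
            [40, 60]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
if p < 0.05:
    print("Reject the null hypothesis: usage and access are associated.")
```

With real survey data, the same call would be made on the observed response counts; `chi2_contingency` applies Yates' continuity correction by default for 2×2 tables.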
Procedia PDF Downloads 69604 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector
Authors: Dewan Ahsan
Abstract:
Denmark is well ahead in generating electricity from renewable sources, and the offshore wind sector plays a pivotal role in achieving this target. Despite the rapid growth of the Danish offshore wind sector, there is still a lack of harmonization in OHS (occupational health and safety) regulation and standards. Therefore, this paper attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. The study has identified several key challenges in the OHS management system: gaps in coordination and communication among the stakeholders, gaps in incident reporting systems, the absence of a harmonized OHS standard, and a blame culture. Furthermore, the research has identified eleven key stakeholders who are actively involved with the offshore wind business in Denmark. The relationships among these stakeholders are very complex, especially between operators and sub-contractors. The respondent technicians are concerned about complying with the various third-party OHS standards (e.g., ISO 31000, ISO 29400, and the Good Practice Guidelines by G+) applied by different offshore companies. On top of these standards, operators also impose their own OHS standards. From the technicians' point of view, many of these standards are not even specific to the offshore wind sector. It is therefore a major challenge for technicians and sub-contractors to comply with different company-specific standards, which also raises the price of the services they offer to the operators. For instance, when a sub-contractor competes in a bidding process, it must fulfill a number of OHS requirements (demanding much extra documentation) set by the individual operator and/or the turbine supplier.
From the sub-contractors' point of view, this extra work consumes too much time in preparing the bidding documents, and they also need to train their employees to pass specific OHS certification courses to meet the demands of individual clients and individual projects. The sub-contractors argued that in many cases this extra documentation and these OHS certificates are inessential to ensuring service quality, so a standardized OHS management procedure (applicable to all clients) could easily solve the problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable to all operators and turbine suppliers, ii) encouragement of technicians' active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among the stakeholders (especially between operators and sub-contractors) are the most vital strategies to overcome the existing challenges and to achieve the goal of 'zero accident/harm' in the offshore wind industry.
Keywords: green energy, offshore, safety, Denmark
Procedia PDF Downloads 214603 Bacteriophage Is a Novel Solution of Therapy Against S. aureus Having Multiple Drug Resistance
Authors: Sanjay Shukla, A. Nayak, R. K. Sharma, A. P. Singh, S. P. Tiwari
Abstract:
Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative; thus, alternative treatment is necessary. Phage therapy is considered one of the most promising approaches to treat multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus are very efficiently controlled with phage cocktails containing different individual phage lysates that together infect a majority of known pathogenic S. aureus strains. The aim of the present study was to evaluate the efficacy of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms and subjected to bacteriophage isolation by the double agar layer method. Of the 150 sewage samples, 27 showed plaque formation through lytic activity against S. aureus in the double agar overlay method. In TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage (ØVS) had icosahedral symmetry with a head 52.20 nm in diameter and a long tail of 109 nm; the head and tail were held together by a connector, and the phage can be classified as a member of the family Myoviridae under the order Caudovirales. The recovered bacteriophages showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9, and ØVS27) was tested for in vivo antibacterial activity as well as its safety profile. The results of the mouse experiment indicated that the bacteriophage lysates were very safe, showing no abscess formation, which indicates their safety in a living system. The mice were also prophylactically protected against S. aureus when administered the cocktail of bacteriophage lysates just before the administration of S. aureus, which indicates that the lysates are a good prophylactic agent. The S. aureus-inoculated mice were completely cured by bacteriophage administration, with 100% recovery, which compares very favorably with conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate, and these cases were followed up regularly for ten days (at 0, 5, and 10 d). Six of the ten cases showed complete recovery of the wounds within 10 d. The efficacy of bacteriophage therapy was thus 60%, which is very good compared to conventional antibiotic therapy for chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of septic chronic wounds.
Keywords: phage therapy, S. aureus, antimicrobial resistance, lytic phage, bacteriophage
Procedia PDF Downloads 117602 The Affordable Housing Problems of Elderly Households in the Istanbul Metropolitan Area
Authors: Elifsu Sahin
Abstract:
In the world and in Turkey, approximately 1 in 10 people is 65 years of age or older, and the 65-and-over age group has been the fastest-growing age group since 1990. This demographic aging trend and demographic transformation spread over a long period in Western Europe and North America, while in Turkey they have occurred over a relatively short period. The aging of the population poses many challenges in terms of housing supply, housing satisfaction, and economic access to housing, due to factors such as a decrease in household size, low incomes, and increased time spent in housing and residential neighborhoods. On the other hand, since 2000, neoliberal economic policies and government policies have led to serious growth in the construction and housing sectors in Turkey. During this process, the housing market in Turkey has generally produced housing for high-income groups and foreigners. Housing has become an investment instrument, and rising housing prices and rents have seriously reduced both the affordability of housing and households' chances of living in healthy housing. Housing has become a growing problem for vulnerable groups such as low- and middle-income households, students, refugees, and the elderly. Moreover, in recent years, international migration, the pandemic, economic crises, inflation, and the expected Istanbul earthquake have raised housing prices and rents across Turkey, especially in Istanbul. The aim of the study is to investigate how elderly households that do not own homes deal with the economic accessibility of housing and other affordability-related housing problems in the Istanbul Metropolitan Area today, when housing has become an investment instrument, social housing is not on the agenda, and households enter the market only according to their ability to pay. A mixed-methods approach was adopted, combining various statistical data with interview findings.
Based on household income, in-depth interviews were conducted with 100 randomly selected elderly households that do not own their own homes, in neighborhoods identified through a micro-area analysis of the districts of the Istanbul Metropolitan Area where middle- and low-income households are concentrated. The study found that more than 50% of the net income of elderly households was spent on rent and other housing expenses. Some households said that they restrict spending on food, health, and entertainment because of their housing expenses. Among the findings of the study is that households receive financial support from their children or move into their children's house for economic reasons; due to the decrease in household income, especially after the loss of a spouse, the surviving individual often moves in with their children. Moreover, some of the interviewed households had to change their house and move to a smaller, lower-rent house on the urban periphery for economic reasons after retirement, especially after 2020, despite their unwillingness.
Keywords: affordable housing, elderly households, housing policy, Istanbul Metropolitan Area
Procedia PDF Downloads 34601 Fast-Tracking University Education for Youth Employment: Empirical Evidence from University Graduates in Rwanda
Authors: Fred Alinda, Marjorie Negesa, Gerald Karyeija
Abstract:
As elsewhere in the world, youth unemployment remains a big problem in Rwanda, more so for the most educated youth and for females. In Rwanda, unemployment is estimated at 13.2% among youth graduates, compared to 10.9% and 2.6% among secondary and primary graduates, respectively. Though empirical evidence elsewhere associates youth unemployment with education level, relevance of skills, and access to business support opportunities, the evidence on the significance of these factors for youth employment remains mixed. As youth employment strategies in countries like Rwanda continue to recognize the potential role university education can play in enhancing employment, there is a need to understand the catalysts and barriers. This paper therefore draws empirical evidence from a survey on the influence of education qualification, skills relevance, and access to business support opportunities on the employment of young university graduates in Masaka sector, Rwanda. The analysis tested four hypotheses: access to university education significantly affects youth employment; relevance of university education significantly contributes to youth employment; access to business support opportunities significantly contributes to youth employment; and significant gender differences exist in the employment of young university graduates. A cross-sectional survey was used in view of the need to explore the prevailing status of youth employment and its contributing factors across the sector. A questionnaire was used to collect data on a large sample of 269 youth to allow statistical analysis, complemented by the qualitative views of leaders and technical officials in the sector. The young university graduates were selected using simple random sampling, while the leaders and technical officials were selected purposively. Percentages were used to describe respondents in line with the variables under study, while a regression model for youth employment was fitted to determine the significant factors.
The model results indicated a significant influence (p<0.05) of gender, education level, and access to business support opportunities on the employment of young university graduates. This finding was also affirmed by the qualitative views of key informants, which pointed to the fact that university education generally equipped the youth with skills that enabled their transition into employment, mainly for a salary or wage. The skills were, however, deficient in technical and practical aspects. In addition, the youth generally had limited access to business support opportunities, particularly loan guarantees, business advisory services, grants for business, and training in business skills that would help them gain salaried employment or transition into self-employment. The study findings bear an implication for the strategy of catalyzing youth employment through university education: university education should be embraced, but with greater emphasis on, or supplementation with, specialized training in practical and technical skills, as well as extending business support opportunities to the youth. This will accelerate the contribution of university education to youth employment.
Keywords: education, employment, self-employment, youth
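A regression model of the kind the study fitted can be sketched as a logistic regression of employment status on the surveyed factors. The data below are simulated (the abstract does not publish the raw survey responses), with the same sample size of 269 and binary/ordinal codings chosen for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 269  # matches the survey sample size

# Hypothetical predictors: gender (0/1), education level (1-3),
# access to business support opportunities (0/1).
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(1, 4, n),
    rng.integers(0, 2, n),
])
# Simulate employment (1 = employed) with positive effects of
# education and business support, as the findings suggest.
logits = -2.0 + 0.4 * X[:, 0] + 0.8 * X[:, 1] + 1.1 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_[0])
print("odds ratios:", np.exp(model.coef_[0]))
```

On real data, the exponentiated coefficients give the change in the odds of employment per unit change in each factor, and significance at p<0.05 would be assessed from the coefficient standard errors (e.g., with a statistics package that reports them).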
Procedia PDF Downloads 256600 Integration of Entrepreneurial Mindset Learning in Green Chemistry and Processes Course
Authors: Tsvetanka Filipova
Abstract:
Entrepreneurial mindset learning (EML) is the combined process of instilling curiosity and invention, developing insight and value creation while building on other active pedagogy, such as project-based learning (PBL). It is essential to introduce students to chemistry and chemical engineering entrepreneurship in a manner that gives a holistic approach by first educating students on diverse entrepreneurial skills and then providing an opportunity to build their innovation. Chemistry and chemical engineering students have an opportunity to be engaged in an entrepreneurial class project in the Green Chemistry and Processes course at South Dakota Mines. The course provides future chemists and chemical engineers with the knowledge and skills required to enable them to design materials and processes in an environmentally benign way. This paper presents findings from implementing an open-ended design project in the Green Chemistry and Processes course. The goal of this team project is to have student teams design sustainable polymer materials to fulfill a need and/or opportunity related to a fictitious aerospace company that satisfies technical, safety, environmental, regulatory, economic, and social needs. Each student team is considered a start-up company charged with the task of designing sustainable polymer materials for aerospace applications. Through their work on the project, students utilize systems and entrepreneurial thinking in selecting their design project, being aware of the existent technologies (literature and patent search) and users and clients (connections), determining the goals and motivations (creating value), and what need or problem they are trying to address (curiosity). The project draws systems boundaries by focusing on student exploration of feedstocks to end-of-life of polymeric materials and products. 
Additional subtopics to explore are green processes for syntheses, green engineering for process design, and the economics of sustainable polymers designed for circularity. Project deliverables are team project reports and project presentations to a panel of industry, chemistry, and engineering professionals. The impact of the entrepreneurial mindset project is evaluated through a student survey at the end of the semester. It was found that the Innovative Solution project was excellent in promoting student curiosity, creativity, critical and systems thinking, and teamwork. The results of this study suggest that incorporating EML positively impacted students' professional skill development, their ability to understand and appreciate the socio-technical context of chemistry and engineering, and the cultivation of an entrepreneurial mindset to discover, evaluate, and exploit opportunities.
Keywords: curriculum, entrepreneurial mindset learning, green chemistry and engineering, systems thinking
Procedia PDF Downloads 4599 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering
Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher
Abstract:
Land-cover and land-use change information is important because of its practical uses in various applications, including deforestation and damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detecting change in optical remote sensing images is not a trivial task, due to many factors including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of pixels on their neighborhoods. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology is proposed that uses fuzzy c-means clustering to tackle the problem of vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5, and 7×7 windows. The between-image and within-image spatial dependence of pixels on their neighborhoods is quantified using the Pearson product-moment correlation and Moran's I statistic, respectively. The proposed technique consists of two phases. First, the between-image and within-image spatial correlation is calculated to utilize the information that pixels at different locations may not be independent. Second, the fuzzy c-means technique is used to produce two clusters from the input features, not only accounting for the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels.
To show the effectiveness of the proposed technique, experiments were conducted on multispectral, bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study, covering a long forest fire that continued from July until October 2009. The early and late forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth's surface) and is compared with the existing K-means clustering technique. Experimental results showed that the proposed technique performs better than the existing technique. The proposed technique is easily extendable to optical hyperspectral images and is suitable for many practical applications.
Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing
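The two-phase idea can be sketched on synthetic data: compute the Pearson correlation between the two acquisitions, then run fuzzy c-means on a per-pixel difference feature and threshold the memberships. This is a minimal illustration only; the window-based input patterns and the Moran's I statistic of the actual method are omitted, and the images here are simulated rather than Landsat data:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means for 1-D features; returns memberships and centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)  # fuzzy cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return u, centers

# Synthetic bi-temporal images: a bright "burned" patch appears at time t2.
rng = np.random.default_rng(1)
t1 = rng.normal(0.3, 0.02, (40, 40))
t2 = t1.copy()
t2[10:25, 10:25] += 0.4                        # 15x15 changed region

# Phase 1: between-image dependence via Pearson correlation.
r = np.corrcoef(t1.ravel(), t2.ravel())[0, 1]

# Phase 2: soft clustering of the absolute difference into change/no-change.
diff = np.abs(t2 - t1).ravel()
u, centers = fuzzy_c_means(diff)
changed = u[:, np.argmax(centers)] > 0.5       # defuzzify at membership 0.5
print(f"Pearson r = {r:.3f}, changed pixels = {changed.sum()}")
```

The soft memberships, rather than hard K-means labels, are what let the method model the overlapping transition between the changed and unchanged clusters.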
Procedia PDF Downloads 169598 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures
Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny
Abstract:
Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil, and charcoal; the energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of raw biomass materials can be reduced by torrefaction, a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. The goal of the pretreatment, from a chemical point of view, is the removal of water and the acidic groups of hemicelluloses, or of the whole hemicellulose fraction, with minor degradation of the cellulose and lignin in the biomass. Thus, the stability of the biomass against biodegradation increases and its energy density rises, while the volume of the raw material decreases, reducing the expenses of transportation and storage. Biooil is the major product of pyrolysis and an important by-product of the torrefaction of biomass. The composition of biooil depends mostly on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody sample (black locust) and two herbaceous samples (rape straw and wheat straw).
The biooil contains C5 and C6 anhydrosugar molecules as well as aromatic compounds originating from hemicellulose, cellulose, and lignin, respectively. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction differs between wood and herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C, and lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of the decomposition products as well as the thermal stability of the samples were measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of the woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of the lignin products in the biooil and the applied temperatures.
Keywords: pyrolysis, torrefaction, biooil, lignin
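A PCA of the kind used here can be sketched as follows. The peak-area matrix is entirely hypothetical (the abstract does not report the Py-GC/MS data), with rows for woody vs. herbaceous biooil samples and columns for three illustrative lignin monomer products:

```python
import numpy as np

# Hypothetical relative peak areas: columns might stand for, e.g.,
# guaiacol, syringol, and 4-vinylphenol; rows are samples.
rng = np.random.default_rng(0)
wood = rng.normal([5.0, 4.0, 0.5], 0.3, (5, 3))   # guaiacyl/syringyl-rich
straw = rng.normal([3.0, 1.0, 4.0], 0.3, (5, 3))  # p-hydroxyphenyl-rich
X = np.vstack([wood, straw])

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                 # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)    # variance ratio per component
print("explained variance ratios:", np.round(explained, 3))
# With data structured like this, the first PC separates the woody
# from the herbaceous samples.
```

In the actual study, samples pyrolyzed at different temperatures would occupy the rows, so the score plot reveals how the lignin product composition shifts with the applied temperature.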
Procedia PDF Downloads 329597 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created from the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE) used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid a bias problem.
The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used to predict subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich.; Los Humeros, Pue.; and Cerro Prieto, B.C.) as well as in a blind geothermal system (Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
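The core calibration step (fitting a small feed-forward ANN with Levenberg-Marquardt and scoring it by RMSE) can be sketched on simulated data. This is not the authors' algorithm: the training data, the two log-ratio inputs, the single hidden layer of four neurons, and the use of SciPy's general-purpose LM solver are all illustrative assumptions, and the architecture search over ~2,080 configurations is omitted:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Simulated training set: two log gas ratios (e.g. ln(CO2/H2), ln(H2S/H2))
# versus bottomhole temperature, with a mild nonlinearity and noise.
X = rng.uniform(-2, 2, (200, 2))
T = 250 + 40 * X[:, 0] - 25 * X[:, 1] + 10 * np.tanh(X[:, 0] * X[:, 1])
T += rng.normal(0, 2, 200)  # measurement noise

H = 4  # hidden neurons (assumed, not from the paper)

def unpack(p):
    W1 = p[: 2 * H].reshape(H, 2)   # input -> hidden weights
    b1 = p[2 * H : 3 * H]           # hidden biases
    w2 = p[3 * H : 4 * H]           # hidden -> output weights
    b2 = p[4 * H]                   # output bias
    return W1, b1, w2, b2

def predict(p, X):
    W1, b1, w2, b2 = unpack(p)
    return np.tanh(X @ W1.T + b1) @ w2 + b2

def residuals(p):
    return predict(p, X) - T

p0 = rng.normal(0, 0.5, 4 * H + 1)
fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
rmse = np.sqrt(np.mean(residuals(fit.x) ** 2))
print(f"training RMSE = {rmse:.2f} °C")
```

In the study's workflow, the RMSE of each candidate architecture on an external (held-out) database is what ranks the geothermometers and guards against bias.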
Procedia PDF Downloads 352