Search results for: John Peter V. Dacanay
165 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of creating smart value chains for the purpose of improving productivity, handling the increasing time and cost pressure, and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology, and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors of people, technology, and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, ‘target definition’, describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. 
The second step, ‘analysis of the value chain’, verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
164 Prophylactic Effect of Dietary Garlic (Allium sativum) Inclusion in Feed of Commercial Broilers with Coccidiosis Raised at the Experimental Animal Unit of the Department of Veterinary Medicine, University of Ibadan, Oyo State, Nigeria
Authors: Ogunlesi Olufunso, John Ogunsola, Omolade Oladele, Benjamin Emikpe
Abstract:
Context: Coccidiosis is a parasitic disease that affects poultry production, leading to economic losses. Garlic is known for its medicinal properties and has been used as a natural remedy for various diseases. This study investigates the prophylactic effect of garlic inclusion in the feed of commercial broilers with coccidiosis. Research Aim: The aim of this study is to determine the possible effect of garlic meal inclusion in poultry feed on the body weight gain of commercial broilers and to investigate its therapeutic effect on broilers with coccidiosis. Methodology: A case-control study was conducted for eight weeks with one hundred Arbor Acres commercial broilers separated into five groups from day-old; 6,000 Eimeria oocysts were orally inoculated into each broiler in the different groups. Feed intake, body weight gain, feed conversion ratio, oocyst shedding rate, histopathology, and erythrocyte indices were assessed. Findings: The inclusion of garlic meal in the broilers' diet resulted in an improved feed conversion ratio, decreased oocyst counts, reduced diarrhoeic fecal spots, decreased susceptibility to coccidial infection, and increased packed cell volume (PCV). Theoretical Importance: This study contributes to the understanding of the prophylactic effect of garlic supplementation, including its antiparasitic properties, on commercial broilers with coccidiosis. It highlights the potential use of non-conventional feed additives or ayurvedic herbs and spices in the treatment of poultry diseases. Data Collection and Analysis Procedures: The study collected data on feed intake, body weight gain, oocyst shedding rate, histopathological observations, and erythrocyte indices. Data were analyzed using analysis of variance and Duncan's multiple range test. Questions Addressed: The study addressed the possible effect of garlic meal inclusion in poultry feed on the body weight gain of broilers and its therapeutic effect on broilers with coccidiosis. 
Conclusion: The study concludes that garlic inclusion in the feed of broilers has a prophylactic effect, including antiparasitic properties, resulting in an improved feed conversion ratio, reduced oocyst counts, and increased PCV.
Keywords: broilers, Eimeria spp., garlic, Ibadan
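The analysis procedure above relies on a one-way analysis of variance followed by Duncan's multiple range test. As a minimal illustration of the first step (our own sketch, not the authors' code, with invented weight-gain values for demonstration only), the F statistic can be computed directly with NumPy:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_obs = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_obs.mean()
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical weight-gain values for two treatment groups:
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

A large F relative to the critical value of the F distribution indicates that group means differ; a post-hoc procedure such as Duncan's multiple range test would then locate which pairs of groups differ.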
Procedia PDF Downloads 88
163 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions
Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams
Abstract:
The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information of small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. Once these algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. 
This research also challenges the idea of the role of algorithmic design associated with efficiency or fitness while embracing the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first, synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.
Keywords: architecture, central pavilions, classicism, machine learning
Procedia PDF Downloads 140
162 Blended Cloud Based Learning Approach in Information Technology Skills Training and Paperless Assessment: Case Study of University of Cape Coast
Authors: David Ofosu-Hamilton, John K. E. Edumadze
Abstract:
Universities have come to recognize the role Information and Communication Technology (ICT) skills play in the daily activities of tertiary students. The ability to use ICT – essentially, computers and their diverse applications – is an important resource that influences an individual’s economic and social participation and human capital development. Our society now increasingly relies on the Internet and the Cloud as means to communicate and disseminate information. The educated individual should, therefore, be able to use ICT to create and share knowledge that will improve society. It is, therefore, important that universities require incoming students to demonstrate a level of computer proficiency or be trained to do so at minimal cost by deploying advanced educational technologies. The training and standardized assessment of all incoming first-year students of the University of Cape Coast in Information Technology Skills (ITS) have become a necessity, as students more often than not overestimate their digital skills, and digital ignorance is costly to any economy. The one-semester course is targeted at fresh students and aimed at enhancing their productivity and software skills. In this respect, emphasis is placed on skills that will enable students to be proficient in using Microsoft Office and Google Apps for Education for their academic work and future professional work while using emerging digital multimedia technologies in a safe, ethical, responsible, and legal manner. The course is delivered in blended mode – online and self-paced (student-centered) – using Alison’s free cloud-based tutorial (Moodle) of Microsoft Office videos. Online support is provided via discussion forums on the University’s Moodle platform, tutor-directed and assisted at the ICT Centre and Google E-learning laboratory. 
All students are required to register for the ITS course during either the first or second semester of the first year and must participate in and complete it within a semester. Assessment focuses on an Alison online assessment on Microsoft Office, an Alison online assessment on ALISON ABC IT, peer assessment of an e-portfolio created using Google Apps/Office 365, and an end-of-semester online assessment at the ICT Centre whenever the student is ready in the course of the semester. This paper, therefore, focuses on the digital culture approach of hybrid teaching, learning, and paperless examinations and its possible adoption by other courses or programs at the University of Cape Coast.
Keywords: assessment, blended, cloud, paperless
Procedia PDF Downloads 248
161 Impact of Emotional Intelligence and Cognitive Intelligence on Radio Presenter's Performance in All India Radio, Kolkata, India
Authors: Soumya Dutta
Abstract:
This research paper investigates the impact of emotional intelligence and cognitive intelligence on radio presenters’ performance at All India Radio, Kolkata (India’s public service broadcaster). The ancient concept of productivity is the ratio of what is produced to what is required to produce it, but the father of modern management, Peter F. Drucker (1909-2005), defined the productivity of knowledge work and knowledge workers in a new form. On the other hand, the concept of Emotional Intelligence (EI) originated back in the 1920s, when Thorndike (1920) first divided intelligence into three dimensions, i.e., abstract intelligence, mechanical intelligence, and social intelligence. The contribution of Salovey and Mayer (1990) is substantive, as they proposed a model for emotional intelligence by defining EI as part of social intelligence, which measures the ability of an individual to regulate his/her own and others’ emotions and feelings. Cognitive intelligence illustrates the specialization of general intelligence in the domain of cognition in ways that reflect experience and learning about cognitive processes such as memory. The outcomes of past research on emotional intelligence show that emotional intelligence has a positive effect on social-mental factors of human resources; positive effects on leaders and followers in terms of performance, results, work, and satisfaction; and a positive and significant relationship with teachers’ job performance. In this paper, we develop a conceptual framework based on the theories of emotional intelligence proposed by Salovey and Mayer (1989-1990) and the compensatory model of emotional intelligence, cognitive intelligence, and job performance proposed by Stéphane Côté and Christopher T. H. Miners (2006). 
To investigate the impact of emotional intelligence and cognitive intelligence on radio presenters’ performance, the sample consisted of 59 radio presenters (considering gender, academic qualification, instructional mood, age group, etc.) from the All India Radio, Kolkata station. Questionnaires were prepared based on cognitive variables (henceforth called C-based and represented by C1, C2, ..., C5) as well as emotional intelligence variables (henceforth called E-based and represented by E1, E2, ..., E20). These were sent to the 59 respondents (presenters) for their responses. Performance scores were collected from the report of the program executive of All India Radio, Kolkata. A linear regression was carried out using all the E-based and C-based variables as predictor variables. The possible problem of autocorrelation was tested with the Durbin-Watson (DW) statistic; values of this statistic, almost within the range of 1.80-2.20, indicate the absence of any significant autocorrelation problem. The possible problem of multicollinearity was tested with the Variance Inflation Factor (VIF); values of this statistic, around 2, indicate the absence of any significant multicollinearity problem. It is inferred that the performance scores can be statistically regressed linearly on the E-based and C-based scores, which can explain 74.50% of the variations in performance.
Keywords: cognitive intelligence, emotional intelligence, performance, productivity
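The diagnostics mentioned above (Durbin-Watson for autocorrelation, VIF for multicollinearity) can be sketched from first principles. The following is an illustrative reimplementation on synthetic data, not the study's actual analysis; all variable names and values are hypothetical stand-ins.

```python
import numpy as np

def ols_residuals(X, y):
    # ordinary least squares with an intercept, via lstsq
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta

def durbin_watson(resid):
    # ~2 means no first-order autocorrelation; 0 and 4 are the extremes
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def vif(X, j):
    # regress predictor j on the others; VIF = 1 / (1 - R^2) = SS_tot / SS_res
    resid = ols_residuals(np.delete(X, j, axis=1), X[:, j])
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return ss_tot / ss_res

# Synthetic stand-ins for the E-based and C-based predictor scores:
rng = np.random.default_rng(0)
X = rng.normal(size=(59, 4))            # 59 respondents, 4 predictors
y = X @ np.array([1.0, 2.0, 0.5, -1.0]) + rng.normal(scale=0.5, size=59)
resid = ols_residuals(X, y)
dw = durbin_watson(resid)               # expect a value near 2
vifs = [vif(X, j) for j in range(X.shape[1])]  # expect values near 1
```

In practice a library such as statsmodels provides these diagnostics directly; the point of the sketch is only to show what the reported 1.80-2.20 DW range and VIF values around 2 are measuring.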
Procedia PDF Downloads 163
160 Virtue Ethics as a Corrective to Mismanagement of Resources in Nigeria’s Economy: Akwa Ibom State Experience
Authors: Veronica Onyemauwa
Abstract:
This research work examines the socio-ethical issues embedded in resource management and wealth creation in Nigeria, using Akwa Ibom State as a case study. The work is poised to proffer answers to the problematic questions raised: why is the wealth of Akwa Ibom State not prudently managed, and wastages curbed, in order to cater for the satisfaction of the indigent citizens, as Jesus Christ did in the feeding of five thousand people (John 6:12)? Could ethical and responsible resource management not solve the paradox of poverty-stricken people of Akwa Ibom in a rich economy? What ought to be done to better the lot of Akwa Ibomites? The research adopts a phenomenological and sociological research methodology with primary and secondary sources of information to explore the socio-ethical issues embedded in resource management and wealth creation in Akwa Ibom State. Findings revealed that reckless exploitation and mismanagement of the rich natural and human resources of Akwa Ibom State have spelt doom for the economic progress and survival of Akwa Ibomites in particular and Nigerians in general. Hence, hunger and poverty remain adversaries to the majority of the people. Again, the culture of diversion of funds and squandermania institutionalized within the confines of the Akwa Ibom State government deters investment in economic enterprises, job creation, and wealth creation that would have yielded economic dividends for Akwa Ibomites. These and many other unwholesome practices are responsible for the present deplorable condition of Akwa Ibom State in particular and Nigerian society in general. As a way out of this economic quagmire, it is imperative that every unwholesome practice within the State be tackled more proactively and innovatively in the interest of the masses through responsible resource management and wealth creation. 
It is believed that effective leadership, a statesman with vision and commitment, would transform the abundant resources to achieve meaningful development, create wealth, and reduce poverty. Ethical leadership is required in all tiers of government and public organizations to transform resources into more wealth. Thus, this paper advocates an ethics of virtue: a paradigm shift from an exploitative leadership style to a productive leadership style; a change from atomistic human relations to cooperative human relations; a change from subsistence to abundance in order to maximize the available resources in the State. To do otherwise is unethical and lacks moral justification.
Keywords: corrective, mismanagement, resources, virtue ethics
Procedia PDF Downloads 113
159 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions
Authors: Peter Carter
Abstract:
Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not indicate future commitment, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included. In particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane. In this respect, methane emissions and sources are covered in more detail. 
Indicators used in assessing safe planetary boundaries are also included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are relevant to climate change policy. In particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, which has as its objective the “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”
Keywords: climate change, climate change indicators, climate change trends, climate system change interactions
Procedia PDF Downloads 105
158 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis during the lifecycle of a system from a rough concept in the pre-project phase until end-of-life. This paper’s goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, e.g., a control unit for a pump motor, but it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts into inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected; it is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components are the content of the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell by shell is added with its respective parts until the border of the complete system (external border) is reached. 
Last, two external shells are added to complete the system view: the environment shell and the use case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear, graphical description and overview of a system, its main parts, and its environment, while the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships, and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
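The shell build-up described above is essentially a breadth-first layering of a component graph around the SUC. The following sketch is our own illustration of that idea, not tooling from the paper; the pump-motor component names are hypothetical, echoing the paper's control-unit example.

```python
from collections import deque

def shell_layers(io_graph, suc):
    """Group components into VISMA-style shells: shell 0 is the SUC,
    shell 1 its direct input/output partners, and so on outward.
    io_graph maps each component to the components it exchanges
    input/output with (treated as undirected here)."""
    depth = {suc: 0}
    queue = deque([suc])
    while queue:
        node = queue.popleft()
        for neighbour in io_graph.get(node, ()):
            if neighbour not in depth:
                depth[neighbour] = depth[node] + 1
                queue.append(neighbour)
    shells = {}
    for node, d in depth.items():
        shells.setdefault(d, set()).add(node)
    return shells

# Hypothetical pump-motor example: the control unit is the SUC.
io_graph = {
    "control unit": ["speed sensor", "motor driver"],
    "speed sensor": ["control unit", "pump"],
    "motor driver": ["control unit", "motor"],
    "motor": ["motor driver", "pump"],
    "pump": ["speed sensor", "motor"],
}
shells = shell_layers(io_graph, "control unit")
```

The second VISMA step (pruning superfluous components from the outside in) would then walk the shells in decreasing depth, which this layered representation makes straightforward.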
Procedia PDF Downloads 225
157 Studying the Beginnings of Strategic Behavior
Authors: Taher Abofol, Yaakov Kareev, Judith Avrahami, Peter M. Todd
Abstract:
Are children sensitive to their relative strength in competitions against others? Performance on tasks that require cooperation or coordination (e.g. the Ultimatum Game) indicates that early precursors of adult-like notions of fairness and reciprocity, as well as altruistic behavior, are evident at an early age. However, not much is known regarding developmental changes in interactive decision-making, especially in competitive interactions. Thus, it is important to study the developmental aspects of strategic behavior in these situations. The present research focused on cognitive-developmental changes in a competitive interaction. Specifically, it aimed at revealing how children engage in strategic interactions that involve the allocation of limited resources over a number of fields of competition, by manipulating relative strength. Relative strength refers to situations in which player strength changes midway through the game: the stronger player becomes the weaker one, while the weaker player becomes the stronger one. An experiment was conducted to find out if the behavior of children of different age groups differs in the following three aspects: 1. Perception of relative strength. 2. Ability to learn while gaining experience. 3. Ability to adapt to change in relative strength. The task was composed of a resource allocation game. After the players allocated their resources (privately and simultaneously), a competition field was randomly chosen for each player. The player who allocated more resources to the field chosen was declared the winner of that round. The resources available to the two competitors were unequal (or equal, for control). The theoretical solution for this game is that the weaker player should give up on a certain number of fields, depending on the stronger opponent’s relative strength, in order to be able to compete with the opponent on equal footing in the remaining fields. 
Participants were of three age groups: first-graders (N = 36, mean age = 6), fourth-graders (N = 36, mean age = 10), and eleventh-graders (N = 72, mean age = 16). The games took place between players of the same age and lasted for 16 rounds. There were two experimental conditions – a control condition, in which players were of equal strength, and an experimental condition, in which players differed in strength. In the experimental condition, players' strength was changed midway through the session. Results indicated that players in all age groups were sensitive to their relative strength and played in line with the theoretical solution: the weaker players gave up on more fields than the stronger ones. This understanding, as well as the consequent difference in allocation between weak and strong players, was more pronounced among older participants. Experience led to only minimal behavioral change. Finally, the children from the two older groups, particularly the eleventh-graders, adapted quickly to the midway switch in relative strength. In contrast, the first-graders hardly changed their behavior with the change in their relative strength, indicating a limited ability to adapt. These findings highlight young children's ability to consider their relative strength in strategic interactions and its boundaries.
Keywords: children, competition, decision making, developmental changes, strategic behavior
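Under one simplified reading of the theoretical solution described above (our own stylized model, not the authors' analysis), the weaker player abandons fields until their per-field allocation exceeds the stronger player's, assuming both players spread their resources evenly and the contested field is drawn uniformly at random. All budgets and field counts below are illustrative.

```python
def weak_win_rate(weak_budget, strong_budget, n_fields, abandoned):
    """Fraction of rounds the weaker player wins if they spread their
    budget evenly over (n_fields - abandoned) fields while the stronger
    player covers all fields evenly, and the contested field is drawn
    uniformly at random."""
    contested = n_fields - abandoned
    if contested <= 0:
        return 0.0
    weak_per_field = weak_budget / contested
    strong_per_field = strong_budget / n_fields
    # the weak player wins only on fields they actually out-allocate
    return contested / n_fields if weak_per_field > strong_per_field else 0.0

def best_abandonment(weak_budget, strong_budget, n_fields):
    """Smallest number of abandoned fields maximizing the win rate."""
    rates = [weak_win_rate(weak_budget, strong_budget, n_fields, a)
             for a in range(n_fields)]
    return max(range(n_fields), key=lambda a: rates[a])
```

With a budget of 60 against 100 over 10 fields, this model says the weaker player should concede five fields, matching the qualitative prediction that weaker players give up some fields in order to compete on equal footing in the rest.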
Procedia PDF Downloads 312
156 Re-interpreting Ruskin with Respect to the Wall
Authors: Anjali Sadanand, R. V. Nagarajan
Abstract:
Architecture morphs with advances in technology, and the roof, wall, and floor, as basic elements of a building, follow by redefining themselves over time. Their contribution is bound by time and held by design principles that deal with function, sturdiness, and beauty. Architecture engages with people to give joy through its form, material, design structure, and spatial qualities. This paper attempts to re-interpret John Ruskin’s “Seven Lamps of Architecture” in the context of the architecture of the modern and present periods. The paper focuses on the “wall” as an element of study in this context. Four of Ruskin’s seven lamps will be discussed, namely beauty, truth, life, and memory, through examples of architecture ranging from modernism to the contemporary architecture of today. The study will focus on the relevance of Ruskin’s principles to the “wall” in particular, in buildings of different materials and over a range of typologies from all parts of the world. Two examples will be analyzed for each lamp. It will be shown that in each case there is relevance to the significance of Ruskin’s lamps in modern and contemporary architecture. Nature, to which Ruskin alludes for his lamp of “beauty”, is found in the different expressions of interpretation used by Corbusier in his Villa Stein facade, based on proportion found in nature, and in the direct expression of Toyo Ito in his translation of an understanding of the structure of trees into his facade design of the showroom for a Japanese bag boutique. “Truth” is shown in Mies van der Rohe’s Crown Hall building, with its clarity of material and structure, and Studio Mumbai’s Palmyra House, which celebrates the use of natural materials and local craftsmanship. “Life” is reviewed with a sustainable house in Kerala by Ashrams Ravi and Alvar Aalto’s summer house, which illustrate walls as repositories of intellectual thought and craft. 
“Memory” is discussed with Charles Correa’s Jawahar Kala Kendra and Venturi’s Vanna Venturi House, and discloses facades as text in the context of their materiality and iconography. Beauty is reviewed in the Villa Stein and Toyo Ito’s branded retail building in Tokyo. The paper thus concludes that Ruskin’s lamps can be interpreted in today’s context and add richness and meaning to the understanding of architecture.
Keywords: beauty, design, facade, modernism
Procedia PDF Downloads 118
155 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience
Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi
Abstract:
Understanding the complexity in the distribution of hydrocarbons in a simple structure with flow baffles and connectivity issues is critical in targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay and gaining an improved understanding of the selected reservoirs to increase the company’s reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, use of results from quantitative interpretation, and proper understanding of production data have been used in recognizing flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed in such a way as to capture heterogeneities and the various compartments in the field, to aid the proper simulation of fluid flow in the field for future production prediction, proper history matching, and the design of well trajectories to adequately target undeveloped oil in the field. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties being dealt with. The total original oil in-place volume for the four reservoirs studied is 157 MMstb. 
The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecast on the four reservoirs gave an undeveloped reserve of about 3.82 MMstb from two identified oil restoration activities. These activities include side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results are confirmatory and satisfying.
Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit
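The volumetric figures quoted above imply a simple recovery-factor calculation. A minimal sketch, using only the abstract's numbers and the standard produced-to-in-place ratio (the ratio itself is not stated by the authors):

```python
# Worked arithmetic for the reservoir figures quoted in the abstract.
ooip_mmstb = 157.0        # original oil in place, four reservoirs
cum_oil_mmstb = 67.64     # cumulative oil produced to date
undeveloped_mmstb = 3.82  # reserves from the two restoration activities

# Recovery factor to date: produced volume as a fraction of in-place volume.
recovery_factor = cum_oil_mmstb / ooip_mmstb

# Incremental recovery if the undeveloped reserves are fully produced.
incremental = undeveloped_mmstb / ooip_mmstb

print(f"recovery to date: {recovery_factor:.1%}")           # 43.1%
print(f"incremental from restorations: {incremental:.2%}")  # 2.43%
```

This kind of back-of-the-envelope check is only a screening number; the abstract's dynamic simulation is what actually supports the 3.82 MMstb figure.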
154 Learning in Multicultural Workspaces: A Case of Aged Care
Authors: Robert John Godby
Abstract:
To be responsive now and in the future, workplaces must address the demands of multicultural teams as they become more common elements of the global labor force. This is especially the case for aged care due to the aging population, industry growth and migrant recruitment. This research identifies influences on and improvements for learning in these environments. Its unique contribution is to illuminate how culturally diverse workplaces can work and learn together more effectively. A mixed-methods approach was used to gather data about this topic in two phases. Firstly, the research methods included a survey of 102 aged care workers around Australia from two multi-site aged care organisations. The questionnaire elicited both quantitative and qualitative data about worker characteristics and perspectives on working and learning in aged care. Secondly, a case study of one aged care worksite was formulated drawing on worksite information and interviews with workers. A review of the literature suggests that learning in multicultural work environments is influenced by three main factors: 1) the individual workers themselves, 2) their interaction with each other and 3) the environment in which they work. There are various accounts of these three factors, how they are manifested and how they lead to a change in workers’ disposition, knowledge, or expertise when confronted with new circumstances. The study has found that a key individual factor influencing learning is cultural background: workers’ unique views of the world were shown to affect their approach to both their work and co-working. Interactional factors suggest that the high requirement for collaboration in aged care positively supports learning in this context; however, it can be hindered by cultural bias and spoken accent. The study also found that environmental factors, such as disruptions caused by the pandemic, were another key influence.
For example, the need to wear face masks hindered the communication needed for workplace learning. This was especially challenging due to the diverse language backgrounds and abilities within the teams. Potential improvements for learning in multicultural aged care work environments were identified. These include more frequent and structured inter-peer learning (e.g. buddying), communication training (e.g. English language usage for both native and non-native speaking workers) and support for cross-cultural habitude (e.g. recognizing and adapting to cultural differences). Workplace learning in cross-cultural aged care environments is an area that is not extensively dealt with in the literature. This study addresses this gap and holds the potential to contribute practical insights to aged care and other diverse industries.
Keywords: cross-cultural learning, learning in aged care, migrant learning, workplace learning
153 Delay in Induction of Labour at Two Hospitals in Southeast Scotland: Outcomes
Authors: Bernard Ewuoso
Abstract:
Introduction: Induction of labor (IOL) usually involves the patient moving between antenatal, labor, and postnatal wards. Delay in IOL has been defined as the time a woman waits for induction after her cervix is assessed to be favorable. Opinions vary on the acceptable time the patient is allowed to wait once the cervix is adjudged ripe for induction. What has been considered a benchmark is a delay of up to 12 hours. There is evidence that delay in IOL is associated with adverse outcomes. Aim: To determine the number of women experiencing delay in induction of labor and their outcomes. Method: This audit was retrospective and observational. It included women who had induction of labor in the month of October 2023 in two hospitals. Clinical data was collected from electronic medical records into an Excel sheet for analysis. Women had cervical ripening as inpatients or outpatients. The primary objective was to determine the number of women experiencing delay in induction of labor, while the secondary objective was to determine the outcomes for these women. Result: 136 women had IOL. Data retrieval was at least 80% for every parameter. The mean gestational age at IOL was 278.26 days. The mean waiting time was 905.34 minutes. Seventy-five women had their IOL at the Royal Infirmary of Edinburgh (RIE), fifty-seven at St. John’s Hospital (SJH), and three women were transferred from RIE to SJH. The preferred method of cervical ripening was balloon, closely followed by prostaglandin. Twenty-seven women did not require cervical ripening and had their process started with amniotomy. Prostaglandin was the method of choice for cervical ripening at RIE, while balloon was preferred at SJH. Of the thirty-five women found to be suitable for outpatient cervical ripening, thirteen had outpatient ripening. There was a significant increase in the proportion of women undergoing outpatient cervical ripening at RIE, from 10.5% in April 2022 to 42.9%.
The preferred method for outpatient cervical ripening at the RIE was balloon, while it was prostaglandin for SJH. These preferences were the reverse of those for inpatient cervical ripening at both centers. The average waiting time for IOL at RIE, 1166.92 minutes, is more than double that of SJH, 442.93 minutes, and far exceeds 12 hours, the proposed benchmark. The waiting time tends to be shorter with prostaglandin. Of the women who had outpatient cervical ripening, 63.6% waited more than 12 hours before being induced, compared with 36.1% of women who had inpatient cervical ripening. Overall, 38.5% of women waited for more than 12 hours before having their induction. A lesser proportion of the women who waited for more than 12 hours had caesarean section, assisted vaginal delivery, and postpartum hemorrhage, whereas a greater proportion had spontaneous vaginal delivery and intrapartum or postpartum infection. Conclusion: A significant number of the women included in the study experienced delay in their induction process, and this was associated with an increased occurrence of intrapartum or postpartum infection. Outpatient cervical ripening contributed to delay.
Keywords: delay in induction of labor, inpatient, outpatient, intrapartum, postpartum, infection
152 Development of a Bi-National Thyroid Cancer Clinical Quality Registry
Authors: Liane J. Ioannou, Jonathan Serpell, Joanne Dean, Cino Bendinelli, Jenny Gough, Dean Lisewski, Julie Miller, Win Meyer-Rochow, Stan Sidhu, Duncan Topliss, David Walters, John Zalcberg, Susannah Ahern
Abstract:
Background: The occurrence of thyroid cancer is increasing throughout the developed world, including Australia and New Zealand, and since the 1990s has become the fastest increasing malignancy. Following the success of a number of institutional databases that monitor outcomes after thyroid surgery, the Australian and New Zealand Endocrine Surgeons (ANZES) agreed to auspice the development of a bi-national thyroid cancer registry. Objectives: To establish a bi-national population-based clinical quality registry with the aim of monitoring and improving the quality of care provided to patients diagnosed with thyroid cancer in Australia and New Zealand. Patients and Methods: The Australian and New Zealand Thyroid Cancer Registry (ANZTCR) captures clinical data for all patients, over the age of 18 years, diagnosed with thyroid cancer, confirmed by histopathology report, that have been diagnosed, assessed or treated at a contributing hospital. Data is collected by endocrine surgeons using a web-based interface, REDCap, primarily via direct data entry. Results: A multi-disciplinary Steering Committee was formed, and with operational support from Monash University the ANZTCR was established in early 2017. The pilot phase of the registry is currently operating in Victoria, New South Wales, Queensland, Western Australia and South Australia, with over 30 sites expected to come on board across Australia and New Zealand in 2018. A modified-Delphi process was undertaken to determine the key quality indicators to be reported by the registry, and a minimum dataset was developed comprising information regarding thyroid cancer diagnosis, pathology, surgery, and 30-day follow up. Conclusion: There are very few established thyroid cancer registries internationally, yet clinical quality registries have shown valuable outcomes and patient benefits in other cancers. 
The establishment of the ANZTCR provides the opportunity for Australia and New Zealand to further understand the current practice in the treatment of thyroid cancer and reasons for variation in outcomes. The engagement of endocrine surgeons in supporting this initiative is crucial. While the pilot registry has a focus on early clinical outcomes, it is anticipated that future collection of longer-term outcome data, particularly for patients with poor-prognosis disease, will add significant further value to the registry.
Keywords: thyroid cancer, clinical registry, population health, quality improvement
151 Early Childhood Education and Learning Outcomes in Lower Primary Schools, Uganda
Authors: John Acire, Wilfred Lajul, Ogwang Tom
Abstract:
Using a qualitative research technique, this study investigates the influence of Early Childhood Education (ECE) on learning outcomes in lower primary schools in Gulu City, Uganda. The study, which is based on Vygotsky's sociocultural theory of human learning, fills gaps in the current literature on the influence of ECE on learning outcomes. The aims of the study include analyzing the state of learning outcomes, investigating ECE practices, and determining the influence of these practices on learning outcomes in lower primary schools. The findings highlight the critical significance of ECE in promoting children's overall development. Nursery education helps children improve their handwriting, reading abilities, and general cognitive development. Children who have received nursery education have improved their abilities to handle pencils, form letters, and engage in social interactions, highlighting the significance of fine motor skills and socializing. Despite the good elements, difficulties in implementing ECE practices were found, such as differences in teaching styles, financial limits, and potential weariness due to prolonged school hours. The study suggests focused interventions to improve the effectiveness of ECE practices, ensure their connection with educational goals and maximize their influence on children's development. The study's findings show that respondents agree on the importance of nursery education in supporting holistic development, socialization, language competency, and conceptual comprehension. Challenges in nursery education, such as differences in teaching techniques and insufficient resources, highlight the need for comprehensive measures to address these challenges. Furthermore, parental engagement in home learning activities was revealed as an important factor affecting early education outcomes. Children who were engaged at home performed better in lower primary, emphasizing the value of a supportive family environment. 
Finally, the report suggests measures to enhance parental participation, changes in teaching methods through retraining, and age-appropriate enrolment. Future studies might concentrate on the involvement of parents, ECE policy practice, and the influence of ECE teachers on lower primary school learning results. These ideas are intended to help create a more favorable learning environment by encouraging holistic development and preparing children for success in succeeding academic levels.
Keywords: early childhood education, learning outcomes in lower primary schools, early childhood education practices, how ECE practices influence learning outcomes in lower primary schools
150 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review
Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed
Abstract:
Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all supported laparoscopic use rather than its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes included estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The “meta” and “metafor” packages were used in R, version 4.0.2, to pool risk ratios (RR) or standardized mean differences (SMD) in addition to their 95% confidence intervals in the random effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² test, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 was from a Japanese center. Mean age between study cohorts ranged from 2.4 to 5.3 years old, with female patients comprising between 30.8% and 50% of the study populations.
No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective in the analysis for EBL, although it did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, results showed it was non-superior to OT for any explored perioperative outcome. Given the limited comparative data on the subject, it is evident that randomized trials are necessary to strengthen the conclusions reached.
Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy
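The pooling step in the methods can be illustrated with a minimal Mantel-Haenszel pooled risk ratio. This is a simplified fixed-effect sketch (the abstract's analysis used R's "meta"/"metafor" packages with a random-effects model), and the 2x2 counts below are invented for illustration only, not data from the included studies:

```python
# Minimal Mantel-Haenszel pooling of a risk ratio across studies.
def mh_pooled_rr(studies):
    """studies: list of (events_a, total_a, events_b, total_b) tuples,
    where arm a is the treatment and arm b the control."""
    num = 0.0
    den = 0.0
    for a, n_a, c, n_b in studies:
        n = n_a + n_b
        num += a * n_b / n  # Mantel-Haenszel weight, treatment arm
        den += c * n_a / n  # Mantel-Haenszel weight, control arm
    return num / den

# Hypothetical 2x2 data from three small studies.
data = [(3, 30, 6, 60), (2, 25, 5, 50), (1, 13, 4, 31)]
print(round(mh_pooled_rr(data), 3))  # 0.833
```

A pooled RR below 1 would favor the treatment arm; a full analysis would also compute the confidence interval and a between-study heterogeneity estimate, as the abstract describes.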
149 Impacts of Climate Change and Natural Gas Operations on the Hydrology of Northeastern BC, Canada: Quantifying the Water Budget for Coles Lake
Authors: Sina Abadzadesahraei, Stephen Déry, John Rex
Abstract:
Climate research has repeatedly identified strong associations between anthropogenic emissions of ‘greenhouse gases’ and observed increases of global mean surface air temperature over the past century. Studies have also demonstrated that the degree of warming varies regionally. Canada is not exempt from this situation, and evidence is mounting that climate change is beginning to cause diverse impacts in both environmental and socio-economic spheres of interest. For example, northeastern British Columbia (BC), whose climate is controlled by a combination of maritime, continental and arctic influences, is warming at a greater rate than the remainder of the province. There are indications that these changing conditions are already leading to shifting patterns in the region’s hydrological cycle, and thus its available water resources. Coincident with these changes, northeastern BC is undergoing rapid development for oil and gas extraction: this depends largely on subsurface hydraulic fracturing (‘fracking’), which uses enormous amounts of freshwater. While this industrial activity has made substantial contributions to regional and provincial economies, it is important to ensure that sufficient and sustainable water supplies are available for all those dependent on the resource, including ecological systems. This in turn demands a comprehensive understanding of how water in all its forms interacts with landscapes and the atmosphere, and of the potential impacts of changing climatic conditions on these processes. The aim of this study is therefore to characterize and quantify all components of the water budget in the small watershed of Coles Lake (141.8 km², 100 km north of Fort Nelson, BC), through a combination of field observations and numerical modelling.
Baseline information will aid the assessment of the sustainability of current and future plans for freshwater extraction by the oil and gas industry, and will help to maintain the precarious balance between economic and environmental well-being. This project is a prime example of interdisciplinary research, in that it not only examines the hydrology of the region but also investigates how natural gas operations and growth can affect water resources. Therefore, a fruitful collaboration between academia, government and industry has been established to fulfill the objectives of this research in a meaningful manner. This project aims to provide numerous benefits to BC communities. Further, the outcome and detailed information of this research can be a valuable asset to researchers examining the effect of climate change on water resources worldwide.
Keywords: northeastern British Columbia, water resources, climate change, oil and gas extraction
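The water-budget quantification described above rests on the standard basin balance, precipitation = evapotranspiration + runoff + storage change. A minimal sketch using the watershed area quoted in the abstract; the depth values are illustrative placeholders, not measured data from the study:

```python
# Simple annual water-budget closure for a watershed: P = ET + Q + dS,
# so the storage change is the residual dS = P - ET - Q.
AREA_KM2 = 141.8  # Coles Lake watershed area quoted in the abstract

def storage_change_mm(p_mm, et_mm, q_mm):
    """Residual storage change, in mm of water depth over the basin."""
    return p_mm - et_mm - q_mm

def depth_mm_to_volume_m3(depth_mm, area_km2=AREA_KM2):
    """Convert a basin-average water depth (mm) to a volume (m^3)."""
    return depth_mm / 1000.0 * area_km2 * 1_000_000.0

# Illustrative annual depths (mm): 450 precipitation, 300 ET, 120 runoff.
ds = storage_change_mm(p_mm=450.0, et_mm=300.0, q_mm=120.0)
print(ds, depth_mm_to_volume_m3(ds))  # 30.0 mm -> 4,254,000 m^3
```

In practice each term carries measurement uncertainty, so the residual absorbs all errors; that is precisely why the study combines field observations with numerical modelling.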
148 Analysis of Digital Transformation in Banking: The Hungarian Case
Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi
Abstract:
The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing significant expansion and development of financial technology firms. Utilizing emerging technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants are offering more streamlined financial solutions, promptly addressing client demands, and presenting a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. With the aid of digital technologies, businesses can now swiftly and effortlessly retrieve vast quantities of information, all the while accelerating the process of creating new and improved products and services. Big data analytics is generally recognized as a transformative force in business, considered the fourth paradigm of science, and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology that is shaping the future of the banking sector, offers numerous advantages to banks. It enables them to effectively track consumer behavior and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to promptly and efficiently identify fraud, as well as gain insights into client preferences, which can then be leveraged to create better-tailored products and services.
Moreover, the utilization of big data technology empowers banks to develop more intelligent and streamlined models for accurately recognizing and focusing on the suitable clientele with pertinent offers. There is a scarcity of research on big data analytics in the banking industry, with the majority of existing studies only examining the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses a gap in the existing literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment framework (TOE), and dynamic capability theory (DC). This study investigates the influence of Big Data Analytics (BDA) utilization on the performance of market and risk management. This is supported by a comparative examination of Hungarian mobile banking services.
Keywords: big data, digital transformation, dynamic capabilities, mobile banking
147 The Mitigation of Quercetin on Lead-Induced Neuroinflammation in a Rat Model: Changes in Neuroinflammatory Markers and Memory
Authors: Iliyasu Musa Omoyine, Musa Sunday Abraham, Oladele Sunday Blessing, Iliya Ibrahim Abdullahi, Ibegbu Augustine Oseloka, Nuhu Nana-Hawau, Animoku Abdulrazaq Amoto, Yusuf Abdullateef Onoruoiza, Sambo Sohnap James, Akpulu Steven Peter, Ajayi Abayomi
Abstract:
The neuroprotective role of inflammation from detrimental intrinsic and extrinsic factors has been reported. However, the overactivation of astrocytes and microglia due to lead toxicity produce excessive pro-inflammatory cytokines, mediating neurodegenerative diseases. The present study investigated the mitigatory effects of quercetin on neuroinflammation, correlating with memory function in lead-exposed rats. In this study, Wistar rats were administered orally with Quercetin (Q: 60 mg/kg) and Succimer as a standard drug (S: 10 mg/kg) for 21 days after lead exposure (Pb: 125 mg/kg) of 21 days or in combination with Pb, once daily for 42 days. Working and reference memory was assessed using an Eight-arm radial water maze (8-ARWM). The changes in brain lead level, the neuronal nitric oxide synthase (nNOS) activity, and the level of neuroinflammatory markers such as tumour necrosis factor-alpha (TNF-α) and Interleukin 1 Beta (IL-1β) were determined. Immunohistochemically, astrocyte expression was evaluated. The results showed that the brain level of lead was increased significantly in lead-exposed rats. The expression of astrocytes increased in the CA3 and CA1 regions of the hippocampus, and the levels of brain TNF-α and IL-1β increased in lead-exposed rats. Lead impaired reference and working memory by increasing reference memory errors and working memory incorrect errors in lead-exposed rats. However, quercetin treatment effectively improved memory and inhibited neuroinflammation by reducing astrocytes’ expression and the levels of TNF-α and IL-1β. The expression of astrocytes and the levels of TNF-α and IL-1β correlated with memory function. 
The possible explanation for quercetin’s anti-neuroinflammatory effect is that it modulates the activity of cellular proteins involved in the inflammatory response; inhibits the transcription factor nuclear factor-kappa B (NF-κB), which regulates the expression of proinflammatory molecules; inhibits kinases required for the synthesis of glial fibrillary acidic protein (GFAP) and modifies the phosphorylation of some proteins, which affects the structure and function of intermediate filament proteins; and, lastly, induces cyclic-AMP response element binding protein (CREB) activation and neurogenesis as a compensatory mechanism for memory deficits and neuronal cell death. In conclusion, the levels of neuroinflammatory markers negatively correlated with memory function. Thus, quercetin may be a promising therapy for neuroinflammation and memory dysfunction in populations prone to lead exposure.
Keywords: lead, quercetin, neuroinflammation, memory
146 Detection of Glyphosate Using Disposable Sensors for Fast, Inexpensive and Reliable Measurements by Electrochemical Technique
Authors: Jafar S. Noori, Jan Romano-deGea, Maria Dimaki, John Mortensen, Winnie E. Svendsen
Abstract:
Pesticides have been intensively used in agriculture to control weeds, insects, fungi, and pests. One of the most commonly used pesticides is glyphosate. Glyphosate attaches to soil colloids and is degraded by soil microorganisms. As glyphosate led to the appearance of resistant species, the pesticide was used more intensively. As a consequence of the heavy use of glyphosate, residues of this compound are increasingly observed in food and water. Recent studies reported a direct link between glyphosate and chronic effects such as teratogenic, tumorigenic and hepatorenal effects, although the exposure was below the lowest regulatory limit. Today, pesticides are detected in water by complicated and costly manual procedures conducted by highly skilled personnel. It can take up to several days to get an answer regarding the pesticide content in water. An alternative to this demanding procedure is offered by electrochemical measuring techniques. Electrochemistry is an emerging technology that has the potential to identify and quantify several compounds in a few minutes. It is currently not possible to detect glyphosate directly in water samples, and intensive research is underway to enable direct selective and quantitative detection of glyphosate in water. This study focuses on developing and modifying a sensor chip that has the ability to selectively measure glyphosate and minimize the signal interference from other compounds. The sensor is a silicon-based chip that is fabricated in a cleanroom facility with dimensions of 10×20 mm. The chip is comprised of a three-electrode configuration. The deposited electrodes consist of a 20 nm chromium layer and 200 nm of gold. The working electrode is 4 mm in diameter. The working electrode is modified by creating molecularly imprinted polymers (MIP) using an electrodeposition technique that allows the chip to selectively measure glyphosate at low concentrations.
The modification included using gold nanoparticles with a diameter of 10 nm functionalized with 4-aminothiophenol. This configuration allows the nanoparticles to bind to the working electrode surface and create the template for the glyphosate. The chip was modified using an electrodeposition technique. An initial potential for the identification of glyphosate was estimated to be around -0.2 V. The developed sensor was used on 6 different concentrations, and it was able to detect glyphosate down to 0.5 mgL⁻¹. This value is below the US regulatory limit of 0.7 mgL⁻¹. The current focus is to optimize the functionalization procedure in order to achieve glyphosate detection at the EU regulatory limit of 0.1 µgL⁻¹. To the best of our knowledge, this is the first attempt to modify miniaturized sensor electrodes with functionalized nanoparticles for glyphosate detection.
Keywords: pesticides, glyphosate, rapid, detection, modified, sensor
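The gap between the current detection limit and the two regulatory limits quoted above comes down to unit conversion. A minimal check using only the abstract's figures:

```python
# Compare the sensor's detection limit with the regulatory limits
# quoted in the abstract, with everything converted to ug/L.
UG_PER_MG = 1000.0

detection_limit_ug_l = 0.5 * UG_PER_MG  # 0.5 mg/L -> 500 ug/L
us_limit_ug_l = 0.7 * UG_PER_MG         # 0.7 mg/L -> 700 ug/L
eu_limit_ug_l = 0.1                     # 0.1 ug/L

print(detection_limit_ug_l < us_limit_ug_l)  # below the US limit
print(detection_limit_ug_l / eu_limit_ug_l)  # factor above the EU limit
```

The ratio shows why the functionalization work continues: the current limit of detection is below the US limit but still several thousand times higher than the EU limit of 0.1 µgL⁻¹.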
145 Magnetron Sputtered Thin-Film Catalysts with Low Noble Metal Content for Proton Exchange Membrane Water Electrolysis
Authors: Peter Kus, Anna Ostroverkh, Yurii Yakovlev, Yevheniia Lobko, Roman Fiala, Ivan Khalakhan, Vladimir Matolin
Abstract:
The hydrogen economy is a concept for a low-emission society that harvests most of its energy from renewable sources (e.g., wind and solar) and, in case of overproduction, electrochemically turns the excess amount into hydrogen, which serves as an energy carrier. Proton exchange membrane water electrolyzers (PEMWE) are the backbone of this concept. Through fast-response electricity-to-hydrogen conversion, PEMWEs will not only stabilize the electrical grid but also provide high-purity hydrogen for a variety of fuel cell powered devices, ranging from consumer electronics to vehicles. Wider commercialization of PEMWE technology is, however, hindered by the high prices of noble metals, which are necessary for catalyzing the redox reactions within the cell: namely, platinum for the hydrogen evolution reaction (HER) running on the cathode, and iridium for the oxygen evolution reaction (OER) on the anode. A possible way to lower the loading of Pt and Ir is to use conductive high-surface nanostructures as catalyst supports in conjunction with thin-film catalyst deposition. The presented study discusses an unconventional technique of membrane electrode assembly (MEA) preparation. Noble metal catalysts (Pt and Ir) were magnetron sputtered in very low loadings onto the surface of porous sublayers (located on the gas diffusion layer or directly on the membrane), forming a localized three-phase boundary. An ultrasonically sprayed corrosion-resistant TiC-based sublayer was used as the support material on the anode, whereas magnetron sputtered nanostructured etched nitrogenated carbon (CNx) served the same role on the cathode. By using this configuration, we were able to significantly decrease the amount of noble metals (to thicknesses of just tens of nanometers), while keeping the performance comparable to that of average state-of-the-art catalysts.
Complex characterization of the prepared supported catalysts includes in-cell performance and durability tests and electrochemical impedance spectroscopy (EIS), as well as scanning electron microscopy (SEM) imaging and X-ray photoelectron spectroscopy (XPS) analysis. Our research proves that magnetron sputtering is a suitable method for thin-film deposition of electrocatalysts. The tested set-up of thin-film supported anode and cathode catalysts, with a combined loading of just 120 µg cm⁻², yields remarkable values of specific current. The described approach of thin-film low-loading catalyst deposition might be relevant when noble metal reduction is the topmost priority.
Keywords: hydrogen economy, low-loading catalyst, magnetron sputtering, proton exchange membrane water electrolyzer
144 Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may result in discomfort for occupants of buildings, structural damage or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing the construction costs and the design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding force, and outside that range their efficiency decreases. This implies that each passive friction damper is designed, built and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of having a variation in the efficiency of the device according to the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case the expected forces in the building can change and thus considerably reduce the efficiency of the damper (which is designed for a specific sliding force). It is also evident that when a seismic event occurs, the forces on each floor vary in time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force, trying to maintain motion in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost Semiactive Variable Friction Damper (SAVFD) at reduced scale to reduce vibrations of structures subject to earthquakes.
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor, which is controlled with (3) an Arduino board that acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on the dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations compared to the uncontrolled structure.
Keywords: earthquake response, friction damper, semiactive control, shaking table
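The control idea described above, modulating the clamping force so the damper stays near its slip threshold, can be sketched as a simple decentralized law. This is an illustrative assumption, not the algorithm used in the study: the friction coefficient, gain, and actuator limits below are invented values.

```python
# Hedged sketch of a semi-active variable-friction control law (illustrative
# only; the paper's actual decentralized algorithm is not specified here).
# The servomotor-commanded normal force N is modulated so the friction force
# mu*N tracks a fraction ALPHA of the measured interstorey force, keeping the
# damper in the slip (energy-dissipating) phase.

MU = 0.4                      # assumed friction coefficient of the brake pads
ALPHA = 0.9                   # gain < 1: slip force just below interstorey force
N_MIN, N_MAX = 10.0, 500.0    # assumed actuator clamping-force limits, N

def clamping_force(interstorey_force: float) -> float:
    """Commanded normal force so that mu*N ~= ALPHA*|F|, saturated to limits."""
    n = ALPHA * abs(interstorey_force) / MU
    return min(max(n, N_MIN), N_MAX)

# e.g. a 100 N interstorey shear commands N = 0.9 * 100 / 0.4 = 225 N
```

In a real controller this function would run per floor on the Arduino, fed by the acceleration or displacement sensors of the adjacent floors.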
Procedia PDF Downloads 378
143 Flow Links Curiosity and Creativity: The Mediating Role of Flow
Authors: Nicola S. Schutte, John M. Malouff
Abstract:
Introduction: Curiosity is a positive emotion and motivational state that consists of the desire to know. It comprises several related dimensions, including a desire for exploration, deprivation sensitivity, and stress tolerance. Creativity involves generating novel and valuable ideas or products. How curiosity may prompt greater creativity remains to be investigated. The phenomenon of flow may link curiosity and creativity: flow is characterized by intense concentration and absorption and gives rise to optimal performance. Objective of Study: The objective of the present study was to investigate whether the phenomenon of flow may link curiosity with creativity. Methods and Design: Fifty-seven individuals from Australia (45 women and 12 men, mean age 35.33, SD=9.4) participated. Participants were asked to design a program encouraging residents in a local community to conserve water and to record the elements of their program in writing. Participants were then asked to rate their experience as they developed and wrote about their program, using the Dimensional Curiosity Measure sub-scales assessing the exploration, deprivation sensitivity, and stress tolerance facets of curiosity, and the Flow Short Scale. Reliability of the measures, as assessed by Cronbach's alpha, was as follows: Exploration Curiosity = .92, Deprivation Sensitivity Curiosity = .66, Stress Tolerance Curiosity = .93, and Flow = .96. Two raters independently coded each participant's water conservation program description for creativity. The mixed-model intraclass correlation coefficient for the two sets of ratings was .73. The mean of the two ratings produced the final creativity score for each participant. Results: During the experience of designing the program, all three types of curiosity were significantly associated with flow.
Pearson r correlations were as follows: Exploration Curiosity and flow, r = .68 (higher Exploration Curiosity was associated with more flow); Deprivation Sensitivity Curiosity and flow, r = .39 (higher Deprivation Sensitivity Curiosity was associated with more flow); and Stress Tolerance Curiosity and flow, r = .44 (more stress tolerance in relation to novelty and exploration was associated with more flow). Greater experience of flow was significantly associated with greater creativity in designing the water conservation program, r = .39. The direct associations between dimensions of curiosity and creativity did not reach significance; however, the indirect relationships between dimensions of curiosity and creativity, mediated by the experience of flow, were significant. Mediation analysis using PROCESS showed that flow linked Exploration Curiosity with creativity, standardized beta = .23, 95% CI [.02, .25] for the indirect effect; Deprivation Sensitivity Curiosity with creativity, standardized beta = .14, 95% CI [.04, .29]; and Stress Tolerance Curiosity with creativity, standardized beta = .13, 95% CI [.02, .27]. Conclusions: When engaging in an activity, higher levels of curiosity are associated with greater flow, and more flow is associated with higher levels of creativity. Programs intended to increase flow or creativity might build on these findings and also explore causal relationships.
Keywords: creativity, curiosity, flow, motivation
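The indirect-effect logic reported above (curiosity → flow → creativity) reduces to the product a*b of two regression paths. The following is a hedged sketch of that computation, not the authors' PROCESS analysis (which additionally bootstraps the confidence intervals); the toy data are invented for illustration.

```python
# Minimal indirect-effect (mediation) computation: a is the slope of the
# mediator m on the predictor x; b is the slope of the outcome y on m,
# controlling for x (two-predictor OLS in closed form). Indirect effect = a*b.

def mean(v):
    return sum(v) / len(v)

def indirect_effect(x, m, y):
    xc = [v - mean(x) for v in x]
    mc = [v - mean(m) for v in m]
    yc = [v - mean(y) for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(p * q for p, q in zip(xc, mc))
    sxy = sum(p * q for p, q in zip(xc, yc))
    smy = sum(p * q for p, q in zip(mc, yc))
    a = sxm / sxx                                          # path x -> m
    b = (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)   # path m -> y given x
    return a * b

# toy data: curiosity x; flow m = 2x + noise; creativity y = 3m + x
x = [1, 2, 3, 4, 5, 6]
m = [2 * v + e for v, e in zip(x, [1, -1, 1, -1, 1, -1])]
y = [3 * mv + xv for mv, xv in zip(m, x)]
print(round(indirect_effect(x, m, y), 3))  # a = 32/17.5, b = 3 -> 5.486
```

PROCESS reports standardized coefficients and bootstrap CIs on top of exactly this a*b product.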
Procedia PDF Downloads 183
142 A Unified Model for Longshore Sediment Transport Rate Estimation
Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza
Abstract:
Wind wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully understood. Another problem is a lack of sufficient measurement data to verify proposed hypotheses. Different types of models exist for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, related to the different time and space scales of the processes. There are models describing bed-load transport (discussed in this work), suspended transport, and total sediment transport. LST models use, among other things, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated in the case of heterogeneous sediment. Moreover, the LST rate depends strongly on the local environmental conditions. To organize existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the values of estimated longshore instantaneous mass sediment transport are in general agreement with earlier studies and measurements conducted in the area of interest.
However, none of the formulas really stands out from the rest as being particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function), so the model can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed-load in storm conditions; thus, the accuracy of the results depends less on the sediment transport model itself and more on appropriate modeling of the near-bottom velocities.
Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone
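For reference, the threshold structure shared by the compared models (transport as a function of excess bed shear stress) can be illustrated with one of the four classical formulas listed above, Meyer-Peter & Müller. The parameter values below are standard textbook ones and are not the coefficients of the new unified formula.

```python
import math

# Hedged sketch of the Meyer-Peter & Müller (1948) bedload formula:
#   Phi = 8 * (theta - theta_c)**1.5  for theta > theta_c, else 0,
# where theta is the Shields parameter (dimensionless bed shear stress).
# The dimensionless rate Phi is converted to a volumetric transport rate
# per unit width via q_b = Phi * sqrt((s - 1) * g * d**3).

G = 9.81          # gravity, m/s^2
S = 2.65          # relative sediment density (quartz sand)
THETA_C = 0.047   # critical Shields parameter used by MPM

def mpm_bedload(theta: float, d: float) -> float:
    """Volumetric bedload transport rate per unit width, m^2/s."""
    excess = max(theta - THETA_C, 0.0)
    phi = 8.0 * excess ** 1.5
    return phi * math.sqrt((S - 1.0) * G * d ** 3)

print(mpm_bedload(0.03, 2e-4))   # below threshold -> 0.0
print(mpm_bedload(0.20, 2e-4))   # mobile bed -> positive rate
```

The unified formula of the study replaces the fixed coefficient 8 with a site-dependent constant or function, which is the adjustment mechanism described above.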
Procedia PDF Downloads 387
141 Women’s Perceptions of DMPA-SC Self-Injection in Malawi
Authors: Mandayachepa C. Nyando, Lauren Suchman, Innocencia Mtalimanja, Address Malata, Tamanda Jumbe, Martha Kamanga, Peter Waiswa
Abstract:
Background: Subcutaneous depot medroxyprogesterone acetate (DMPA-SC) is an innovation in contraceptive methods that allows users to inject themselves with a hormonal contraceptive in their own homes. Self-injection (SI) of DMPA-SC has the potential to improve the accessibility of family planning to women who want it and who are capable of injecting themselves. Malawi started implementing this innovation in 2018, with SI incorporated into the DMPA-SC delivery strategy from its outset. Methodology: This study involved two districts in Malawi where DMPA-SC SI was rolled out: Mulanje and Ntchisi. We used a qualitative cross-sectional study design in which 60 in-depth interviews were conducted with women of reproductive age (15-45 years). These included SI users, non-users, and women using any other contraceptive method. The interviews were tape-recorded, and the data were transcribed and then analysed using Dedoose software, with themes categorised into parent and child themes. Results: Women perceived DMPA-SC SI as uniquely private, convenient, and less painful when self-injected. In terms of privacy, women in Mulanje and Ntchisi especially appreciated that self-injecting allowed them to use contraception covertly from partners. Some men do not allow their spouses to use modern contraceptive methods; hence, women prefer to use them covertly. “… but I first reach out to men because the strongest power is answered by men (MJ015).” In addition, women reported that SI offers privacy from family and community and less contact with healthcare providers. These aspects of privacy were especially valued in areas where there is a high degree of mistrust around family planning and among those who feel judged or antagonized purchasing contraception, such as young unmarried women.
Women also valued the convenience SI provided: they could save time by injecting themselves at home rather than visiting a healthcare provider, and they had more reliable access to contraception, particularly in the face of stockouts. SI allows for stocking up on doses to accommodate shifting work schedules, future stockouts, or hard times such as the COVID-19 period, when movement of people was restricted. Conclusion: Our findings suggest that SI may meet the needs of many women in Malawi as long as the barriers are eliminated. The barriers women mentioned include fear of self-injecting and proper storage of DMPA-SC, and these can be addressed by proper training. The findings also set the scene for policy revision and direction at the national level and for integrating the approach with national family planning strategies in Malawi. They provide insights that may guide future implementation strategies, strengthen non-clinic family planning access programs, and stimulate continued research.
Keywords: family planning, Malawi, Sayana press, self-injection
Procedia PDF Downloads 65
140 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data
Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder
Abstract:
Problem and Purpose: Intelligent systems are available and helpful to support human decision processes, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are different correlations between the complex parameters. Therefore, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. In particular, subgroup analysis methods are developed, extended, and used to analyze and discover the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data serve as input for conditional probability algorithms that calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications could be applied to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances, and the patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient.
For patient groups of different sizes (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependence on patient number. Conclusions: The aim and advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules, which serve as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and conjunctively associated conditions can be found to conclude the goal parameter of interest. Thus, knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance power for communication with the clinical experts.
Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods
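The rule-ranking step described above rests on two standard association-rule measures, support and confidence of a rule "premises → goal". A hedged sketch follows; the patient records and attribute names are invented for illustration and are not from the project's data.

```python
# Support and confidence of an association rule over structured patient
# records: support = P(premises and goal), confidence = P(goal | premises).
# Rules can then be ranked by confidence to find the strongest premises.

def support_confidence(records, premises, goal):
    """premises, goal: dicts of attribute -> value. Returns (support, confidence)."""
    match_p = [r for r in records
               if all(r.get(k) == v for k, v in premises.items())]
    match_pg = [r for r in match_p
                if all(r.get(k) == v for k, v in goal.items())]
    support = len(match_pg) / len(records)
    confidence = len(match_pg) / len(match_p) if match_p else 0.0
    return support, confidence

# invented example records (hypothetical attributes)
patients = [
    {"lens": "typeA", "instrument": "phaco1", "complication": False},
    {"lens": "typeA", "instrument": "phaco1", "complication": False},
    {"lens": "typeA", "instrument": "phaco2", "complication": True},
    {"lens": "typeB", "instrument": "phaco1", "complication": True},
]
sup, conf = support_confidence(patients,
                               {"lens": "typeA", "instrument": "phaco1"},
                               {"complication": False})
print(sup, conf)   # 0.5 1.0
```

In the multi-target case described above, `goal` simply holds more than one attribute-value pair.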
Procedia PDF Downloads 253
139 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different public-sector responses to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic good, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport sciences, but until recently there was not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. Using Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data as travel times between freely chosen coordinate pairs.
From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas outside the downtown whose inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
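The congestion measure implied above, the difference between congested and free-flow travel time per road segment, can be sketched in a few lines. The segment names and durations below are invented; in practice the two durations would come from the Distance Matrix API's `duration` and `duration_in_traffic` response fields.

```python
# Hedged sketch: a relative-delay congestion index per road segment, and a
# simple hot-spot filter. 0.0 means free flow; 1.0 means travel time doubled.

def congestion_index(free_flow_s: float, in_traffic_s: float) -> float:
    return max(in_traffic_s - free_flow_s, 0.0) / free_flow_s

# invented segments: (free-flow seconds, in-traffic seconds)
segments = {
    "Vaci ut (4th -> 13th)": (420, 980),
    "inner ring (13th -> 5th)": (300, 390),
}
hot_spots = {name: round(congestion_index(f, t), 2)
             for name, (f, t) in segments.items()
             if congestion_index(f, t) > 0.5}   # flag > 50% delay as hot spot
print(hot_spots)   # {'Vaci ut (4th -> 13th)': 1.33}
```

A place-based charge of the kind proposed above could price each segment in proportion to this index rather than by a fixed downtown boundary.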
Procedia PDF Downloads 198
138 Pressure-Robust Approximation for the Rotational Fluid Flow Problems
Authors: Medine Demir, Volker John
Abstract:
Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as in many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems, and for such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier-Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier-Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott-Vogelius pairs. This approach may, however, require a modification of the meshes, such as the use of barycentric-refined grids for Scott-Vogelius pairs. This strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of the pressure-robust method could be applied to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott-Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly singular vertices.
Keywords: Navier-Stokes equations in a rotating frame of reference, Coriolis force, pressure-robust error estimate, Scott-Vogelius pairs of finite element spaces
Procedia PDF Downloads 67
137 Fast Detection of Local Fiber Shifts by X-Ray Scattering
Authors: Peter Modregger, Özgül Öztürk
Abstract:
Glass fabric reinforced thermoplastics (GFRT) are composite materials that combine low weight and resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of defect is the local fiber shift, which can be difficult to detect. Recently, we have experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, to simultaneously provide absorption, phase, and scattering contrast. Contrast formation works as follows: the incident X-ray beam is split into smaller beamlets by the sample mask; these are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in Gaussian-like intensity distributions in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we showed that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The defect detection performance was quantified by p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples.
This was further improved for the scattering contrast to p=0.0004 and CNR=4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrast provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition. For the results above, the sample mask was scanned over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility of real-time acquisition. This constitutes a vital step in the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge illumination
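The per-pixel analysis described above (area of the scan curve = absorption, peak offset = refraction, width = scattering) can be sketched via the statistical moments of the sampled intensity curve. This is a hedged illustration with a synthetic scan, estimating the moments directly rather than performing the full Gaussian fit used in practice.

```python
import math

# Moments of a beamlet intensity curve sampled during a sample-mask scan:
# area -> absorption signal, centroid -> refraction shift, sigma -> scattering.

def beamlet_moments(positions, intensities):
    """Return (area, centroid, sigma) of a sampled intensity curve."""
    area = sum(intensities)
    centroid = sum(p * i for p, i in zip(positions, intensities)) / area
    var = sum(i * (p - centroid) ** 2
              for p, i in zip(positions, intensities)) / area
    return area, centroid, math.sqrt(var)

# synthetic 21-step scan: Gaussian centred at 0.3 (refraction shift) with
# sigma 1.2 (scattering width); units are arbitrary here
xs = [0.5 * k - 5.0 for k in range(21)]
ys = [math.exp(-0.5 * ((x - 0.3) / 1.2) ** 2) for x in xs]
area, centre, sigma = beamlet_moments(xs, ys)
print(round(centre, 2), round(sigma, 2))
```

The single-image mode mentioned above replaces this per-pixel scan curve with one exposure, trading the full moment estimate for speed.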
Procedia PDF Downloads 63
136 Evaluation of Electrophoretic and Electrospray Deposition Methods for Preparing Graphene and Activated Carbon Modified Nano-Fibre Electrodes for Hydrogen/Vanadium Flow Batteries and Supercapacitors
Authors: Barun Chakrabarti, Evangelos Kalamaras, Vladimir Yufit, Xinhua Liu, Billy Wu, Nigel Brandon, C. T. John Low
Abstract:
In this work, we perform electrophoretic deposition (EPD) of activated carbon on a number of substrates to prepare symmetrical coin cells for supercapacitor applications. From several recipes, evaluating solvents such as isopropyl alcohol, N-Methyl-2-pyrrolidone (NMP), and acetone, binders such as polyvinylidene fluoride (PVDF), and charging agents such as magnesium chloride, we identify a practical route to supercapacitors that consistently achieve 100 F/g. We then adapt this EPD method to deposit reduced graphene oxide on SGL 10AA carbon paper to obtain cathodic materials for testing in a hydrogen/vanadium flow battery. In addition, a self-supported hierarchical carbon nano-fibre is prepared by means of electrospray deposition of an iron phthalocyanine solution onto a temporary substrate, followed by carbonisation to remove heteroatoms. This process also induces a degree of nitrogen doping in the carbon nano-fibres (CNFs), which significantly improves their catalytic performance, as detailed in other publications. The CNFs are then used as catalysts by attaching them to graphite felt electrodes facing the membrane inside an all-vanadium flow battery (Scribner cell using serpentine flow distribution channels), and efficiencies as high as 60% are noted at high current densities of 150 mA/cm². Twenty charge and discharge cycles show that the CNF catalysts consistently perform better than pristine graphite felt electrodes. Following this, we also test the CNFs as an electro-catalyst in the hydrogen/vanadium flow battery (on the cathodic side, as mentioned briefly in the first paragraph) facing the membrane, based on past studies from our group. Once again, we note consistently good efficiencies of 85% and above for CNF-modified graphite felt electrodes, compared to 60% for pristine felts, at a low current density of 50 mA/cm² (over 20 charge and discharge cycles of the battery).
From this preliminary investigation, we conclude that the CNFs may be used as catalysts for other systems, such as vanadium/manganese, manganese/manganese, and manganese/hydrogen flow batteries, in the future. We are generating data for such systems at present, and further publications are expected.
Keywords: electrospinning, carbon nano-fibres, all-vanadium redox flow battery, hydrogen-vanadium fuel cell, electrocatalysis
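The ~100 F/g figure quoted for the coin cells is typically obtained from a galvanostatic charge-discharge test via C = I·Δt/(m·ΔV). A hedged sketch follows; the current, discharge time, electrode mass, and voltage window are invented example values, not measurements from this work.

```python
# Gravimetric capacitance of a supercapacitor electrode from a
# constant-current (galvanostatic) discharge: C = I * dt / (m * dV).

def specific_capacitance(current_a: float, discharge_s: float,
                         mass_g: float, delta_v: float) -> float:
    """Specific capacitance in F/g."""
    return current_a * discharge_s / (mass_g * delta_v)

# e.g. 1 mA discharge over 200 s, 2 mg of activated carbon, 1 V window
print(specific_capacitance(1e-3, 200.0, 2e-3, 1.0))  # -> 100.0 F/g
```

For a symmetrical two-electrode cell, a factor accounting for the series connection of the two electrodes is often applied on top of this single-electrode expression.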
Procedia PDF Downloads 291