Search results for: full-service schools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7566

426 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components to achieve high uptime is the goal of predictive maintenance. The major challenge faced by businesses in industry is the significant cost associated with delays in service delivery caused by system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of a Prognostic Health Management (PHM) application. The recent emergence of Industry 4.0, or the Industrial Internet of Things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system and component-to-component interactions, resulting in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexities inherent in such datasets, notably the imbalanced classification problem, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn growing research interest from both academia and industry. The large volumes of data generated by industrial processes come with varying degrees of complexity, which poses a challenge for analytics. In particular, the imbalanced classification problem is pervasive in industrial datasets and can degrade the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss.
In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for aircraft component replacement is developed to predict replacements in advance by exploring historical aircraft data. The approach is based on a hybrid ensemble method that improves prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of performance, on real-world aircraft operation and maintenance datasets spanning seven years. Our approach shows better performance than other similar approaches. We also validate its strength in handling multiclass imbalanced datasets, where our results again compare favorably with other baseline classifiers.
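The abstract does not give implementation details, but the core idea of a rebalancing ensemble can be sketched as follows. Everything here is an illustrative assumption, not the authors' actual pipeline: the toy data, the single-feature threshold "stump" base learner, and the random-undersampling scheme all stand in for a real hybrid method.

```python
import random
from collections import Counter

def undersample(X, y, seed):
    # balance the training set by randomly undersampling the majority class
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        for xi in rng.sample(rows, n_min):
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

class Stump:
    """Single-feature threshold classifier (stands in for a real base learner)."""
    def fit(self, X, y):
        # pick the threshold on feature 0 that best separates the two classes
        best = (None, -1)
        for t in sorted({x[0] for x in X}):
            correct = sum((x[0] >= t) == (yi == 1) for x, yi in zip(X, y))
            if correct > best[1]:
                best = (t, correct)
        self.t = best[0]
        return self
    def predict(self, X):
        return [1 if x[0] >= self.t else 0 for x in X]

class BalancedBagging:
    """Ensemble of base learners, each trained on a rebalanced subsample."""
    def __init__(self, n_estimators=11):
        self.n = n_estimators
    def fit(self, X, y):
        self.models = [Stump().fit(*undersample(X, y, seed=i))
                       for i in range(self.n)]
        return self
    def predict(self, X):
        votes = [m.predict(X) for m in self.models]        # one row per model
        return [1 if sum(col) * 2 > self.n else 0 for col in zip(*votes)]

# toy imbalanced data: 1 = component replaced (rare), 0 = healthy
X = [[i] for i in range(100)]
y = [1 if i >= 90 else 0 for i in range(100)]              # 10% minority class
clf = BalancedBagging().fit(X, y)
print(Counter(zip(y, clf.predict(X))))                     # → Counter({(0, 0): 90, (1, 1): 10})
```

Because each base learner sees a balanced subsample, the rare "replacement" class is no longer drowned out by the majority vote. A real pipeline would swap the stump for a stronger learner and could use more sophisticated resampling (e.g., synthetic oversampling) in place of random undersampling.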

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 174
425 Testing of Canadian Integrated Healthcare and Social Services Initiatives with an Evidence-Based Case Definition for Healthcare and Social Services Integrations

Authors: S. Cheng, C. Catallo

Abstract:

Introduction: Canada's healthcare and social services systems are failing high-risk, vulnerable older adults. Care for vulnerable older Canadians (65 and older) is not optimal: it does not address their care needs using a holistic approach. Given the growing aging population, and given that the care needs of seniors with complex conditions are among the highest in Canada's healthcare system, there is a sense of urgency to optimize care. Integration of health and social services is an emerging trend in Canada when compared to European countries, and there is no common, universal understanding of healthcare and social services integration within the country. Consequently, a clear understanding and definition of integrated health and social services are absent in Canada. Objectives: A study was undertaken to develop a case definition for integrated health and social care initiatives that serve older adults, which was then tested against three Canadian integrated initiatives. Methodology: A limited literature review, comprising both scientific and grey literature, was undertaken to identify common characteristics of integrated health and social care initiatives that serve older adults, in order to develop a case definition. Three Canadian integrated initiatives, all located in the province of Ontario, were identified using an online search and a screening process. They were surveyed to determine whether the literature-based integration definition applied to them. Results: The literature showed 24 common healthcare and social services integration characteristics that could be categorized into ten themes: 1) patient-care approach; 2) program goals; 3) measurement; 4) service and care quality; 5) accountability and responsibility; 6) information sharing; 7) decision-making and problem-solving; 8) culture; 9) leadership; and 10) staff and professional interaction.
The three initiatives showed agreement on all the integration characteristics except those associated with healthcare and social care professional interaction, collaborative leadership, and shared culture. This disagreement may be due to several reasons, including the existing governance divide between the healthcare and social services sectors within the province of Ontario, which has created a ripple effect in how professionals in the two sectors interact. In addition, the three initiatives may be at different stages of integration maturity, which may explain disagreement on the characteristics associated with leadership and culture. Conclusions: A case definition for healthcare and social services integration that incorporates common integration characteristics can act as a useful instrument for identifying integrated healthcare and social services, particularly given the emerging and evolving state of this phenomenon within Canada.

Keywords: Canada, case definition, healthcare and social services integration, integration, seniors health, services delivery

Procedia PDF Downloads 155
424 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment, and for decades it has been considered an integral part of clinical teaching and learning strategy in medical education. The several types of simulation used in medical education and the clinical environment include full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. Learning is tailored to the educational needs of students to make the experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely heavily on simulation for pre-clinical students, yet students' confidence levels and perceived competence still need to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity.
The study was carried out at International Medical University, involving pre-clinical (medical) students who began low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file, and IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning, with the significance level set at p < 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session enjoyable. The feedback highlighted the following areas to focus on when designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
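For readers who want to reproduce the two statistics named above outside Excel or SPSS, both can be computed from first principles. This is a minimal sketch on invented scores, not the study's data; the tie-handling and item counts are simplifying assumptions.

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists (one per question)."""
    k = len(items)                     # number of items
    n = len(items[0])                  # number of respondents
    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def spearman_rho(x, y):
    """Spearman's rank correlation (tie-free data assumed for simplicity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# invented data: 3 questionnaire items answered by 4 students
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 3, 4]]
print(round(cronbach_alpha(items), 3))
```

In practice the SPSS and `scipy.stats` implementations also correct Spearman's rho for tied ranks, which the rank-difference formula above does not.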

Keywords: low-fidelity simulation, pre-clinical simulation, student satisfaction, self-confidence

Procedia PDF Downloads 78
423 Method of Nursing Education: History Review

Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán

Abstract:

Introduction: Nursing as a profession, from its initial formation and through its development in practice, has been built and identified mainly by its technical competence and professionalization within the positivist approach of the 19th century, which provides a conception of disease built on the biomedical paradigm, where the care provided focuses more on physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the nursing profession's history, to learn what the profession was like before the 19th century; it is unclear whether there were organizations or people with knowledge about caring for others, or whether many people simply survived by chance. Holistic care, in which the appearance of disease directly affects all dimensions of the person (physical, emotional, cognitive, social and spiritual), is not a concept of the 21st century: it has been common practice, most probably since life was established in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analyzing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of describing the development of the nursing profession throughout history. Method: We performed a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: PubMed, Scopus, Web of Science, Temperamentvm and CINAHL. Articles were selected according to PRISMA criteria, with a critical reading of the full text using the CASPe method.
To complement this, we read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and Saint John of God, as primary manuscripts establishing the origin of modern nursing and its professionalization. Ethical considerations of data processing were applied. Results: After applying inclusion and exclusion criteria to our searches in PubMed, Scopus, Web of Science, Temperamentvm and CINAHL, we obtained 51 research articles, which we analyzed and distinguished by year of publication and type of study. From the articles obtained, we can see the importance of our background as a profession in public health before modern times, and the value of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of this review, which limits the precision of the evidence and recommendations that can be drawn about nursing before modern times. Even so, an increase in research on nursing history has been observed. In light of the aspects analyzed, the need for new research into the history of nursing emerges, in order to germinate studies of the historical construction of care before the 19th century and of the theories created then. We can affirm that knowledge and ways of caring were taught before the 19th century, but they were not called theories, as these concepts were created in modern times.

Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education

Procedia PDF Downloads 113
422 Impact of Weather Conditions on Non-Food Retailers and Implications for Marketing Activities

Authors: Noriyuki Suyama

Abstract:

This paper discusses purchasing behavior in retail stores, with a particular focus on the impact of weather changes on customers' purchasing behavior. Weather conditions are among the factors that most affect the management and operation of retail stores. However, there is very little academic research on the relationship between weather conditions and marketing, although the topic has practical importance and a body of experience-based knowledge. For example, customers are more hesitant to go out when it rains than when it is sunny, and they may postpone purchases or buy only the minimum necessary items even if they do go out. It is not difficult to imagine that weather has a significant impact on consumer behavior, yet to the best of the author's knowledge, only a few studies have delved into the purchasing behavior of individual customers. According to Hirata (2018), the economic impact of weather in the United States is estimated at 3.4% of GDP, or "$485 billion ± $240 billion per year." However, weather data is not yet fully utilized. Representative industries include transportation (e.g., airlines, shipping, roads, railroads), leisure (e.g., leisure facilities, event organizers), energy and infrastructure (e.g., construction, factories, electricity and gas), agriculture (e.g., agricultural organizations, producers), and retail (e.g., retail, food service, convenience stores). This paper focuses on the retail industry and advances research on weather. The first reason is that, as far as the author has investigated, only grocery retailers use temperature, rainfall, wind, weather, and humidity as parameters for their products, and there are very few examples of academic use in other retail sectors.
Second, according to NBL's "Toward Data Utilization Starting from Consumer Contact Points in the Retail Industry," labor productivity in the retail industry is very low compared to other industries, and according to Hirata (2018), improving it is recognized as a major challenge. On the other hand, according to the "Survey and Research on Measurement Methods for Information Distribution and Accumulation (2013)" by the Ministry of Internal Affairs and Communications, the amount of data accumulated in the retail industry is extremely large, so new applications are expected from analyzing these data together with weather data. Third, there is currently a wealth of weather-related information available. There are, for example, companies such as WeatherNews, Inc. that make weather information their business and not only disseminate weather information but also provide information that supports businesses in various industries. Despite the wide range of influences that weather has on business, its impact has not been a subject of research in the retail industry, where business models need to be imagined, especially from a micro perspective. In this paper, the author discusses the important aspects of the impact of weather on marketing strategies in the non-food retail industry.

Keywords: consumer behavior, weather marketing, marketing science, big data, retail marketing

Procedia PDF Downloads 81
421 Brand Resonance Strategy for Long-Term Market Survival: Does Brand Resonance Matter for SMEs? An Investigation of SMEs' Digital Branding (Facebook, Twitter, Instagram and Blog) Activities and Strong Brand Development

Authors: Noor Hasmini Abd Ghani

Abstract:

Brand resonance is among the newer focused strategies receiving increasing attention from larger companies concerned with long-term market survival. Brand resonance emphasizes two main characteristics, intensity and activity, which are able to generate a psychological bond and an enduring relationship between a brand and a consumer. This strong attachment represents brand resonance through the concept of the consumer-brand relationship (CBR), which confers a competitive advantage for long-term market survival. The brand resonance approach applies not only to larger companies but can also be adapted to small and medium enterprises (SMEs). SMEs are recognized as a vital pillar of the world economy in both developed and emerging countries, owing to their contributions to economic growth, such as employment opportunities, wealth creation, and poverty reduction. In particular, SMEs in Malaysia are pivotal to the well-being of the Malaysian economy and society: they provide jobs for 66% of the workforce and contribute 40% of GDP. Among its several sectors, the SME service category covering food and beverage (F&B) is one of the high-potential industries in Malaysia. For these reasons, a strong brand, or brand equity, is vital for SMEs' long-term market survival. However, there are still few appropriate strategies for developing their brand equity. The difficulties were never so evident until COVID-19 swept across the globe from 2020. Since the pandemic began, more than 150,000 SMEs in Malaysia have shut down, leaving more than 1.2 million people jobless. As SMEs are a pillar of every economy in the world, and given the negative effects of COVID-19 on their economic growth, their protection has become more important than ever.
Therefore, focusing on a strategy able to develop strong SME brands is compulsory, and this is where the strategy of brand resonance is introduced in this study. Mainly, this study aims to investigate the impact of CBR as a predictor and mediator in the context of social media marketing (SMM) activities and SMEs' e-brand equity (strong brand) building. The study employed a quantitative research design using an electronic survey method, with a valid response of 300 respondents. Interestingly, the results revealed the important role of CBR, both as predictor and as mediator, in the context of SME SMM and brand equity development. Further, the study provides several theoretical and practical implications that can benefit SMEs in enhancing their strategic marketing decisions.
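The predictor-and-mediator test described above can be illustrated with a Baron-and-Kenny-style regression decomposition. The variable names and toy numbers below are invented for illustration and have no connection to the study's survey data, which the abstract does not disclose.

```python
# Hypothetical variables: X = social media marketing activity,
# M = consumer-brand relationship (mediator), Y = brand equity.
X = [1, 2, 3, 4, 5, 6]
M = [2 * x + e for x, e in zip(X, [0.1, -0.1, 0.2, -0.2, 0.1, -0.1])]
Y = [3 * m + 1 for m in M]            # Y depends on X only through M

def simple_slope(y, x):
    """Slope of the simple regression of y on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def ols2(y, x1, x2):
    """Slopes b1, b2 of y = b0 + b1*x1 + b2*x2 (normal equations, Cramer's rule)."""
    n = len(y)
    def cov(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    det = s11 * s22 - s12 ** 2
    return ((cov(x1, y) * s22 - cov(x2, y) * s12) / det,
            (cov(x2, y) * s11 - cov(x1, y) * s12) / det)

a = simple_slope(M, X)                # path a: X -> M
b, c_prime = ols2(Y, M, X)            # path b (M -> Y) and direct effect c'
c = simple_slope(Y, X)                # total effect of X on Y
# full mediation here: c' ~ 0, and the total effect decomposes as c = a*b + c'
print(round(a * b, 3), round(c_prime, 3), round(c, 3))
```

Modern practice favors bootstrap tests of the indirect effect a*b (e.g., Hayes's PROCESS) over the stepwise logic, but the decomposition shown is the quantity being tested in either case.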

Keywords: SME brand equity, SME social media marketing, SME consumer brand relationship, SME brand resonance

Procedia PDF Downloads 60
420 Schoolwide Implementation of Schema-Based Instruction for Mathematical Problem Solving: An Action Research Investigation

Authors: Sara J. Mills, Sally Howell

Abstract:

The field of special education has long struggled to bridge the research-to-practice gap. There is ample evidence from research of effective strategies for students with special needs, but these strategies are not routinely implemented in schools in ways that yield positive results for students. In recent years, the field of special education has turned its focus to implementation science, that is, to discovering effective methods of implementing evidence-based practices in school settings. Teacher training is a critical factor in implementation. This study aimed to successfully implement Schema-Based Instruction (SBI) for math problem solving in four classrooms in a special primary school serving students with language deficits, including students with Autism Spectrum Disorders (ASD) and Intellectual Disabilities (ID). Using an action research design that allowed for adjustments and modifications over the year-long study, two cohorts of teachers across the school were trained and supported in six-week learning cycles to implement SBI in their classrooms. The learning cycles included a one-day training followed by six weeks of one-on-one or team coaching and three fortnightly cohort group meetings. After the first cohort of teachers completed the learning cycle, modifications and adjustments were made to lesson materials in an attempt to improve their effectiveness with the second cohort. Fourteen teachers participated in the study, including master special educators (n=3), special education instructors (n=5), and classroom assistants (n=6). Thirty-one students participated in the study (21 boys and 10 girls), ranging in age from 5 to 12 years (M = 9 years). Twenty-one students had a diagnosis of ASD and 20 had a diagnosis of mild or moderate ID, with 13 of these students having both ASD and ID; the remaining students had diagnosed language disorders. To evaluate the effectiveness of the implementation approach, both student and teacher data were collected.
Student data included pre- and post-tests of math word problem solving. Teacher data included fidelity of treatment checklists and pre-post surveys of teacher attitudes and efficacy for teaching problem solving. Finally, artifacts were collected throughout the learning cycle. Results from cohort 1 and cohort 2 revealed similar outcomes. Students improved in the number of word problems they answered correctly and in the number of problem-solving steps completed independently. Fidelity of treatment data showed that teachers implemented SBI with acceptable levels of fidelity (M = 86%). Teachers also reported increases in the amount of time spent teaching problem solving, their confidence in teaching problem solving and their perception of students’ ability to solve math word problems. The artifacts collected during instruction indicated that teachers made modifications to allow their students to access the materials and to show what they knew. These findings are in line with research that shows student learning can improve when teacher professional development is provided over an extended period of time, actively involves teachers, and utilizes a variety of learning methods in classroom contexts. Further research is needed to evaluate whether these gains in teacher instruction and student achievement can be maintained over time once the professional development is completed.
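A pre/post design like the one above is typically analyzed with a paired-samples comparison. The sketch below computes the paired t statistic on invented word-problem scores; the study's actual data and analysis may differ.

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre/post scores (df = n - 1)."""
    d = [b - a for a, b in zip(pre, post)]       # per-student gain
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))  # sample SD of gains
    return mean / (sd / math.sqrt(n))

# invented scores: word problems answered correctly before/after SBI
pre  = [2, 3, 1, 4, 2]
post = [4, 5, 3, 6, 3]
t = paired_t(pre, post)
print(round(t, 2))   # compare against the t-distribution critical value, df = 4
```

With only a handful of students per classroom, a nonparametric alternative such as the Wilcoxon signed-rank test would also be a defensible choice.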

Keywords: implementation science, mathematics problem solving, research-to-practice gap, schema based instruction

Procedia PDF Downloads 125
419 The Lifecycle of a Heritage Language: A Comparative Case Study of Volga German Descendants in North America

Authors: Ashleigh Dawn Moeller

Abstract:

This comparative case study examines the language attitudes and behaviors of descendants of Volga German immigrants in North America, and how these attitudes, combined with surrounding social conditions, have caused their heritage language to develop differently within each community. Of particular interest for this study are the accounts of second- and third-generation descendants in Oregon, Kansas, and North Dakota regarding their parents' and grandparents' attitudes toward their language, and how these correlate with the current sentiment toward, and visibility of, their heritage language and culture. This study discusses the point at which cultural identity can diverge from language identity and what elements play a role in this development, establishing the potential for environments (linguistic landscapes) which uphold their heritage yet have detached from the language itself. Emigrating from Germany in the 1700s, these families settled for over a hundred years along the Volga region of Imperial Russia. Subsequently, many descendants of these settlers immigrated to the Americas in the 1800s and 1900s. Identifying as neither German nor Russian, they called themselves Wolgadeutsche (Volga Germans). During their time in Russia, the German language was maintained relatively homogeneously, yet the use and status of their heritage language diverged considerably upon settlement across the Americas. The data show that specific conditions, such as community isolation, size, religion, and location, as well as language policy established prior to and following the Volga German immigration to North America, have had a substantial impact on the maintenance of their heritage language, causing complete loss in some areas and peripheral use or even full rebirth in others. These past conditions, combined with the family accounts, correlate directly with the general attitudes and ideologies of the descendants toward their heritage language.
The data also show that in many locations, despite a strong presence of German within the linguistic landscape, minimal to no German is spoken or understood; the attitude toward the language is indifferent, while the heritage itself is staunchly maintained and celebrated. Data for this study were gathered from historical accounts, archived records and newspapers, and published biographies, as well as from formal interviews with second- and third-generation descendants of Volga German immigrants conducted in Oregon and Kansas. Through the interviews, members of the community shared their family genealogies as well as biographies published by family members. These have helped trace their relatives back to specific locations, allowing for comparisons within the same families residing in distinctly different areas of North America. This study is part of a larger ongoing project that researches the immigration of Volga and Black Sea Germans to North America and diachronically examines the overarching sociological factors that have directly impacted the maintenance, loss, or rebirth of their heritage language. The project follows specific families who settled in areas of Colorado, Kansas, Nebraska, Illinois, Minnesota, North and South Dakota, Saskatchewan, and Manitoba, and who later had relatives move west to areas of Oregon and Washington State. Interviews for the larger project will continue into the following year.

Keywords: heritage language, immigrant language, language change, language contact, linguistic landscape, Volga Germans, Wolgadeutsche

Procedia PDF Downloads 121
418 Dietetics Practice in the Scope of Disease Prevention in Community Settings: A School-Based Obesity Prevention Program

Authors: Elham Abbas Aljaaly, Nahlaa Abdulwahab Khalifa

Abstract:

The active method of disease prevention is seen as the most affordable and sustainable way to deal with the risks of non-communicable diseases such as obesity. This eight-week project aimed to pilot the feasibility and acceptability of a school-based programme proposed to prevent and modify overweight status and possible related risk factors among intermediate-level schoolgirls in Jeddah city. The programme was conducted through a comprehensive approach targeting the physical environment and school policies (nutritional, exercise, and behavioural approaches) and was designed to cultivate personal and environmental awareness in girls' schools. This was applied by promoting healthy eating and physical activity through policies, physical education, healthier options for school canteens, and the creation of school health teams. The prevention programme was applied to 68 students (who agreed to participate) from grades 7, 8 and 9. A pre- and post-assessment questionnaire was administered to 66 students. The questionnaires were designed to obtain information on students' knowledge about health, nutrition and physical activity; survey questions covered nutrients, food consumption patterns, food intake and lifestyle. Physical education included training sessions introducing new opportunities for physical activities to be performed during or after school hours. A running competition to enhance students' performance in physical activities was also conducted during the school visit. A visit to the school canteen was conducted to check, observe, record and assess all available food and beverage items and meals. The assessment used a subjective method classifying each food or beverage as high in saturated fat, salt and sugar (HFSS) or non-HFSS. The school canteen administrators were encouraged to provide healthy food and beverage items, and a sample healthy canteen was provided for implementation.
Two healthy options were introduced to the school canteen, and students' preferences for the introduced options and their purchasing were followed up. Thirty-eight percent of the girls (n=26) were not participating in any form of physical activity inside or outside school. Skipping breakfast was reported by 42% (n=28) of students, and 19% (n=13) reported no daily consumption of fruit or vegetables. Significant changes were noticed in students' (n=66) overall responses to the pre and post questions (p = .001). All students participated in the running competition sessions and reported satisfaction and enjoyment. The research team recorded no absences from the physical education and activity sessions throughout the delivered programme. The purchasing of the introduced healthy options, salad and oatmeal, increased to 18% over 8 weeks at the school canteen, and slightly affected the purchasing of other, less healthy options. The piloted programme promoted better health and nutrition knowledge and healthier eating and lifestyle attitudes, which could help young girls achieve sustainable changes. It is expected that the outcomes of the programme will be a cornerstone for a future national study that will assist policy makers and participants in building a knowledgeable health-promotion scenario and in making sure that school students have access to healthy foods, physical exercise and a healthy lifestyle.

Keywords: adolescents, diet, exercise, behaviours, overweight/obesity, prevention-intervention programme, Saudi Arabia, schoolgirls

Procedia PDF Downloads 129
417 The Relationship between the Competence Perception of Student and Graduate Nurses and Their Autonomy and Critical Thinking Disposition

Authors: Zülfiye Bıkmaz, Aytolan Yıldırım

Abstract:

This study was planned as a descriptive regression study to determine the relationships between the competency levels of working nurses, the competency levels expected by nursing students, nurses' critical thinking dispositions, their perceived autonomy levels, and certain sociodemographic characteristics. It is also a methodological study with regard to the intercultural adaptation of the Nursing Competence Scale (NCS) in both working and student samples. The nurse sample consisted of 443 nurses who had worked at a university hospital for at least six months and completed the questionnaires. The student group consisted of 543 third- and fourth-year nursing students from four public universities. Data collection tools consisted of a questionnaire prepared to capture the sociodemographic, economic, and personal characteristics of the participants, the 'Nursing Competence Scale', the 'Autonomy Subscale of the Sociotropy-Autonomy Scale', and the 'California Critical Thinking Disposition Inventory'. For data evaluation, descriptive statistics, nonparametric tests, Rasch analysis, and correlation and regression tests were used. The language validity of the NCS was established by translation and back-translation, and its content validity through expert review. The scale, in its final form, was piloted with a group consisting of graduate and student nurses. The time constancy of the test was assessed by the test-retest method, and the split-half method was used to reduce timing problems. The Cronbach's alpha coefficient of the scale was found to be 0.980 for the nurse group and 0.986 for the student group.
Statistically significant relationships were found between competence and critical thinking and variables such as age, gender, marital status, family structure, having had critical thinking training, education level, year of study, service worked in, employment style and position, and employment duration. Statistically significant relationships were also found between autonomy and certain variables of the student group, such as year of study, employment status, decision-making style regarding self, total duration of employment, employment style, and education status. As a result, it was determined that the interculturally adapted NCS is a valid and reliable measurement tool and is associated with autonomy and critical thinking.
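The internal-consistency statistic reported above (Cronbach's alpha of 0.980 for nurses and 0.986 for students) can be illustrated with a minimal sketch. The item scores below are invented for demonstration only; they are not the NCS data.

```python
# Minimal sketch of Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(totals)).
# The toy Likert responses below are hypothetical, not from the study.

def cronbach_alpha(items):
    """items: list of per-item score lists, one entry per respondent."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-respondent total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three hypothetical 5-point Likert items answered by five respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Values close to 1, such as those reported for the NCS, indicate that the items measure a single underlying construct consistently.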

Keywords: nurse, nursing student, competence, autonomy, critical thinking, Rasch analysis

Procedia PDF Downloads 393
416 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, it is evident that there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used, as well as deeper discussions of each of the topics, allowing teams to adapt the process to their skills, preferences, and product type. 
Teams found the solution very useful and intuitive and experienced significantly less confusion and mistakes with the process than teams who did not use it. Those with a design background found it especially useful for the engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as more examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools to those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 108
415 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to Zero Trust, in which essential security capabilities are deployed so as to provide policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of major attacks. With the explosion of cybersecurity threats, such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and prevention system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on standard TCP/IP protocols, routing protocols, and security protocols. It thereby forms the basis for the detection of attack classes, applying signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. 
The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
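The rule-generation step described above can be sketched with a minimal Apriori-style pass: count itemsets over audit records, keep those meeting a support threshold, and derive rules meeting a confidence threshold. The session records, feature names, and thresholds below are hypothetical illustrations, not the paper's dataset or algorithm.

```python
# Minimal sketch of association rule mining over audit-trail events.
# Each "transaction" is the set of features observed in one network session.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {frozenset: support} for all itemsets meeting min_support."""
    counts = {}
    for t in transactions:
        for r in range(1, len(t) + 1):
            for combo in combinations(sorted(t), r):
                key = frozenset(combo)
                counts[key] = counts.get(key, 0) + 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

def rules(freq, min_confidence):
    """Derive rules X -> Y with confidence = support(X ∪ Y) / support(X)."""
    out = []
    for itemset, supp in freq.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in combinations(itemset, r):
                lhs = frozenset(lhs)   # any subset of a frequent set is frequent
                conf = supp / freq[lhs]
                if conf >= min_confidence:
                    out.append((lhs, itemset - lhs, conf))
    return out

# Hypothetical audit events
sessions = [
    {"syn_flood", "port_scan"},
    {"syn_flood", "port_scan"},
    {"syn_flood", "normal_login"},
    {"port_scan", "ftp_probe"},
]
freq = frequent_itemsets(sessions, min_support=0.5)
for lhs, rhs, conf in rules(freq, min_confidence=0.6):
    print(set(lhs), "->", set(rhs), round(conf, 2))
```

Rules mined this way from audit trails can then be translated into firewall policies, as the abstract describes.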

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 227
414 A Model for a Continuous Professional Development Program for Early Childhood Teachers in Villages: Insights from the Coaching Pilot in Indonesia

Authors: Ellen Patricia, Marilou Hyson

Abstract:

Coaching has shown great potential to strengthen the impact of brief group trainings and to help early childhood teachers solve specific problems at work, with the goal of raising the quality of early childhood services. However, there have been some doubts about the benefits that village teachers can receive from coaching. It is perceived that village teachers may struggle with the thinking skills needed to make coaching beneficial. Furthermore, there are reservations about whether principals and supervisors in villages are open to coaching’s facilitative approach, as opposed to the directive approach they have been using. As such, the use of coaching to develop the professionalism of early childhood teachers in villages needs to be examined. The Coaching Pilot for early childhood teachers in Indonesian villages provides insights into the above issues. The Coaching Pilot is part of the ECED Frontline Pilot, a collaboration between the Government of Indonesia and the World Bank with support from the Australian Government (DFAT). The Pilot started with coordinated efforts with the local governments in two districts to select principals and supervisors, equipped with basic knowledge about early childhood education, to take part in a 2-day coaching training. Afterwards, the participants were asked to complete 25 hours of coaching with early childhood teachers who had participated in the Enhanced Basic Training for village teachers. The participants who completed this requirement were then invited for an assessment of their coaching skills. Following that, a qualitative evaluation was conducted using in-depth interviews and Focus Group Discussion techniques. The evaluation focuses on the impact of the coaching pilot in helping village teachers develop their professionalism, as well as on the sustainability of the intervention. 
Results from the evaluation indicated that although their low level of education may limit their thinking skills, village teachers benefited from the coaching that they received. Moreover, the evaluation results also suggested that, with enough training and support, principals and supervisors in the villages were able to provide an adequate coaching service for the teachers. Beyond this small start, interest is growing, both within the pilot districts and beyond, due to word of mouth about the benefits that the Coaching Pilot has created. The districts where coaching was piloted have planned to continue the coaching program, since a number of early childhood teachers have requested to be coached, and a number of principals and supervisors have requested to be trained as coaches. Furthermore, the Association for Early Childhood Educators in Indonesia has started to adopt coaching into its programs. Although further research is needed, the Coaching Pilot suggests that coaching can positively impact early childhood teachers in villages, and that village principals and supervisors can become a promising source of future coaches. As such, coaching has significant potential to become a sustainable model for a continuous professional development program for early childhood teachers in villages.

Keywords: coaching, coaching pilot, early childhood teachers, principals and supervisors, village teachers

Procedia PDF Downloads 240
413 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine, m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. 
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Fortunately, container technology allows running several processes on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
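The embedding example above (pairing a1 with b1 on m1 and a2 with b2 on m2) can be sketched as a simple co-location assignment. The service and machine names are the illustrative ones from the abstract; this is a toy sketch, not the i2kit implementation.

```python
# Sketch of embedding: co-locate instance pairs of two communicating
# microservices on the same machine so each pair talks over localhost
# instead of going through a load balancer.

def embed(service_a, service_b, machines):
    """Pair the i-th instance of each service onto the i-th machine."""
    if not (len(service_a) == len(service_b) == len(machines)):
        raise ValueError("embedding requires matching instance/machine counts")
    return {m: (a, b) for m, a, b in zip(machines, service_a, service_b)}

deployment = embed(["a1", "a2"], ["b1", "b2"], ["m1", "m2"])
for machine, pair in deployment.items():
    print(machine, "runs", pair)   # each co-located pair uses the localhost interface
```

With containers, each pair would run as two isolated containers on the same machine, which is what makes this configuration practical despite dependency conflicts.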

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 203
412 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)

Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara

Abstract:

Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome these uncertainties and optimize treatment outcomes, the best possible description of radiation quality is of paramount importance, as it links the physical dose to the biological effects of radiation. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of the lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called “Mean Chord Length” (MCL) approximation and is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity, replacing the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC detector records the energy deposition in a region equivalent to 2 µm of tissue, while the LGADs are very suitable for particle tracking because their thickness can be reduced to tens of micrometers and they respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations. 
Currently, a dedicated readout is under development. This two-stage detector requires two different systems to join complementary information for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out with 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the activated strips are stored, again relying on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and the SoC solutions we are implementing for the readout.
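The quantity HDM refines can be sketched numerically: lineal energy is y = ε / ℓ, where the conventional ℓ is the mean chord length (for a convex site, Cauchy's formula gives MCL = 4V/S, i.e. 2d/3 for a sphere of diameter d), whereas HDM substitutes the track length measured by the LGAD tracker. The energy deposition and track length values below are illustrative, not measured data.

```python
# Sketch comparing lineal energy under the MCL approximation vs. an
# actual measured track length. Numbers are illustrative only.
import math

def mcl_sphere(diameter_um):
    """Mean chord length of a spherical site: 4V/S = 2d/3."""
    r = diameter_um / 2
    volume = (4 / 3) * math.pi * r ** 3
    surface = 4 * math.pi * r ** 2
    return 4 * volume / surface

def lineal_energy(energy_kev, path_um):
    return energy_kev / path_um   # keV/µm

site = 2.0                        # µm, tissue-equivalent site (as for the TEPC)
deposit = 10.0                    # keV, hypothetical energy deposition

y_mcl = lineal_energy(deposit, mcl_sphere(site))   # conventional estimate
y_track = lineal_energy(deposit, 1.8)              # hypothetical measured 1.8 µm track
print(round(y_mcl, 2), round(y_track, 2))
```

The difference between the two values for the same event is exactly the refinement the tracker provides: an event-by-event path length rather than a geometry-averaged one.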

Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry

Procedia PDF Downloads 175
411 Conservation Challenges of Fish and Fisheries in Lake Tana, Ethiopia

Authors: Shewit Kidane, Abebe Getahun, Wassie Anteneh, Admassu Demeke, Peter Goethals

Abstract:

We have reviewed the major findings of scientific studies on Lake Tana's fish resources and their threats. The aim was to provide summarized information for all concerned bodies and international readers to obtain a full and comprehensive picture of the lake's fish resources and conservation problems. The Lake Tana watershed comprises 28 fish species, of which 21 are endemic. Moreover, Lake Tana is among the top 250 lake regions of global importance for biodiversity, and it is a world-recognized wintering site for migratory birds. Lake Tana, together with its adjacent wetlands, directly and indirectly provides a livelihood for more than 500,000 people. However, owing to anthropogenic activities, the lake ecosystem as well as the fish and the attributes of the fisheries sector are severely degraded. Fish species in Lake Tana are suffering from illegal fishing, damming, habitat/breeding ground degradation, wastewater disposal, the introduction of exotic species, and a lack of implementation of fisheries regulations. Currently, more than 98% of fishers in Lake Tana use highly destructive monofilament nets. Moreover, dams, irrigation schemes, and hydropower plants are constructed in response to emerging development needs only. Mitigation techniques, such as the construction of fish ladders for migratory fishes, are largely neglected. In addition, water resource developers are likely unaware of both the importance of the fisheries and the impact of dam construction on fish. As a result, the biodiversity issue is often missed. Furthermore, the Lake Tana wetlands, which play a vital role in sustaining biodiversity, are not wisely utilised in the sense of the Ramsar Convention's definition. Wetlands are considered unhealthy, and hence wetland conversion for the purpose of recession agriculture is still seen as an advanced mode of development. 
As a result, many wetlands in the lake watershed are shrinking drastically over time, and Cyperus papyrus, one of the characteristic features of Lake Tana, has dramatically declined in its distribution, with some local extinctions. Furthermore, the recently introduced water hyacinth (Eichhornia crassipes) is creating immense problems for the lake ecosystem. Moreover, about 1.56 million tons of sediment are currently deposited into the lake each year, and wastes from industries and residents are directly discharged into the lake without treatment. Recently, signs of eutrophication have been revealed in Lake Tana and, most critically, the incidence of the cyanobacteria genus Microcystis was reported from the Bahir Dar Gulf of Lake Tana. The direct dependency of the communities on the lake water for drinking and washing, and on its fisheries, makes the problem worse. Indeed, since the lake is home to many endemic migratory fish, such unregulated developmental activities could be detrimental to their stocks. This is best illustrated by the drastic stock reduction (>75% in biomass) of the globally unique Labeobarbus species. Unless proper management is put in place, these anthropogenic impacts can jeopardize the aquatic ecosystem. Therefore, in order to sustainably use the aquatic resources and fulfil the needs of the local people, every developmental activity and resource utilization should be carried out in adherence to the available policies.

Keywords: anthropogenic impacts, dams, endemic fish, wetland degradation

Procedia PDF Downloads 252
410 Healthcare Professionals' Perspectives on Warfarin Therapy at Lao-Luxembourg Heart Centre, Mahosot Hospital, Lao PDR

Authors: Vanlounni Sibounheuang, Wanarat Anusornsangiam, Pattarin Kittiboonyakun, Chanthanom Manithip

Abstract:

Worldwide, warfarin is one of the most commonly used oral anticoagulants. Its margin between therapeutic inhibition of clot formation and bleeding complications is narrow. At Mahosot Hospital, a warfarin clinic had not yet been established. A descriptive study investigating drug-related problems among outpatients using warfarin found that values of the international normalized ratio (INR) above the normal range were frequently identified (25.40% of 272 outpatients) at the Lao-Luxembourg Heart Centre, Mahosot Hospital, Lao PDR. This result led to the present study, which conducted qualitative interviews in order to help establish a warfarin clinic at Mahosot Hospital for better outcomes in patients using warfarin. The purpose of this study was to explore the perspectives of healthcare professionals providing services for outpatients using warfarin. Face-to-face, in-depth interviews were undertaken with nine healthcare professionals (doctors = 3, nurses = 3, pharmacists = 3) working at the outpatient clinic, Lao-Luxembourg Heart Centre, Mahosot Hospital, Lao PDR. The interview guides were developed and validated by experts in the field of qualitative research. Each interview lasted approximately 20 minutes. Three major themes emerged: healthcare professionals' experiences of current practice problems with warfarin therapy, healthcare professionals' views of medical problems related to patients using warfarin, and healthcare professionals' perspectives on ways to improve the service. All healthcare professionals shared the view that it is difficult to achieve the INR goal for individual patients because of important patient barriers, especially a lack of knowledge about how to use warfarin properly and safely, and irregular follow-up due to problems with transportation and financial support. 
Doctors and nurses agreed to have a pharmacist run a routine warfarin clinic and provide counselling to individual patients on the following points: how to take the drug properly and safely, drug-drug and food-drug interactions, common side effects and how to manage them, and lifestyle modifications. From the interviews, important components for the establishment of a warfarin clinic included financial support, increased human resources, an improved system for keeping patients' medical records, and short-course training for pharmacists. This study indicated the acceptance by healthcare professionals of the important roles of pharmacists and the feasibility of setting up a warfarin clinic by working together with the multidisciplinary healthcare team in order to help improve the health outcomes of patients using warfarin at Mahosot Hospital, Lao PDR.

Keywords: perspectives, healthcare professional, warfarin therapy, Mahosot Hospital

Procedia PDF Downloads 100
409 Feedback from a Service Evaluation of a Modified Intrauterine Device Inserter: A First Step towards a Change in the Standard IUD Insertion Procedure

Authors: Desjardin, Michaels, Martinez, Ulmann

Abstract:

The copper IUD is one of the most efficient and cost-effective forms of contraception. However, pain at insertion hampers the use of this method. This is especially unfortunate in nulliparous women, often younger, who are excellent candidates for this contraception, including emergency contraception. The standard insertion procedure for a copper IUD usually involves measurement of the uterine cavity with a hysterometer and the use of a tenaculum in order to facilitate device insertion. Both procedures cause patient pain, which often constitutes a limitation of the method. To overcome these issues, we have developed a modified inserter combined with a copper IUD. The singular design of the inserter includes a flexible inflatable membrane technology allowing easy access to the uterine cavity even in cases of abnormal uterine position or a narrow cervical canal. Moreover, this inserter makes direct IUD insertion possible, with no hysterometry and no need for a tenaculum. To assess device effectiveness and patient-reported pain, a study was conducted at two clinics in France with 31 individuals who wanted to use a copper IUD as their contraceptive method. IUD insertions were performed by four healthcare providers. Operators completed a questionnaire evaluating the effectiveness of the procedure (including correct fundal placement of the IUD and other usability questions) as well as their satisfaction. Patients also completed a questionnaire, and pain during the procedure was measured on a 10-cm Visual Analogue Scale (VAS). Analysis of the questionnaires indicates that correct IUD placement took place in more than 93% of women, which is a standard efficacy rate. It also demonstrates that IUD insertion resulted in no, light, or moderate pain, predominantly in nulliparous women. No insertion resulted in severe pain (none above 6 cm on the 10-cm VAS). This translated into a high level of satisfaction from both patients and practitioners. 
In addition, this modified inserter allowed a simplification of the insertion procedure: correct fundal placement was ensured with no need for hysterometry prior to insertion (100%) nor for a cervical tenaculum to pull on the cervix (90%). Avoidance of both procedures contributed to the decrease in pain during insertion. Taken together, the results of the study demonstrate that this device constitutes a significant advance in the use of copper IUDs for any woman. It allows a simplification of the insertion procedure: there is no need for pre-insertion hysterometry and no need for traction on the cervix with a tenaculum. Increased comfort during insertion should allow wider use of the method in nulliparous women and for emergency contraception. In addition, pain is often underestimated by practitioners, but fear of pain is clearly one of the blocking factors, as indicated by the analysis of the questionnaires. This evaluation provides interesting information on the use of this modified inserter for standard copper IUDs and promising perspectives for a change in the standard IUD insertion procedure.

Keywords: contraception, IUD, innovation, pain

Procedia PDF Downloads 84
408 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. 
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both the dynamic and the measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes), and the thermodynamic (Gibbs free energy) variations of the adsorption sites.
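The constant-parameter baseline that FLS-PVLR generalises can be sketched directly: the Langmuir model q(P) = q_m·K·P / (1 + K·P) linearises as P/q = P/q_m + 1/(q_m·K), so ordinary least squares on (P, P/q) recovers q_m and K. The isotherm data below are synthetic, generated from chosen parameters; this is an illustration of the standard model, not of the pressure-varying method itself.

```python
# Constant-parameter Langmuir fit via the linearised form
# P/q = P/q_m + 1/(q_m*K). Synthetic data, illustrative units.

def langmuir(p, q_m, k):
    """Langmuir uptake at pressure p."""
    return q_m * k * p / (1 + k * p)

def fit_langmuir(pressures, uptakes):
    """Ordinary least squares on y = P/q: slope = 1/q_m, intercept = 1/(q_m*K)."""
    xs = pressures
    ys = [p / q for p, q in zip(pressures, uptakes)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    q_m = 1 / slope
    k = slope / intercept        # K = 1 / (q_m * intercept)
    return q_m, k

# Synthetic isotherm generated with q_m = 2.0 mmol/g, K = 0.5 1/bar
pressures = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
uptakes = [langmuir(p, 2.0, 0.5) for p in pressures]
q_m, k = fit_langmuir(pressures, uptakes)
print(round(q_m, 3), round(k, 3))
```

In the pressure-varying extension described above, q_m and K would no longer be single fitted constants but smooth functions of pressure, regularised by the dynamic discrepancy terms.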

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 233
407 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans

Authors: Rene Hellmuth

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity), lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for reacting quickly to changes. Use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect, and given the industrial requirements, three data quality factors regarding the BIM model are particularly important for this paper: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied within a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for carrying out cost-effective and time-saving updating processes. The availability of low-cost hardware and the simplicity of the process are important for enabling service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and its insertion into the overall digital twin. 
Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.
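The change-detection step described above can be illustrated with a minimal sketch, not drawn from the paper itself: one simple way to flag changed regions between a reference laser scan and a new scan is a nearest-neighbour distance test. The threshold, point counts, and data here are purely hypothetical.

```python
# Hypothetical sketch of laser-scan change detection (not the authors'
# pipeline): points in the new scan with no nearby counterpart in the
# reference scan are flagged as changed building areas.
import numpy as np
from scipy.spatial import cKDTree

def changed_points(reference: np.ndarray, new_scan: np.ndarray,
                   threshold: float = 0.05) -> np.ndarray:
    """Boolean mask over new_scan: True where the nearest reference
    point is farther away than `threshold` metres."""
    tree = cKDTree(reference)
    dist, _ = tree.query(new_scan, k=1)
    return dist > threshold

# Toy example: identical scans except one point displaced by 1 m.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 10, size=(1000, 3))
new = ref.copy()
new[0] += np.array([0.0, 0.0, 1.0])   # simulate a structural change
mask = changed_points(ref, new)
```

Only the flagged region would then need re-scanning and insertion into the digital twin, which matches the partial-update idea in the abstract.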

Keywords: building information modeling, digital factory model, factory planning, restructuring

Procedia PDF Downloads 114
406 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross over classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, in particular Denis Gabor’s invention: holography. However, the major difficulty here is the lack of a suitable recording medium, so some enhancements were essential, and the 2D version of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important way to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the use of the equivalent circuits method offers the most scalable route to developing an integral method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of generalised equivalent circuits was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity plane paradigm while taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with generalised equivalent circuits (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that the variation of graphene’s complex conductivity allows controlling the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we determine that phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
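The chemical-potential tuning knob the abstract relies on can be sketched with the standard intraband (Drude) model of graphene surface conductivity, valid when the thermal energy is small compared with the chemical potential. This is a generic textbook formula, not the authors' MoM-GEC code; the frequency, relaxation time, and potential values are illustrative.

```python
# Illustrative sketch: intraband Drude surface conductivity of graphene
# at THz frequencies, showing how the chemical potential mu_c tunes the
# complex conductivity (and hence the element's reflection phase).
import numpy as np

E_CHARGE = 1.602e-19   # electron charge, C
HBAR = 1.055e-34       # reduced Planck constant, J s

def graphene_sigma(freq_hz: float, mu_c_ev: float,
                   tau_s: float = 1e-12) -> complex:
    """Intraband surface conductivity in siemens, for k_B*T << mu_c."""
    omega = 2 * np.pi * freq_hz
    mu_c = mu_c_ev * E_CHARGE
    return (E_CHARGE**2 * mu_c / (np.pi * HBAR**2)) * 1j / (omega + 1j / tau_s)

f = 1e12  # 1 THz
for mu in (0.1, 0.3, 0.5):
    s = graphene_sigma(f, mu)
    print(f"mu_c = {mu} eV: sigma = {s.real:.2e} + {s.imag:.2e}j S")
```

Raising the chemical potential scales the conductivity magnitude, which is the lever the array uses for per-element phase control.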

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 176
405 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas

Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee

Abstract:

U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing and Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced on-shore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support an integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS).
Overall results will prioritize ecological areas and coastal lab facilities at risk based on the number of projected nuisance floods and will inform coastal management of flood risk around low-lying areas subject to bank erosion.
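The SOM classification step can be sketched in a few lines of plain NumPy. The data below are synthetic stand-ins for detrended cross-shelf SSH profiles (two invented regimes, "setup" and "setdown"); the map size, schedule, and station count are assumptions, not the NOS configuration.

```python
# Minimal self-organizing map sketch on synthetic detrended SSH profiles;
# the regimes and dimensions are illustrative, not the study's dataset.
import numpy as np

rng = np.random.default_rng(42)
# 200 synthetic days x 10 cross-shelf stations, in two SSH regimes.
setup = 0.15 + 0.02 * rng.standard_normal((100, 10))    # elevated SSH
setdown = -0.15 + 0.02 * rng.standard_normal((100, 10))  # depressed SSH
data = np.vstack([setup, setdown])

# Train a 1x4 SOM with a shrinking learning rate and neighbourhood.
n_nodes, n_iter = 4, 2000
weights = 0.01 * rng.standard_normal((n_nodes, 10))
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / n_iter)
    radius = max(1.0, 2.0 * (1 - t / n_iter))
    for j in range(n_nodes):
        h = np.exp(-((j - bmu) ** 2) / (2 * radius ** 2))  # neighbourhood
        weights[j] += lr * h * (x - weights[j])

# After training, node means should separate the two SSH regimes.
node_means = weights.mean(axis=1)
```

In the study's setting, each trained node would correspond to a recurring atmospheric/oceanic pattern, and node transitions would be screened for association with elevated coastal sea levels.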

Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood

Procedia PDF Downloads 143
404 Sustainable Development and Modern Challenges of Higher Educational Institutions in the Regions of Georgia

Authors: Natia Tsiklashvili, Tamari Poladashvili

Abstract:

Education is one of the fundamental factors of economic prosperity in all respects. It is impossible to talk about the sustainable economic development of the country without substantial investment in human capital and in higher educational institutions. Education improves the standard of living of the population and expands opportunities to receive more benefits, which is equally important for both the individual and society as a whole. Initiatives such as entrepreneurship and technological development are growing among educated people. At the same time, the distribution of income between population groups improves. The given paper discusses the scientific literature in the field of sustainable development through higher educational institutions. Scholars of economic theory emphasize a few major aspects that show the role of higher education in economic growth: a) alongside education, human capital gradually increases, which leads to increased competitiveness of the labor force, not only in the national but also in the international labor market (neoclassical growth theory); b) a high level of education can increase the efficiency of the economy, and investment in human capital, innovation, and knowledge are significant contributors to economic growth; hence this view focuses on the positive externalities and spillover effects of a knowledge-based economy, which lead to economic development (endogenous growth theory); c) education can facilitate the diffusion and transfer of knowledge, and hence it supports macroeconomic sustainability and the microeconomic conditions of individuals. While discussing the economic importance of education, we consider education as the spiritual development of the person, advancing general skills, providing a profession, and improving living conditions. Scholars agree that human capital is not only money but also liquid assets, stocks, and competitive knowledge.
The last one is the main lever in the context of increasing human competitiveness and high productivity. To address the local issues, the present article researched ten educational institutions across Georgia, including state and private HEIs. Qualitative research was done by analyzing in-depth interviews with representatives from each institution; respondents were rectors, vice-rectors, or heads of the quality assurance service at the institution. The results show that there are a number of challenges that institutions face in order to maintain sustainable development and be the strong link between education and the labor market. Mostly these are connected with bureaucracy, the insufficient funding institutions receive, and local challenges that differ across the regions.

Keywords: higher education, higher educational institutions, sustainable development, regions, Georgia

Procedia PDF Downloads 85
403 Framework to Organize Community-Led Project-Based Learning at a Massive Scale of 900 Indian Villages

Authors: Ayesha Selwyn, Annapoorni Chandrashekar, Kumar Ashwarya, Nishant Baghel

Abstract:

Project-based learning (PBL) activities are typically implemented in technology-enabled schools by highly trained teachers. In rural India, students have limited access to technology and quality education. Implementing typical PBL activities is challenging. This study details how Pratham Education Foundation’s Hybrid Learning model was used to implement two PBL activities related to music in 900 remote Indian villages with 46,000 students aged 10-14. The activities were completed by 69% of groups that submitted a total of 15,000 videos (completed projects). Pratham’s H-Learning model reaches 100,000 students aged 3-14 in 900 Indian villages. The community-driven model engages students in 20,000 self-organized groups outside of school. The students are guided by 6,000 youth volunteers and 100 facilitators. The students partake in learning activities across subjects with the support of community stakeholders and offline digital content on shared Android tablets. A training and implementation toolkit for PBL activities is designed by subject experts. This toolkit is essential in ensuring efficient implementation of activities as facilitators aren’t highly skilled and have limited access to training resources. The toolkit details the activity at three levels of student engagement - enrollment, participation, and completion. The subject experts train project leaders and facilitators who train youth volunteers. Volunteers need to be trained on how to execute the activity and guide students. The training is focused on building the volunteers’ capacity to enable students to solve problems, rather than developing the volunteers’ subject-related knowledge. This structure ensures that continuous intervention of subject matter experts isn’t required, and the onus of judging creativity skills is put on community members. 46,000 students in the H-Learning program were engaged in two PBL activities related to Music from April-June 2019. 
For one activity, students had to conduct a “musical survey” in their village by designing a survey and shooting and editing a video. This activity aimed to develop students’ information retrieval, data gathering, teamwork, communication, project management, and creativity skills. It also aimed to identify talent and document local folk music. The second activity, “Pratham Idol”, was a singing competition. Students participated in performing, producing, and editing videos. This activity aimed to develop students’ teamwork and creative skills and give students a creative outlet. Students showcased their completed projects at village fairs wherein a panel of community members evaluated the videos. The shortlisted videos from all villages were further evaluated by experts who identified students and adults to participate in advanced music workshops. The H-Learning framework enables students in low resource settings to engage in PBL and develop relevant skills by leveraging community support and using video creation as a tool. In rural India, students do not have access to high-quality education or infrastructure. Therefore designing activities that can be implemented by community members after limited training is essential. The subject experts have minimal intervention once the activity is initiated, which significantly reduces the cost of implementation and allows the activity to be implemented at a massive scale.

Keywords: community supported learning, project-based learning, self-organized learning, education technology

Procedia PDF Downloads 186
402 Retrospective Demographic Analysis of Patients Lost to Follow-Up from Antiretroviral Therapy in Mulanje Mission Hospital, Malawi

Authors: Silas Webb, Joseph Hartland

Abstract:

Background: Long-term retention of patients on ART has become a major health challenge in Sub-Saharan Africa (SSA). In 2010, a systematic review of 39 papers found that 30% of patients were no longer taking their ARTs two years after starting treatment. In the same review, it was noted that there was a paucity of data as to why patients become lost to follow-up (LTFU) in SSA. This project was performed at Mulanje Mission Hospital in Malawi as part of Swindon Academy's Global Health eSSC. The HIV prevalence for Malawi is 10.3%, one of the highest rates in the world; however, prevalence soars to 18% in the Mulanje district. It is therefore essential that patients at risk of being LTFU are identified early and managed appropriately to help them continue to participate in the service. Methodology: All patients on adult antiretroviral formulations at MMH who were classified as 'defaulters' (patients missing a scheduled follow-up visit by more than two months) over the last 12 months were included in the study. Demographic variables were collected from Mastercards for data analysis. A comparison group of patients not lost to follow-up was created from all patients who attended the HIV clinic between 18th-22nd July 2016 and had never defaulted from ART. Data were analysed using the chi-squared (χ²) test, as the data collected were categorical, with alpha levels set at 0.05. Results: Overall, 136 patients had defaulted from ART over the past 12 months at MMH. Of these, 43 patients had missing Mastercards, so 93 defaulter datasets were analysed. In the comparison group, 93 datasets were also analysed, and statistical analysis was done using chi-squared testing. A higher proportion of men was noted in the defaulting group (χ²=0.034), and defaulters tended to be younger (χ²=0.052). 94.6% of patients who defaulted were taking Tenofovir, Lamivudine, and Efavirenz, the standard first-line ART therapy in Malawi.
The mean length of time on ART was 39.0 months (RR: -22.4-100.4) in the defaulters group and 47.3 months (RR: -19.71-114.23) in the control group, a mean difference of 8.3 fewer months in the defaulters group (χ²=0.056). Discussion: The findings in this study echo the literature; however, this review expands on that and shows that the demographic of the patient at most risk of defaulting and being LTFU is a young male who has missed more than 4 doses of ART and is within his first year of treatment. For the hospital, this data is important, as it identifies significant areas for public health focus. For instance, fear of disclosure and stigma may be disproportionately affecting younger men, so interventions can be aimed specifically at them to improve their health outcomes. The mean length of time on medication was 8.3 months less in the defaulters group, with a p-value of 0.056, emphasising the need for more intensive follow-up in the early stages of treatment, when patients are at the highest risk of defaulting.
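The chi-squared comparison of defaulters against the control group on a categorical variable can be reproduced in a few lines. The counts below are hypothetical, since the abstract does not report the raw contingency tables; only the group sizes (93 and 93) are taken from the study.

```python
# Worked example of a chi-squared test on a 2x2 contingency table with
# hypothetical counts (the paper's raw tables are not given in the abstract).
from scipy.stats import chi2_contingency

# rows: defaulted / never defaulted; columns: male / female
table = [[52, 41],
         [35, 58]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

`chi2_contingency` applies Yates' continuity correction by default for 2x2 tables; a p-value below the stated alpha of 0.05 would indicate a significant sex difference between groups.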

Keywords: anti-retroviral therapy, ART, HIV, lost to follow up, Malawi

Procedia PDF Downloads 186
401 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach

Authors: Dominik Kowitzke

Abstract:

Housing construction has direct and indirect environmental impacts, especially through soil sealing and the gray energy consumption related to the use of construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, targeting the phenomenon of so-called over housing of empty nest households seems worthwhile to investigate for its potential to free living space and thus reduce the need for new housing construction and related environmental harm. Over housing occurs if no space adjustment takes place in household lifecycle stages when children move out from home and the space formerly created for the offspring is from then on under-utilized. Although in some cases the housing space consumption might actually meet households' equilibrium preferences, frequently space-wise adjustments to the living situation do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed for the long term in a way that would allow freed-up space to be rented out. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households which never had children.
The approach used to estimate the average difference in living space is a linear regression model regressing the response variable, living space, on a two-level categorical variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that move after children leave home, despite the market frictions impairing relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty nest households consuming their equilibrium demand for housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalence units, related to the average construction of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing in public data, and further estimates the external effect of over housing in environmental terms.
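The regression just described can be sketched on synthetic data. Everything below is invented for illustration: the group dummy, the income control, and the coefficient sizes are assumptions, not the study's survey data or results.

```python
# Hedged sketch of the living-space regression on synthetic data:
# living space regressed on an empty-nest dummy plus one control.
import numpy as np

rng = np.random.default_rng(1)
n = 500
empty_nest = rng.integers(0, 2, n)      # 1 = empty nest household (dummy)
income = rng.normal(3000, 800, n)       # hypothetical control variable
# Assumed true model: empty nests occupy ~25 m² more, plus noise.
space = 70 + 25 * empty_nest + 0.01 * income + rng.normal(0, 8, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), empty_nest, income])
beta, *_ = np.linalg.lstsq(X, space, rcond=None)
print(f"estimated extra space per empty nest: {beta[1]:.1f} m²")
```

The coefficient on the dummy is the estimated average under-utilized space per empty nest household, which the study then extrapolates to the population.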

Keywords: empty nests, environment, Germany, households, over housing

Procedia PDF Downloads 171
400 The Potential Fresh Water Resources of Georgia and Sustainable Water Management

Authors: Nana Bolashvili, Vakhtang Geladze, Tamazi Karalashvili, Nino Machavariani, George Geladze, Davit Kartvelishvili, Ana Karalashvili

Abstract:

Fresh water is the major natural resource of Georgia. The average perennial sum of the rivers' runoff in Georgia is 52.77 km³, of which 9.30 km³ inflows from abroad. The major volume of transit river runoff is ascribed to the Chorokhi river. Average perennial runoff is 41.52 km³ in Western Georgia and 11.25 km³ in Eastern Georgia. The indices of Eastern and Western Georgia were calculated with 50% and 90% river runoff respectively, while the same index calculation for other countries is based on a 50% river runoff. Out of the total volume of resources, 133.2 m³/sec (4.21 km³) has been geologically prospected by the State Commission on Reserves and acknowledged as reserves available for exploitation, 48% (2.02 km³) of which is in Western Georgia and 2.19 km³ in Eastern Georgia. Considering acknowledged water reserves of all categories, per capita water resources amount to 2.2 m³/day, of which the high industrial category of fresh drinking water accounts for 0.88 m³/day. According to accepted norms, the possibility of using underground water reserves is 2.5 times higher than the long-term requirements of the country. The volume of abundant fresh-water reserves in Georgia is about 150 m³/sec (4.74 km³). Water in Georgia is consumed mostly in agriculture for irrigation purposes: 66.4% across Georgia as a whole, 72.4% in Eastern Georgia, and 38% in Western Georgia. According to the long-term forecast, provision of the population and the territory with water resources in Eastern Georgia will be quite normal. A somewhat different situation exists in the lower reaches of the Khrami and Iori rivers, which could be easily overcome with corresponding financing. The present-day irrigation systems in Georgia do not meet modern technical requirements. The overall efficiency of most of them varies between 0.4 and 0.6. The situation is similar in fresh water and public service water consumption.
Reorganization of the mentioned systems, installation of water meters, and the introduction of new irrigation methods without water loss will substantially increase the efficiency of water use. Besides, new irrigation norms developed from agro-climatic, geographical, and hydrological perspectives will significantly reduce water waste. Taking all this into account, we estimate that 6.0 km³ of water is necessary for irrigating agricultural lands in Georgia, 5.5 km³ of which goes to irrigated arable areas in Eastern Georgia. Increasing the water supply of the Eastern Georgian territory and its population is possible by means of new water reservoirs, as the runoff of every river considerably exceeds the consumption volume. In conclusion, the fresh water resources with which Georgia is so rich could be a significant source for barter exchange and investment attraction. A certain volume of fresh water can be exported from Western Georgia quite trouble-free, without bringing any damage to the population or hydroecosystems. The precise volume of exported water per region and time, and the method and place of water consumption, should be defined after the estimation of the different hydroecosystems and detailed analyses of the water balance of the corresponding territories.
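The runoff figures quoted in the abstract are internally consistent, which a quick arithmetic check confirms (all values in km³/year, taken directly from the text above):

```python
# Consistency check of the runoff balance quoted in the abstract.
west, east = 41.52, 11.25     # Western / Eastern Georgia perennial runoff
inflow = 9.30                 # transit runoff inflowing from abroad
total = west + east           # should equal the stated 52.77 km³
domestic = total - inflow     # runoff formed within Georgia itself
print(f"total = {total:.2f} km³, formed domestically = {domestic:.2f} km³")
```

So roughly 43.5 km³ of the annual runoff is formed within the country, against an estimated irrigation need of 6.0 km³.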

Keywords: GIS, management, rivers, water resources

Procedia PDF Downloads 369
399 Multi-Dimensional (Quantitative and Qualitative) Longitudinal Research Methods for Biomedical Research of Post-COVID-19 (“Long Covid”) Symptoms

Authors: Steven G. Sclan

Abstract:

Background: Since December 2019, the world has been afflicted by the spread of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which is responsible for the condition referred to as Covid-19. The illness has had a cataclysmic impact on the political, social, economic, and overall well-being of the population of the entire globe. While Covid-19 has had a substantial universal fatality impact, it may have an even greater effect on the socioeconomic and medical well-being of, and healthcare planning for, the surviving population. Significance: Many more persons survive the infection than die from it, and many of those patients have noted ongoing, persistent symptoms after successfully enduring the acute phase of the illness. Recognition and understanding of these symptoms are crucial for developing and arranging efficacious models of care for all patients surviving acute Covid illness and plagued by post-acute symptoms, whether or not they were hospitalized. Furthermore, regarding Covid infection in children (< 18 y/o), although Covid “+” children may not be major vectors of infective transmission, it now appears that many more children than initially thought carry the virus without obvious accompanying symptoms. It seems reasonable to wonder whether viral effects occur in children who are Covid “+” and now asymptomatic, and whether, over time, they might also experience similar symptoms. An even more significant question is whether Covid “+” asymptomatic children might manifest increased multiple health problems as they grow, i.e., developmental complications (e.g., physical/medical, metabolic, neurobehavioral, etc.), in comparison to children who had been consistently Covid “ - ” during the pandemic. Topics Addressed and Theoretical Importance: This review is important because it describes both quantitative and qualitative methods for clinical and biomedical research.
Topics reviewed will consider the importance of well-designed, comprehensive (i.e., combining quantitative and qualitative methods) longitudinal studies of post-Covid-19 symptoms in both adults and children. Also reviewed will be the general characteristics of longitudinal studies, together with a model for a proposed study. Discussed as well will be the benefit of longitudinal studies for the development of efficacious interventions and for the establishment of cogent, practical, and efficacious community healthcare service planning for post-acute Covid patients. Conclusion: The results of multi-dimensional, longitudinal studies will have important theoretical implications. These studies will help to improve our understanding of the pathophysiology of long Covid and will aid in the identification of potential targets for treatment. Such studies can also provide valuable insights into the long-term impact of Covid-19 on public health and socioeconomics.

Keywords: COVID-19, post-COVID-19, long COVID, longitudinal research, quantitative research, qualitative research

Procedia PDF Downloads 59
398 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations

Authors: Till Gramberg

Abstract:

In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. 
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.

Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering

Procedia PDF Downloads 82
397 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. These chains face numerous challenges in terms of product wastage, low margins, a long time to break even, and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modeling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted among managers having relevant experience in e-grocery supply chains. The experts were contacted through professional and social networks using a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using the open and axial coding method. The key enablers are categorized into themes, and the contextual relationships between these and the performance measures are sought from industry veterans. Using ISM, a hierarchical model of the enablers is developed, and MICMAC analysis identifies their driving and dependence powers. Based on driving-dependence power, the enablers are categorized into four clusters, namely independent, autonomous, dependent, and linkage. The analysis found that information technology (IT) and manpower training act as key enablers for reducing lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, depicting these as critical in online food and grocery supply chains. Considering the perishable nature of the product being handled, the impact of the enablers on product quality is also identified.
Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers in fresh food for e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in a developing-economies context, where supply chains are evolving.
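The MICMAC step the abstract describes, computing driving and dependence powers and sorting enablers into the four clusters, can be sketched as follows. The reachability matrix and enabler names here are hypothetical stand-ins, not the paper's actual data.

```python
# Illustrative MICMAC step on a small hypothetical final reachability
# matrix: row sums give driving power, column sums give dependence power.
import numpy as np

enablers = ["IT", "training", "branding", "order processing"]
# R[i, j] = 1 if enabler i influences enabler j (including itself).
R = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])

driving = R.sum(axis=1)      # how many enablers each one drives
dependence = R.sum(axis=0)   # how many enablers drive it

mid = len(enablers) / 2      # simple midpoint split into quadrants
for name, d, p in zip(enablers, driving, dependence):
    if d > mid and p <= mid:
        cluster = "independent (driver)"
    elif d > mid and p > mid:
        cluster = "linkage"
    elif d <= mid and p > mid:
        cluster = "dependent"
    else:
        cluster = "autonomous"
    print(f"{name}: driving={d}, dependence={p} -> {cluster}")
```

In this toy matrix, IT comes out as a driver and order processing as dependent, mirroring the kind of driver-dependence reading the study reports for its enablers.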

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 204