Search results for: technology complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9120

5880 Digitalisation of Onboarding: A Case Study to Investigate the Impact of Virtual Reality Technology on Employees Social Interactions and Information Seeking During Job-Onboarding

Authors: Ewenam Gbormittah

Abstract:

Because of the effects of the pandemic, companies are rethinking future work arrangements for their employees, including the adoption of remote and hybrid working models. It is important that employers provide those working remotely or in hybrid mode with a rewarding onboarding experience and opportunities for interaction. Although Information and Communication Technologies (ICT) have transformed the ways organisations manage employees over the years, there is still a need for a platform through which organisations can adjust their onboarding to suit the social and interactive needs of their employees and facilitate successful integration. This study explored this matter by investigating whether Virtual Reality (VR) technology contributes to new employees' integration into the organisation during their job-onboarding (JOB) process. The research questions are as follows: (1) To what extent does VR have an impact on employees' successful integration into the organisation? (2) How does VR support elements of new employees' Psychological Contract (PC) during these interactions? An exploratory case study approach, consisting of semi-structured interviews, was applied to 20 employees drawn from two case organisations. The data were analysed case by case, followed by a cross-case comparison. The analysis generated eight themes with more than seven sub-themes for CS1, and seven themes with more than seven sub-themes for CS2. The cross-case analysis revealed that VR does have the potential to support employees' integration into the organisation; however, the effect was stronger for employees in CS2 than for those in CS1. The results have practical implications for onboarding psychology and strategic talent solutions within recruitment. In particular, the research outlines insights on how to manage the PC of employees from the recruitment stage onwards in order to create successful employment relationships.

Keywords: job-onboarding, psychological contract, virtual reality, case study one, case study two

Procedia PDF Downloads 60
5879 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network

Authors: T. Lydon, A. McNabola, P. Coughlan

Abstract:

Energy consumption in the water distribution network (WDN) is a well-established problem: the industry contributes heavily to carbon emissions, with 0.9 kg of CO2 emitted per m3 of water supplied. It is estimated that 85% of the energy wasted in the WDN can be recovered by installing turbines. The potential in existing networks lies at small-capacity sites (5-10 kW) that are numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, so alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines and could therefore contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, the difficulty of PAT selection due to a lack of performance data, and a limited understanding of how PATs can cater for flow fluctuations as extreme as +/- 50% of the average daily flow, which are characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Preliminary testing measured the efficiency and power potential of the PAT under varying flow and pressure conditions, in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities. The results are presented here:
• The limitations of existing selection methods, which convert the best efficiency point (BEP) from pump operation to BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period.
• A clear relationship between flow and the efficiency of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of BEP is significant, and more extreme for deviations in flow above the BEP than below it, but not dissimilar to the efficiency behaviour of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems operating under pressure regulation, which have been conceptualised in the current literature, still need to be established.
Initial results direct the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.

Keywords: energy recovery, pump-as-turbine, water distribution network

Procedia PDF Downloads 256
5878 Analysis of the Learning Effectiveness of the Steam-6e Course: A Case Study on the Development of Virtual Idol Product Design as an Example

Authors: Mei-Chun Chang

Abstract:

STEAM (Science, Technology, Engineering, Art, and Mathematics) represents a cross-disciplinary and learner-centered teaching model that trains students to link theory with real situations, thereby improving a range of their abilities. This study explores students' learning performance after the 6E model was used in STEAM teaching for a professional course in the digital media design department of a technical college, as well as the difficulties faced in STEAM curriculum design and implementation and the countermeasures adopted. Through industry experts’ work experience, exchange activities, course teaching, and hands-on experience, learners were guided to think about the design and development value of virtual idol products that meet the needs of users and to employ AR/VR technology to innovate their product applications. Applying action research, the investigation took 35 junior students from the department of digital media design of the school where the researcher teaches as its subjects. The teaching research was conducted over two stages spanning ten weeks and 30 sessions. Data were collected and subjected to quantitative and qualitative analyses through a ‘design draft sheet’, ‘student interview records’, the ‘STEAM Product Semantic Scale’, and the ‘Creative Product Semantic Scale (CPSS)’. Research conclusions are presented, and relevant suggestions are proposed as a reference for teachers and follow-up researchers. The contribution of this study is to teach college students to develop original virtual idols and product designs, to improve learning effectiveness through STEAM teaching activities, and to effectively cultivate innovative and practical cross-disciplinary design talent.

Keywords: STEAM, 6E model, virtual idol, learning effectiveness, practical courses

Procedia PDF Downloads 124
5877 Examining Postcolonial Corporate Power Structures through the Lens of Development Induced Projects in Africa

Authors: Omogboyega Abe

Abstract:

This paper examines the relationships between the socio-economic inequalities of power, race, and wealth engendered by corporate structures and domination in postcolonial Africa. It further considers how land, as an epitome of property and power for the locals, paved the way for capitalist accumulation and control in the hands of transnational corporations. European colonization of Africa was contingent on settler colonialism, in which properties, including land, were re-modified as extractive resources for primitive accumulation. In developing Africa's extractive resources, transnational corporations (TNCs) usurped states' structures and domination over native land. The usurpation and corporate capture that persist to date have led to remonstrations and, arguably, a counter-productive approach to development projects; in some communities, the mere mention of extractive companies triggers resentment. The paradigm of state capture and state autonomy is simply inadequate either to describe or to resolve the play of forces and actors responsible for severe corporate-induced human rights violations in emerging markets. Moreover, even if deadly working conditions are conceived of as a regulatory failure, it is difficult to tell whose failure they represent. This paper argues that the complexity and ambiguity evidenced by the multiple regimes and the political and economic forces shaping the production, consumption, and distribution of socio-economic variables are not exceptional to emerging markets; instead, the varied experience of developing countries provides a window onto what we face in understanding and theorizing the structure and operation of the global economic and regulatory order in general.

Keywords: colonial, emerging markets, business, human rights, corporation

Procedia PDF Downloads 62
5876 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities respectively, are a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad coverage difficult and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project approached the problem by developing a hybrid model that combines several deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into a multilayer perceptron (MLP) model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values for the model to be identified. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved a training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
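The study's full pipeline (CNN feature extractors feeding an MLP) is not reproduced here, but the Grid Search CV tuning step the abstract describes can be sketched with scikit-learn. The dataset, features, and parameter grid below are placeholders for illustration, not the study's actual inputs:

```python
# Hypothetical sketch of hyperparameter tuning an MLP with GridSearchCV.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in features: in the study these would be CNN outputs plus other inputs.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

param_grid = {                      # illustrative grid, not the study's values
    "hidden_layer_sizes": [(32,), (64, 32)],
    "alpha": [1e-4, 1e-3],
}
search = GridSearchCV(MLPClassifier(max_iter=2000, random_state=0),
                      param_grid, cv=3)
search.fit(X_tr, y_tr)             # cross-validated search over the grid
print(search.best_params_, round(search.score(X_te, y_te), 3))
```

`best_params_` then holds the combination with the best cross-validated score, and the refitted estimator is evaluated on the held-out split.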

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 106
5875 Unraveling the Complexity of Postpartum Distress: Examining the Influence of Alexithymia, Social Support, Partners' Support, and Birth Satisfaction on Postpartum Distress among Bulgarian Mothers

Authors: Stela Doncheva

Abstract:

Postpartum distress, encompassing depressive symptoms, obsessions, and anxiety, remains a subject of significant scientific interest due to its prevalence among individuals giving birth. This critical and transformative period presents a multitude of factors that impact women's health. On the one hand, variables such as social support, satisfaction in romantic relationships, shared newborn care, and birth satisfaction directly affect the mental well-being of new mothers. On the other hand, the interplay of hormonal changes, personality characteristics, emotional difficulties, and the profound life adjustments experienced by mothers can profoundly influence their self-esteem and overall physical and emotional well-being. This paper extensively explores the factors of alexithymia, social support, partners' support, and birth satisfaction to gain deeper insights into their impact on postpartum distress. Utilizing a qualitative survey consisting of six self-reflective questionnaires, this study collects valuable data regarding the individual postpartum experiences of Bulgarian mothers. The primary objective is to enrich our understanding of the complex factors involved in the development of postpartum distress during this crucial period. The results shed light on the intricate nature of the problem and highlight the significant influence of bio-psycho-social elements. By contributing to the existing knowledge in the field, this research provides valuable implications for the development of interventions and support systems tailored to the unique needs of mothers in the postpartum period. Ultimately, this study aims to improve the overall well-being of new mothers and promote optimal maternal health during the postpartum journey.

Keywords: maternal mental health, postpartum distress, postpartum depression, postnatal mothers

Procedia PDF Downloads 60
5874 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning

Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova

Abstract:

While most nanofibers are produced using electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the emergence of novel strategies for generating nanofibers at larger scale and higher throughput. Centrifugal spinning is a simple, cheap, and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers using centrifugal spinning is achieved through the controlled manipulation of the centrifugal force, viscoelasticity, and mass transfer characteristics of the spinning solutions. Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of a pilot plant demonstrator, NANOCENT. The main advantages of the demonstrator are lower investment cost (thanks to a simpler construction than widely used electrospinning equipment), higher production speed, new application possibilities, and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR, and biodegradable polyesters such as PHB, PLA, PCL, and PBS. The products take the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from the biodegradable polyesters in different mixtures and have been tested for different applications; filtration efficiencies for water solutions and for aerosols in air were evaluated.
Different active inserts were added to the solutions before spinning, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds, or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2 or photocatalytic ZnO and TiO2, in order to obtain inorganic nanofibers. Electrospinning is better suited than centrifugal nozzleless spinning to producing membranes for filtration applications, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn dressings and tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.

Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts

Procedia PDF Downloads 163
5873 Integrated Gas Turbine Performance Diagnostics and Condition Monitoring Using Adaptive GPA

Authors: Yi-Guang Li, Suresh Sampath

Abstract:

Gas turbine performance degrades over time, and the degradation is greatly affected by environmental, ambient, and operating conditions. Engines may degrade slowly under favorable conditions, resulting in wasted engine life if a scheduled maintenance scheme is followed; they may also degrade quickly and fail before a scheduled overhaul if conditions are unfavorable, resulting in serious secondary damage, loss of engine availability, and increased maintenance costs. To overcome these problems, gas turbine owners are gradually moving from scheduled maintenance to condition-based maintenance, for which condition monitoring is one of the key supporting technologies. This paper presents an integrated adaptive Gas Path Analysis (GPA) diagnostics and performance monitoring system developed at Cranfield University for gas turbine gas path condition monitoring. It is capable of predicting the performance degradation of major gas path components of gas turbine engines, such as compressors, combustors, and turbines, using gas path measurement data. It is also able to predict key engine performance parameters for condition monitoring, such as the turbine entry temperature, that cannot be directly measured. The developed technology has been implemented in the digital twin software Pythia to support the condition monitoring of gas turbine engines. The capabilities of the integrated GPA condition monitoring system are demonstrated in three test cases using a model gas turbine engine similar to the GE aero-derivative LM2500 engine widely used in power generation and marine propulsion. The cases show that when the compressor of the model engine degrades, the adaptive GPA is able to predict the degradation and the changing engine performance accurately using gas path measurements. The presented technology and software are generic, can be applied to different types of gas turbine engines, and provide crucial engine health and performance parameters to support condition monitoring and condition-based maintenance.

Keywords: gas turbine, adaptive GPA, performance, diagnostics, condition monitoring

Procedia PDF Downloads 85
5872 Alignment and Antagonism in Flux: A Diachronic Sentiment Analysis of Attitudes towards the Chinese Mainland in the Hong Kong Press

Authors: William Feng, Qingyu Gao

Abstract:

Despite the extensive discussions about Hong Kong’s sentiments towards the Chinese Mainland since the sovereignty transfer in 1997, there has been no large-scale empirical analysis of the changing attitudes in the mainstream media, which both reflect and shape sentiments in the society. To address this gap, the present study uses an optimised semantic-based automatic sentiment analysis method to examine a corpus of news about China from 1997 to 2020 in three main Chinese-language newspapers in Hong Kong, namely Apple Daily, Ming Pao, and Oriental Daily News. The analysis shows that although the Hong Kong press had a positive emotional tone toward China in general, the overall trend of sentiment became increasingly negative. Meanwhile, alignment and antagonism toward China both increased, providing empirical evidence of attitudinal polarisation in Hong Kong society. Specifically, Apple Daily’s depictions of China became increasingly negative, though with some positive turns before 2008, whilst Oriental Daily News consistently expressed more favourable sentiments. Ming Pao maintained an impartial stance toward China through an increased but balanced representation of positive and negative sentiments, with its subjectivity and sentiment intensity growing to an industry-standard level. The results provide new insights into the complexity of sentiments towards China in the Hong Kong press, and into media attitudes in general in terms of “us” and “them” positioning, by explicating the cross-newspaper and cross-period variations using an enhanced sentiment analysis method that incorporates sentiment-oriented and semantic role analysis techniques.

Keywords: media attitude, sentiment analysis, Hong Kong press, one country two systems

Procedia PDF Downloads 112
5871 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model

Authors: Didier Auroux, Vladimir Groza

Abstract:

This work is part of the STEEP Marie Curie ITN project and focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The need to study this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we identify the model parameters by minimizing a cost function measuring the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and the optimal values that reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and depends strictly on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect proper identification of the unknowns. This approach also enables us to extend the research to more complex cases, including a 3D time-dependent model with variations of the jet feed speed.
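As a minimal illustration of this approach (not the authors' AWJM PDE model), the sketch below identifies two parameters of a hypothetical Gaussian trench profile by minimizing a Tikhonov-regularized least-squares cost with an analytic gradient, which plays the role the adjoint computation plays in the full PDE problem:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D stand-in for a milled trench: z(x; a, b) = -a * exp(-b x^2).
def model(x, a, b):
    return -a * np.exp(-b * x**2)

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 101)
a_true, b_true = 1.5, 3.0
data = model(x, a_true, b_true) + 0.01 * rng.standard_normal(x.size)  # noisy "measurements"

lam = 1e-6  # regularization weight: stabilizes the identification against noise

def cost_and_grad(p):
    a, b = p
    r = model(x, a, b) - data                            # residual vs. measurements
    J = 0.5 * r @ r + 0.5 * lam * (p @ p)                # regularized cost
    dJda = r @ (-np.exp(-b * x**2)) + lam * a            # analytic gradient: what the
    dJdb = r @ (a * x**2 * np.exp(-b * x**2)) + lam * b  # adjoint supplies in the PDE case
    return J, np.array([dJda, dJdb])

res = minimize(cost_and_grad, x0=[1.0, 1.0], jac=True, method="L-BFGS-B")
print(res.x)  # should recover approximately (1.5, 3.0)
```

In the real problem the gradient is too complex to derive by hand, which is why automatic differentiation (TAPENADE) is used for the adjoint.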

Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization

Procedia PDF Downloads 314
5870 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima

Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez

Abstract:

Persistency, long-term memory, and randomness are intrinsic properties of earthquake time series. Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method represents a simple and elegant analysis that determines the range of variation of a natural property (here, the seismic energy released) over a time interval. Despite its simplicity, there is complexity inherent in the property measured: the cumulative curve of the energy released in time has the well-known fractal geometry of a devil’s staircase. This geometry is used to determine the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for computing the RS-Analysis of daily earthquake time series; completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to complex seismic crises in which different earthquakes occur in clusters within a short period. The Hurst exponent has accordingly been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, during which at least five medium-sized earthquakes were triggered. According to the Hurst exponent obtained for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. This type of analysis therefore not only offers an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origin.
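A minimal numerical sketch of the procedure (illustrative only; the synthetic white-noise input stands in for the daily seismic energy series) computes R/S over windows of increasing size and reads the Hurst exponent off the log-log slope:

```python
import numpy as np

# Illustrative R/S sketch, not the authors' code: estimate the Hurst
# exponent from the slope of log(R/S) versus log(window size).
def rescaled_range(x):
    y = np.cumsum(x - x.mean())   # cumulative deviation (the "devil's staircase")
    r = y.max() - y.min()         # range of the cumulative curve
    return r / x.std()            # normalized by the standard deviation

def hurst(x, sizes=(8, 16, 32, 64, 128)):
    rs = []
    for n in sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs.append(np.mean([rescaled_range(c) for c in chunks]))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope                  # the Hurst exponent H

rng = np.random.default_rng(1)
h = hurst(rng.standard_normal(4096))
print(round(h, 2))  # white noise gives H near 0.5 (persistent series give H > 0.5)
```

H above 0.5 indicates persistency (long-term memory), H below 0.5 anti-persistency, and H near 0.5 a memoryless random series; note that this naive estimator is slightly biased high for short windows.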

Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis

Procedia PDF Downloads 320
5869 Teacher Training in Saudi Arabia: A Blend of Old and New

Authors: Ivan Kuzio

Abstract:

The GIZ/TTC project is the first of its kind in the Middle East, allowing the development of a teacher training programme to degree level based on modern methodologies. The graduates of this college are part of the Saudization programme and will, over the next four years, be part of and eventually run the new Colleges of Excellence. The Colleges of Excellence are being developed to create a locally, vocationally trained workforce and will initially run alongside the current Colleges of Technology.

Keywords: blended learning, pedagogy, training, key competencies, social skills, cognitive development

Procedia PDF Downloads 306
5868 Competency Model as a Key Tool for Managing People in Organizations: Presentation of a Model

Authors: Andrea Čopíková

Abstract:

Competency-based management is a new approach to management that addresses an organization’s challenges with complexity, with the aim of finding and solving the organization’s problems and learning how to avoid them in the future. It teaches organizations to create, beyond a state of stability (which is temporary), a vital organization that is permanently able to identify and profit from internal and external opportunities. The aim of this paper is to propose a process for competency model design, on the basis of which a competency model for a financial department manager in a production company will be created. Competency models are a very useful tool in many personnel processes in any organization. They are used for the acquisition and selection of employees, for designing training and development activities, and for employee evaluation; they can also serve as a guide for career planning and as a tool for succession planning, especially for managerial positions. When creating the competency model, the Analytic Hierarchy Process (AHP) with quantitative pairwise comparison (Saaty’s method) is used; pairwise comparison is among the most widely used methods for determining weights and is part of the AHP procedure. The introductory part of the paper presents research results on the use of competency models in practice and then explains the notions of competency and competency model. The application part describes in detail the proposed methodology for the creation of competency models, on the basis of which the competency model for the position of financial department manager in a foreign manufacturing company is created. The conclusion of the paper presents the final competency model for the above-mentioned position. The competency model divides the selected competencies into three groups: managerial, interpersonal, and functional. The model describes in detail the individual levels of each competency, its target value (required level), and its level of importance.
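The AHP weighting step can be sketched as follows. The 3x3 judgment matrix is hypothetical (not the study's data) and merely illustrates how Saaty's pairwise comparisons yield priority weights for the three competency groups via the principal eigenvector:

```python
import numpy as np

# Hypothetical Saaty pairwise-comparison matrix for the three competency
# groups: managerial vs interpersonal vs functional (illustrative judgments).
A = np.array([
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # priority weights, summing to 1

ci = (eigvals[k].real - len(A)) / (len(A) - 1)  # consistency index
cr = ci / 0.58                         # 0.58 = Saaty's random index for n = 3
print(np.round(w, 3), round(cr, 3))    # CR < 0.1 means the judgments are acceptably consistent
```

For this matrix, the managerial group receives the largest weight, and the consistency ratio confirms that the (made-up) judgments are coherent enough to use.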

Keywords: analytic hierarchy process, competency, competency model, quantitative pairwise comparison

Procedia PDF Downloads 237
5867 Water Quality Calculation and Management System

Authors: H. M. B. N Jayasinghe

Abstract:

Water is found almost everywhere on Earth, but water resources contain a great deal of pollution, and some diseases can spread to living beings through water. Water must therefore undergo a number of treatments to make it drinkable, which makes purification technology for wastewater essential; wastewater treatment plants play a major role in these issues. The procedures that follow the water treatment process have traditionally been based on manual calculations and recordings, and water purification plants involve many manual processes. This makes the process time-consuming, so the final evaluation and the chemical and biological treatment steps are delayed. To prevent these drawbacks, computerized, programmable calculation and analytical techniques are being introduced to the laboratory staff, and an automated system offers a solution that guarantees rational selection. A decision support system is a way to model data and make quality decisions based upon it; it is widely used around the world for various kinds of process automation. Decision support systems that simply collect data and organize it effectively are usually called passive models: they do not suggest a specific decision but only reveal information. This web-based system is built on global positioning data with a map location facility. Its most valuable feature is an SMS and e-mail alert service to inform the appropriate person of a critical issue. The technologies behind the system are HTML, MySQL, PHP, and other web development technologies. Existing computerized water chemistry analysis tools are not very advanced; one example is the swimming pool water quality calculator. The validity of the system has been verified by test runs and by comparison with data from an existing plant. The automated system will make the work easier, in both productivity and quality.

Keywords: automated system, wastewater, purification technology, map location

Procedia PDF Downloads 244
5866 Feasibility of BioMass Power Generation in Punjab Province of Pakistan

Authors: Muhammad Ghaffar Doggar, Farah

Abstract:

The primary objective of this feasibility study is to conduct a techno-financial assessment for the installation of a biomass-based power plant in the Faisalabad division. The study involves identification of the best site for the power plant, followed by an assessment of the biomass resource potential in the area, in order to propose a power plant of suitable size. It also entails a comprehensive supply chain analysis to determine biomass fuel pricing, transportation, and storage. Further technical and financial analyses were carried out for the selection of an appropriate technology for the power plant and for its financial viability, respectively. The assessment of biomass resources and the subsequent technical analysis revealed that a 20 MW biomass power plant could be implemented at a location near Faisalabad city, the AARI site near Chak Jhumra, district Faisalabad, Punjab province. Three options for steam pressure, namely 70 bar, 90 bar, and 100 bar boilers, were considered. Using international experience and prices for power plant technology and local prices for locally available equipment, the study arrives at a biomass fuel price of around 50 US dollars (USD) per ton delivered to the power plant site. The electricity prices used for the feasibility calculations were 0.13 USD per kWh for electricity from a locally financed project and 0.11 USD per kWh for an internationally financed power plant. For local financing, the most viable choice is the 70 bar solution; with international financing, the most feasible solution uses a 90 bar boiler. Between the two options, the internationally financed 90 bar setup gives better financial results than the locally financed 70 bar project. It is concluded that a 20 MW, 90 bar, internationally financed power plant would have an equity IRR of 23% and a payback period of 7 years, making it a comparatively cheap option for power plant installation.
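The reported figures (23% equity IRR, 7-year payback) come from the study's detailed financial model, which is not reproduced here; the sketch below only illustrates how an IRR and a simple payback period are computed, using made-up cash flows:

```python
# Hypothetical cash flows (USD millions), for illustration only; these are
# not the study's figures.
def npv(rate, flows):
    """Net present value of flows, where flows[t] occurs at year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.9, hi=1.0, tol=1e-6):
    # Bisection on NPV(rate) = 0; assumes a single sign change in the flows.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(flows):
    # First year in which cumulative undiscounted cash flow is non-negative.
    total = 0.0
    for t, cf in enumerate(flows):
        total += cf
        if total >= 0:
            return t
    return None

flows = [-30] + [5] * 15   # -30 M upfront, then +5 M per year for 15 years
print(round(irr(flows), 3), payback_years(flows))  # IRR just under 15%, payback in year 6
```

Scaling such a model to the plant's real capital cost, fuel price, and tariff is precisely the feasibility calculation the study performs.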

Keywords: AARI (Ayub Agriculture Research Institute), biomass, crop residue, kWh (electricity units)

Procedia PDF Downloads 337
5865 Secure Texting Used in a Post-Acute Pediatric Skilled Nursing Inpatient Setting: A Multidisciplinary Care Team Driven Communication System with Alarm and Alert Notification Management

Authors: Bency Ann Massinello, Nancy Day, Janet Fellini

Abstract:

Background: Choosing an appropriate mode of communication among multidisciplinary care team members for the coordination of care is a complicated yet important patient safety initiative. Effective communication among team members (nursing staff, medical staff, respiratory therapists, rehabilitation therapists, patient-family services team, etc.) becomes essential to develop a culture of trust and collaboration and to deliver the highest quality care to patients and their families. Inpatient post-acute pediatrics, where children and their caregivers come for continuity of care, is no exception to the increasing use of text messages as a means of communication among clinicians. One such platform, Vocera Edge (the Vocera smart mobile app from Vocera Communications), allows teams to share sensitive patient information through an encrypted platform on company-provided shared and assigned iOS mobile devices. Objective: This paper discusses the quality initiative of transitioning from the Vocera Smartbadge to the Vocera Edge mobile app, the technology advantages, use case expansion, and lessons learned about a secure alternative modality for sending and receiving secure text messages in a pediatric post-acute setting using an iOS device. The implementation process included all direct care staff, ancillary teams, and administrative teams on the clinical units. Methods: Our institution launched the transition from the voice-prompted, hands-free Vocera Smartbadge to the Vocera Edge mobile app for secure care team texting using a big-bang approach during the first PDSA cycle. Pre- and post-implementation data were gathered using a qualitative survey of about 500 multidisciplinary team members to determine the ease of use of the application and its efficiency in care coordination.
The technology was further expanded by implementing clinical alert and alarm notifications through middleware integration with the patient monitoring (Masimo) and life safety (nurse call) systems. Additional uses of the smart mobile iPhones included pushing out apps such as Lexicomp and UpToDate so they are readily available to users for evidence-based practice in medication and disease management. Results: The communication system was successfully implemented in a shared and assigned model across all multidisciplinary teams in our pediatric post-acute setting. In just a 3-month period post-implementation, we observed a 14% increase, from 7,993 messages in 6 days in December 2020 to 9,116 messages in March 2021. This confirmed that all clinical and non-clinical teams were using this mode of communication to coordinate care for their patients. System-generated data analytics were used in addition to the pre- and post-implementation staff survey for process evaluation. Conclusion: A secure texting option on a mobile device is a safe and efficient mode for real-time care team communication and collaboration. It allows settings such as post-acute pediatric care areas to keep pace with the widespread use of mobile apps and technology in mainstream healthcare.

Keywords: nursing informatics, mobile secure texting, multidisciplinary communication, pediatrics post acute care

Procedia PDF Downloads 194
5864 Furniture Embodied Carbon Calculator for Interior Design Projects

Authors: Javkhlan Nyamjav, Simona Fischer, Lauren Garner, Veronica McCracken

Abstract:

Current whole-building life cycle assessments (LCA) primarily focus on structural and major architectural elements when measuring building embodied carbon. Most interior finishes and fixtures are covered by digital tools (such as Tally); however, furniture is still left unaccounted for. Because of repeated refresh cycles and its complexity, furniture embodied carbon can accumulate over time, becoming comparable to structure and envelope numbers. This paper presents a method to calculate the Global Warming Potential (GWP) of furniture elements in commercial buildings. The calculator uses the quantity takeoff method with GWP averages gathered from environmental product declarations (EPDs). The data were collected from EPD databases and furniture manufacturers from North America to Europe. A total of 48 GWP values were collected, 16 of which came from alternative EPDs. The finalized calculator shows the average GWP of typical commercial furniture and supports decision-making to reduce embodied carbon. The calculator was tested on MSR Design projects and showed that furniture can account for more than half of the interior embodied carbon, highlighting the importance of adding furniture to the overall conversation. However, the data collection process showed that a) acquiring furniture EPDs is not as straightforward as for other building materials; b) very few furniture EPDs exist, which can be explained from many perspectives, including the price of an EPD; and c) the EPDs themselves vary in units, LCA scopes, and timeframes, which makes it hard to compare products. Even with these current limitations, the emerging focus on interior embodied carbon will create more demand for furniture EPDs and will allow manufacturers to showcase their efforts to reduce embodied carbon. The study concludes with recommendations on how designers can reduce furniture embodied carbon through reuse and closed-loop systems.
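The quantity-takeoff method described above reduces to multiplying the count of each furniture type by an average GWP drawn from EPDs and summing. A minimal sketch, with placeholder GWP values that are not the averages collected in the study:

```python
# Hedged sketch of the quantity-takeoff approach: quantity x average GWP,
# summed over furniture types. GWP values below are invented placeholders.

AVG_GWP_KG_CO2E = {            # kg CO2e per unit, illustrative only
    "task_chair": 72.0,
    "workstation_desk": 110.0,
    "conference_table": 190.0,
}

def furniture_embodied_carbon(takeoff):
    """Sum GWP over a {furniture_type: quantity} takeoff, in kg CO2e."""
    return sum(AVG_GWP_KG_CO2E[item] * qty for item, qty in takeoff.items())

takeoff = {"task_chair": 120, "workstation_desk": 120, "conference_table": 8}
print(furniture_embodied_carbon(takeoff))  # 23360.0 kg CO2e for this takeoff
```

A real calculator would additionally normalize the EPD units, LCA scopes, and timeframes mentioned above before averaging.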

Keywords: furniture, embodied carbon, calculator, tenant improvement, interior design

Procedia PDF Downloads 211
5863 Development, Testing, and Application of a Low-Cost Technology Sulphur Dioxide Monitor as a Tool for Use in a Volcanic Emissions Monitoring Network

Authors: Viveka Jackson, Erouscilla Joseph, Denise Beckles, Thomas Christopher

Abstract:

Sulphur dioxide (SO2) is a non-flammable, non-explosive, colourless gas with a pungent, irritating odour, and is one of the main gases emitted by volcanoes. Sulphur dioxide has been recorded at concentrations hazardous to humans (0.25–0.5 ppm, ~650–1300 μg/m3) downwind of many volcanoes and hence warrants constant air-quality monitoring around these sites. It has been linked to an increase in chronic respiratory disease attributed to long-term exposure and to alterations in lung and other physiological functions attributed to short-term exposure. Sulphur Springs in Saint Lucia is a highly active geothermal area located within the Soufrière Volcanic Centre and is a park widely visited by tourists and locals. It is also a source of continuous volcanic emissions via its many fumaroles and bubbling pools, raising concern among residents and visitors to the park about the effects of exposure to these gases. In this study, we introduce a novel SO2 measurement system for monitoring and quantifying ambient levels of airborne volcanic SO2 using low-cost technology. This work involves the extensive production of low-cost SO2 monitors/samplers, as well as field examination in tandem with standard commercial samplers (SO2 diffusion tubes). It also incorporates community involvement in the volcanic monitoring process through non-professional users of the instrument. We present the preliminary monitoring results obtained from the low-cost samplers, identify the areas in the park exposed to high concentrations of ambient SO2, and assess the feasibility of the instrument for non-professional use and application in volcanic settings.
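The two unit ranges quoted above (0.25–0.5 ppm versus ~650–1300 μg/m3) are related by the standard mixing-ratio-to-mass-concentration conversion, assuming 25 °C and 1 atm (molar volume ≈ 24.45 L/mol):

```python
# Hedged sketch of the ppm -> ug/m3 conversion for SO2, assuming 25 C and
# 1 atm. The molar volume and molecular weight are standard reference values.

MW_SO2 = 64.07          # g/mol, molecular weight of SO2
MOLAR_VOLUME = 24.45    # L/mol at 25 C and 1 atm

def ppm_to_ug_m3(ppm, molar_weight=MW_SO2):
    """Convert a gas mixing ratio in ppm to a mass concentration in ug/m3."""
    return ppm * molar_weight * 1000.0 / MOLAR_VOLUME

for c in (0.25, 0.5):
    print(c, "ppm ->", round(ppm_to_ug_m3(c)), "ug/m3")
```

Running this reproduces approximately 655 and 1310 μg/m3, consistent with the ~650–1300 μg/m3 range cited above.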

Keywords: ambient SO2, community-based monitoring, risk-reduction, sulphur springs, low-cost

Procedia PDF Downloads 462
5862 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc

Authors: Suneel Kumar, Nanda Kumar J., Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan

Abstract:

Fan technology is critical and crucial for any aero engine, and the fan disc forms a critical part of the fan module. It is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM), and metrology software for intermediate and final dimensional and geometrical verification during prototype development of a disc manufactured through forging and machining. The circumferential dovetails, manufactured through milling, are evaluated using the analysed metrological process. Metrological optimisation requires a change of philosophy: quality measurements must be made available as fast as possible, to improve process knowledge and accelerate the process, while remaining accurate, precise, and traceable. The offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial portions of the study and are discussed. A dimensional measurement plan per the ASME B89.7.2 standard is an important requirement for reaching an optimised CMM measurement plan and strategy. The effects of the probing strategy, stylus configuration, and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are presented as enhancements of the R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for dovetail evaluation and inspection-time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.
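The repeatability side of the R&R values reported above can be illustrated with a minimal check: the spread of repeated probings of one feature compared against its tolerance band. The readings and tolerance below are invented for illustration, not measurements from the study.

```python
# Hedged sketch of a repeatability check: standard deviation of repeated
# CMM probings of one dovetail dimension versus an assumed tolerance band.
# All numbers are illustrative.

import statistics

readings_mm = [24.0012, 24.0015, 24.0010, 24.0013, 24.0011]  # repeat probings
tolerance_mm = 0.02                                          # +/-0.01 band

repeatability = statistics.stdev(readings_mm)
# A common rule of thumb: 6*sigma of the gauge should consume only a small
# fraction of the tolerance band for the measurement to be acceptable.
pct_of_tolerance = 100.0 * 6 * repeatability / tolerance_mm
print(round(repeatability, 6), "mm,", round(pct_of_tolerance, 1), "% of tol")
```

A full gauge R&R study would additionally separate operator (reproducibility) variance from this equipment (repeatability) variance.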

Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc

Procedia PDF Downloads 105
5861 Metropolitan Governance in Statutory Plan Making Process

Authors: Vibhore Bakshi

Abstract:

This research paper is a step towards understanding the role of governance in the plan preparation process. It addresses the complexities of the peri-urban, historical constructions, the politics and policies of sustainability, and legislative frameworks. The paper reflects on Delhi NCT as a classical case that has witnessed structural changes across the master plans of 1981, 2001, and 2021 and the proposed draft for 2041. Landsat imagery of Delhi for 1989 and 2018 shows an increase in built-up areas around the periphery of the NCT. This peri-urbanization has resulted from increasing in-migration to the peri-urban areas of Delhi. Built-up extraction for the years 1981, 1991, 2001, 2011, and 2018 highlights growing peri-urbanization on scarce land; it therefore becomes equally important to research the history of the land and its legislative measures. It is interesting to trace the changes that have occurred in the land of Delhi under the different master plans and land legislation policies. The master planning process in Delhi has experienced many complexities in comparison with other metropolitan regions of the world. The paper identifies the shortcomings of the current master planning approach with regard to the stages of the planning process, the traditional planning approach, and lagging ICT-based interventions. Metropolitan governance systems across the globe and in India show diversity in organizational setup and varied dissemination of functions.

Keywords: governance, land provisions, built-up areas, in-migration, built-up extraction, master planning process, legislative policies, metropolitan governance systems

Procedia PDF Downloads 168
5860 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures for modeling time-to-event data. For complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, owing to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ metrics appropriate for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models; our model demonstrates better performance across all datasets.
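The weighted-cumulative-incidence idea above can be sketched numerically: attention-style risk scores are normalized with a softmax and used to combine cause-specific cumulative incidence functions. The CIF curves and scores below are synthetic, and this is only an illustration of the combination step, not the authors' implementation.

```python
# Hedged NumPy sketch: softmax risk weights (in the spirit of RIW) combining
# cause-specific CIFs into a weighted cumulative incidence function (WCIF).
# CIF values and scores are synthetic placeholders.

import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))   # stabilized softmax
    return e / e.sum()

# Two competing risks; CIFs evaluated on a common time grid (non-decreasing).
cif = np.array([
    [0.00, 0.05, 0.12, 0.20, 0.26],   # cause 1
    [0.00, 0.02, 0.06, 0.09, 0.11],   # cause 2
])
riw_scores = np.array([1.3, 0.4])     # hypothetical learned risk scores

weights = softmax(riw_scores)         # attention weights, sum to 1
wcif = weights @ cif                  # weighted cumulative incidence
survival = 1.0 - cif.sum(axis=0)      # overall survival from all-cause CIF
print(weights.round(3), wcif.round(3), survival.round(2))
```

The actual model learns the scores jointly with the RNN over recurrent-event histories; this sketch shows only how such weights turn cause-specific CIFs into a single WCIF.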

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 86
5859 Investigation of the Effect of Lecturers' Attributes on Students' Interest in Learning Statistics in Ghanaian Tertiary Institutions

Authors: Samuel Asiedu-Addo, Jonathan Annan, Yarhands Dissou Arthur

Abstract:

The study explores the effect of lecturers’ personal attributes on students’ interest in statistics. The personal attributes examined include the lecturer’s dynamism, communication strategies, rapport in the classroom, and applied knowledge during lectures. An exploratory research design was used to establish the effect of lecturers’ personal attributes on students’ interest. Data were analyzed by means of confirmatory factor analysis and structural equation modeling (SEM) using the SmartPLS 3 program. The study recruited 376 students from the Faculty of Technical and Vocational Education of the University of Education, Winneba (Kumasi campus), Ghana Technology University College, and Kwame Nkrumah University of Science and Technology. The results revealed that the personal attributes of an effective lecturer, namely dynamism, rapport, communication, and applied knowledge, together explain 52.9% of students’ interest in statistics. Our regression analysis and structural equation modeling confirm that lecturers’ personal attributes predict students’ interest, explaining 52.9% and 53.7% of the variance, respectively. The paper concludes that the total effect of a lecturer’s attributes on students’ interest is moderate and significant. We further show that personal attributes such as applied knowledge and rapport have a positive and significant effect on tertiary students’ interest in statistics, whereas lecturers’ communication and dynamism, though positively related, do not significantly predict student interest in statistics.

Keywords: student interest, effective teacher, personal attributes, regression and SEM

Procedia PDF Downloads 357
5858 Ankh Key Broadband Array Antenna for 5G Applications

Authors: Noha M. Rashad, W. Swelam, M. H. Abd ElAzeem

Abstract:

A simple array antenna design supporting millimeter-wave applications for short-range wireless communications, such as 5G, is presented in this paper. The design exploits the V-band, according to IEEE standards, as the antenna works in the 70 GHz band with a bandwidth of more than 11 GHz and a peak gain of more than 13 dBi. The design was simulated using different numerical techniques, which achieved very good agreement with one another.

Keywords: 5G technology, array antenna, microstrip, millimeter wave

Procedia PDF Downloads 302
5857 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem

Authors: Renata Kurpiewska-Korbut

Abstract:

On the assumption that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context, i.e., Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid, together with the theoretical perspective of contingency theory, whose central point is that the context or a specific set of conditions determines behavior and the choice of methods of action, help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices for using and sharing data (including safeguards for sensitive data) are implemented by the surveyed organizations, which have comparable human and technological capabilities, and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.

Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine

Procedia PDF Downloads 90
5856 Parents-Children Communication in College

Authors: Yin-Chen Liu, Chih-Chun Wu, Mei-He Shih

Abstract:

In today’s technological society, using ICT (information and communication technology) to contact one another is very common, and interpersonal ICT communication helps maintain social support. This study therefore investigated ICT communication between undergraduates and their parents, and also examined gender differences. The sample comprised 1,209 undergraduates: 624 (51.6%) males, 584 (48.3%) females, and 1 gender-unidentified participant. In the sample, 91.8% used phones to contact their fathers and 93.8% used phones to contact their mothers; 78.5% and 87.6% used LINE to contact their fathers and mothers, respectively. As for Facebook, only 13.4% and 16.5% used it to contact their fathers and mothers, respectively. These results imply that undergraduates nowadays contact their parents by phone and LINE far more commonly than by Facebook. According to Pearson correlations, the more undergraduates refused to add their fathers as Facebook friends, the more they refused to add their mothers. One possible reason is the wish to keep different social networks, such as family and friends, separate; another is avoiding parental monitoring. This could be why undergraduates prefer phone and LINE over Facebook when contacting their parents. Pearson correlations also showed that the more undergraduates actively contacted their fathers, the more they actively contacted their mothers; likewise, the more their fathers actively contacted them, the more their mothers did. Based on these results, this study encourages both parents and undergraduates to contact each other, for contact between any two family members is associated with contact between other family members. Clearly, contact between family members is bidirectional.
Future research might investigate whether this bidirectional contact is associated with family relations. Regarding gender differences, results from independent t-tests showed that daughters contacted their parents more actively than sons did. This may be because parents emphasize to their daughters that the world can be dangerous, so daughters build the habit of contacting them more. Results from paired-sample t-tests showed that undergraduates found talking to their mothers on the phone more satisfying, and felt more intimacy and support, than talking to their fathers.

Keywords: family ICT communication, parent-child ICT communication, FACEBOOK and LINE, gender differences

Procedia PDF Downloads 202
5855 A Study on How Insider Fraud Impacts FinTechs

Authors: Claire Norman-Maillet

Abstract:

Insider fraud is a major financial crime threat whereby an employee defrauds (or attempts to defraud) their current, prospective, or past employer. ‘Employee’ covers anyone employed by the company, including board members and part-time staff. Insider fraud can take many forms, including an employee working alone or in collusion with others. Insider fraud has been on the rise since the Coronavirus pandemic and shows no signs of slowing. The objective of this research is to better understand how FinTechs are impacted by insider fraud and, therefore, how to stop it. The research makes an original contribution to the financial crime field, given that its timing is intertwined with the cost-of-living crisis in the UK and the global Coronavirus pandemic. It focuses on insider fraud within FinTechs specifically, as they are arguably a modern phenomenon in the financial institutions space and have cutting-edge technology at their disposal. To achieve the research objective, the researcher held semi-structured interviews with over 20 individuals who deal with insider fraud in a practitioner, recruitment, or advisory capacity. The interviews were subsequently transcribed and analysed thematically. The main findings suggest that FinTechs are arguably in the best position to combat insider fraud, given their focus on recent technologies, which can be turned against the threat. However, insider fraud has been ignored owing to a reluctance to accept that colleagues would defraud their employer, as well as the belief that external fraud is the more important threat. The research concludes that, whilst technology is understandably prioritised by FinTechs for providing an agreeable customer experience, insider fraud needs a platform upon which to be recognised as a significant threat to any company, and it needs to be given the same level of weighting and attention by executive committees and boards as the customer experience.

Keywords: insider fraud, occupational fraud, COVID-19, COVID, Coronavirus, pandemic, internal fraud, financial crime, economic crime

Procedia PDF Downloads 56
5854 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

It is known that residual welding deformations negatively affect the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, selection of an optimal technology ensuring minimum welding deformations is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has developed a theory for the estimation of welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of metal due to welding heating and subsequent cooling. However, the dependences for determining the structural deformations arising from these volumetric changes in the weld area allowed calculations only for simple structures, such as units, flat sections, and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite elements method for solving the deformation problem. First, the longitudinal and transverse shortenings of the welding joints are calculated using the method of analytic dependences; from these shortenings, forces are calculated whose action is equivalent to the action of the active welding stresses. Then a finite-element model of the structure is developed and the equivalent forces are applied to this model. Based on the calculation results, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and implemented.
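The step of converting an analytically estimated shortening into an equivalent force for the finite-element model can be sketched with the elastic relation F = E·A·ΔL/L. The cross-section and shortening below are invented for illustration; they are not values from the paper.

```python
# Hedged sketch: converting a predicted weld shortening into an equivalent
# axial force for a finite-element model, via F = E * A * dL / L.
# Dimensions and shortening are illustrative assumptions.

E_STEEL = 210e9          # Young's modulus of steel, Pa

def equivalent_force(shortening_m, length_m, area_m2, e_modulus=E_STEEL):
    """Axial force producing the same shortening in an elastic member."""
    return e_modulus * area_m2 * shortening_m / length_m

# A 2 m weld seam on a 10 mm x 200 mm cross-section, with 0.4 mm
# longitudinal shortening predicted by the analytic-dependence method.
f = equivalent_force(shortening_m=0.4e-3, length_m=2.0, area_m2=0.01 * 0.2)
print(round(f / 1e3, 1), "kN")  # 84.0 kN for these assumed inputs
```

In the workflow above, forces of this kind are applied to the FE model, and the resulting deformations guide the choice of assembly and welding sequence.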

Keywords: residual welding deformations, longitudinal and transverse shortenings of welding joints, method of analytic dependences, finite elements method

Procedia PDF Downloads 406
5853 Low-Voltage and Low-Power Bulk-Driven Continuous-Time Current-Mode Differentiator Filters

Authors: Ravi Kiran Jaladi, Ezz I. El-Masry

Abstract:

Emerging technologies, such as ultra-wideband wireless access operating at ultra-low power, present several challenges because their inherent design limits the use of voltage-mode filters. Continuous-time current-mode (CTCM) filters have therefore become very popular in recent times, as they offer a wider dynamic range, improved linearity, and extended bandwidth compared to their voltage-mode counterparts. The goal of this research is to develop analog filters suitable for current scaled CMOS technologies. The bulk-driven MOSFET is one of the most popular low-power design techniques for the existing challenges, while other techniques have obvious shortcomings. In this work, a CTCM gate-driven (GD) differentiator is presented with a frequency range from dc to 100 MHz, operating at a very low supply voltage of 0.7 V. A novel CTCM bulk-driven (BD) differentiator is designed for the first time, which reduces power consumption to a fraction of that of the GD differentiator. Both the GD and BD differentiators have been simulated using Cadence with TSMC 65 nm technology for all the bilinear and biquadratic band-pass frequency responses. These basic building blocks can be used to implement higher-order filters: a 6th-order cascade CTCM Chebyshev band-pass filter has been designed using both the GD and BD techniques. In conclusion, low-power GD and BD 6th-order Chebyshev stagger-tuned band-pass filters were simulated; all parameters obtained from the resulting realizations are analyzed and compared, and Monte Carlo and sensitivity analyses are presented for both 6th-order filters.
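The stagger-tuned cascade mentioned above can be illustrated behaviorally: a 6th-order band-pass response built from three second-order band-pass sections with slightly offset centre frequencies. The centre frequencies, Q values, and gains below are illustrative, not the paper's transistor-level design values.

```python
# Hedged behavioral sketch of a stagger-tuned band-pass cascade: three
# biquadratic band-pass sections H(s) = (w0/Q)s / (s^2 + (w0/Q)s + w0^2)
# with offset centre frequencies. All parameter values are illustrative.

import math

def biquad_bp(s, w0, q):
    """Second-order band-pass section evaluated at complex frequency s."""
    return (w0 / q) * s / (s * s + (w0 / q) * s + w0 * w0)

def cascade_mag(freq_hz, sections):
    """Magnitude of the cascaded response at a given frequency."""
    s = 1j * 2 * math.pi * freq_hz
    h = 1.0 + 0j
    for w0, q in sections:
        h *= biquad_bp(s, w0, q)
    return abs(h)

# Three stagger-tuned sections around a hypothetical 50 MHz centre.
sections = [(2 * math.pi * f0, 8.0) for f0 in (45e6, 50e6, 55e6)]

in_band = cascade_mag(50e6, sections)    # response inside the pass-band
out_band = cascade_mag(5e6, sections)    # response deep in the stop-band
print(in_band > out_band)                # pass-band gain exceeds stop-band
```

A Chebyshev realization would set the section centre frequencies and Qs from the equiripple pole locations; the sketch only shows how staggering the sections widens the composite pass-band.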

Keywords: bulk-driven (BD), continuous-time current-mode filters (CTCM), gate-driven (GD)

Procedia PDF Downloads 257
5852 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and time to market for business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, and data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required for ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices through an environment variable, providing service discovery. The i2kit toolkit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer brings more important disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35 MB). The system is also more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The i2kit toolkit is currently under development at the IMDEA Software Institute.
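The pipeline described above (declarative microservice definitions, one machine image per microservice, one instance group plus load balancer per service in a CloudFormation template) can be sketched in miniature. The field names and the fake image-build step below are invented for illustration; they are not i2kit's real schema or APIs.

```python
# Hedged sketch of the i2kit-style transformation: a set of microservices
# (each a pod of containers) mapped to a CloudFormation-like structure with
# one instance group and one load balancer per service. All field names and
# identifiers are hypothetical placeholders.

def build_image(service_name, containers):
    """Stand-in for the linuxkit image-build step: returns a fake AMI id."""
    return f"ami-{service_name}-{len(containers)}containers"

def to_cloudformation(services):
    resources = {}
    for name, spec in services.items():
        ami = build_image(name, spec["containers"])
        resources[f"{name}Asg"] = {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "ImageId": ami,
            "Replicas": spec.get("replicas", 1),
        }
        resources[f"{name}Elb"] = {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            # Other services would receive this endpoint via an env var,
            # providing the service discovery described above.
            "Endpoint": f"{name}.example-elb.amazonaws.com",
        }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}

services = {
    "api": {"containers": ["app", "sidecar-logger"], "replicas": 2},
    "worker": {"containers": ["queue-consumer"]},
}
template = to_cloudformation(services)
print(sorted(template["Resources"]))
```

The real prototype emits full CloudFormation resource properties and invokes linuxkit to build each AMI; this sketch only shows the shape of the declarative-input-to-template mapping.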

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 176
5851 Training Undergraduate Engineering Students in Robotics and Automation through Model-Based Design Training: A Case Study at Assumption University of Thailand

Authors: Sajed A. Habib

Abstract:

Problem-based learning (PBL) is a student-centered pedagogy that originated in the medical field and has since been used extensively in other disciplines, with recognized advantages and limitations. PBL has been applied in various undergraduate engineering programs with mixed outcomes. The current fourth industrial revolution (the digital era, or Industry 4.0) has made it essential for many science and engineering students to receive effective training in advanced courses such as industrial automation and robotics. This paper presents a case study at Assumption University of Thailand, where a PBL-like approach was used to teach aspects of automation and robotics to selected groups of undergraduate engineering students. These students received basic training in automation before participating in a subsequent training session, in which they solved technical problems of increased complexity. The participating students' evaluation of the training sessions, in terms of learning effectiveness, skills enhancement, and incremental knowledge following the problem-solving session, was captured through a 14-question follow-up survey using a 5-point rating scale. From the most recent training event, 70% of respondents overall indicated that their skill levels were enhanced well beyond their pre-training level, while 60.4% of respondents from the same event indicated that their knowledge gain from the session was much greater than what they had before the training. The instructor-facilitator involved in the training events suggested that this method of learning is more suitable for senior or advanced-level students than for those at the freshman level, since certain skills needed to participate effectively in such problem-solving sessions are acquired over time rather than instantly.

Keywords: automation, industry 4.0, model-based design training, problem-based learning

Procedia PDF Downloads 130