Search results for: statistical machine translation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7110

330 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase. The shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors, to accurately ascertain the duration of the different phases in the lifetime, and to estimate the time required for stabilization. This approach also helps in assessing whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
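
Since the abstract rests on the Weibull shape parameter beta and the Laplace trend test, a minimal sketch may help make the mechanics concrete. The sketch below is illustrative only: the failure times are synthetic and scipy's Weibull parametrization is assumed; the study's actual field failure data and software are not described in the abstract.

```python
# Illustrative sketch (synthetic data): Weibull fit and Laplace trend test.
import numpy as np
from scipy import stats

# Hypothetical cumulative failure times in days (real analyses use field data).
failure_times = np.array([12.0, 30.0, 55.0, 90.0, 130.0, 200.0, 290.0, 400.0])

# Two-parameter Weibull fit (location fixed at zero).
beta, _, eta = stats.weibull_min.fit(failure_times, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f}")
# beta < 1: infant mortality; beta ~ 1: random failures; beta > 1: wear-out.

# Laplace trend test over the observation window [0, T]; under a homogeneous
# Poisson process (no trend) U is approximately standard normal.
T = failure_times[-1]
events = failure_times[:-1]               # last event taken as end of window
n = events.size
U = (events.mean() - T / 2) / (T * np.sqrt(1.0 / (12 * n)))
print(f"Laplace U = {U:.2f} (U < 0: reliability growth; U > 0: degradation)")
```

A beta below 1 points to infant mortality and a beta above 1 to wear-out, while the Laplace statistic indicates whether the failure intensity is improving or degrading over the observation window, which is how the phases of the bathtub curve can be delimited.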

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 65
329 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients

Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado

Abstract:

Background: The introduction of prophylactic or preemptive therapies has effectively decreased CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) and quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS AmpliPrep/COBAS TaqMan (CAP/CTM) (Roche®) was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antivirals has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia for detecting episodes of CMV infection/reactivation, and 2) the cut-off of viral load for the introduction of ganciclovir (GCV). Pp65 antigenemia was performed, and the corresponding plasma samples were stored at -20°C for subsequent CMV detection by CAP/CTM. Tests were compared using the kappa index. The appearance of positive antigenemia was considered the state variable for determining the cut-off of CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples, and the kappa index was 30.9%. The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected CMV a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA for detecting positive antigenemia was 34.25 IU/mL, with 88.2% sensitivity, 100% specificity and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasting 15 days or more). These data suggest that using antigenemia to define the duration of GCV therapy might prompt early interruption of the antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though CAP/CTM by Roche showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. A cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.
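
Because the cut-off was derived from a ROC curve with positive antigenemia as the state variable, a brief sketch of that computation may be useful. This is a hypothetical illustration with synthetic viral loads using scikit-learn rather than the SPSS workflow reported in the abstract; the optimum is chosen here with Youden's J, one common criterion.

```python
# Illustrative sketch: ROC-based cut-off for CMV viral load vs. antigenemia.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic data: viral load (IU/mL) and antigenemia positivity (state variable).
rng = np.random.default_rng(0)
viral_load = np.concatenate([rng.lognormal(2.5, 1.0, 60),    # antigenemia-negative
                             rng.lognormal(4.5, 1.0, 40)])   # antigenemia-positive
antigenemia = np.array([0] * 60 + [1] * 40)

fpr, tpr, thresholds = roc_curve(antigenemia, viral_load)
youden = tpr - fpr                       # Youden's J selects the optimum cut-off
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(antigenemia, viral_load):.2f}")
print(f"cut-off = {thresholds[best]:.1f} IU/mL, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```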

Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off

Procedia PDF Downloads 191
328 Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles

Authors: K. Verma, V. S. Moholkar

Abstract:

In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which has exhibited a synergistic enhancement effect surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and Brunauer-Emmett-Teller (BET) surface area analysis were employed. The size of the immobilized Fe₃O₄@laccase was found to be 60 nm, and the maximum loading of laccase was 24 mg/g of nanoparticle. An investigation was conducted to study the effect of various process parameters, such as immobilized Fe₃O₄@laccase dose, temperature, and pH, on the percentage chemical oxygen demand (COD) removal as the response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even more significant improvement (90.31% COD removal) was achieved with the ultrasound-assisted enzymatic reaction utilizing a 10% duty cycle. The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano and Koga, and Michaelis-Menten models, showed that ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, Vmax values for free and immobilized laccase were found to be 0.021 mg/L min and 0.045 mg/L min, respectively, while Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the prepared immobilized laccase had excellent recyclability, retaining 70% of its activity over 6 consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@laccase) a promising option for various environmental applications, particularly in water pollution control and treatment.
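
For readers unfamiliar with how Vmax and Km are extracted, a minimal nonlinear least-squares fit of the Michaelis-Menten equation is sketched below. The substrate concentrations and rates are synthetic, seeded near the values reported above; the study's actual data and fitting software are not given in the abstract.

```python
# Illustrative sketch: estimating Vmax and Km from a Michaelis-Menten fit.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Reaction rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

# Synthetic substrate concentrations (mg/L) and noisy observed rates (mg/L min).
S = np.array([10, 25, 50, 100, 200, 400, 800], dtype=float)
rng = np.random.default_rng(1)
v_obs = michaelis_menten(S, 0.045, 136.5) * (1 + 0.03 * rng.standard_normal(S.size))

(vmax, km), _ = curve_fit(michaelis_menten, S, v_obs, p0=[0.05, 100.0])
print(f"Vmax = {vmax:.3f} mg/L min, Km = {km:.1f} mg/L")
# A lower Km alongside a higher Vmax would indicate greater substrate affinity,
# as reported here for the immobilized laccase under ultrasound.
```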

Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation

Procedia PDF Downloads 67
327 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a large amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which type IV pressure vessels are manufactured, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional, and each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with varying parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; both are calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and obtain an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analyses. Machine learning could then, for example, be used to identify the optimum directly from this data pool without running additional simulations.
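
The abstract notes that the winding angle and dome thickness are computed analytically but does not give the formulas. One standard analytical relation for geodesic filament winding is Clairaut's relation, sin(alpha) = r_b / r, where r_b is the polar-opening radius; the sketch below assumes this relation and hypothetical radii, so it is an illustration rather than the study's actual method.

```python
# Illustrative sketch: geodesic winding angle along the dome via Clairaut's
# relation, sin(alpha) = r_b / r (assumed; the study's exact formulas are
# not given in the abstract).
import numpy as np

r_cyl = 0.20        # cylinder radius in metres (assumed)
r_polar = 0.05      # polar-opening radius in metres (assumed)

# Sample radii from the cylinder-dome junction down toward the polar opening.
radii = np.linspace(r_cyl, r_polar, 6)
angles = np.degrees(np.arcsin(r_polar / radii))

for r, a in zip(radii, angles):
    print(f"r = {r:.3f} m -> winding angle = {a:5.1f} deg")
# The angle grows from ~14.5 deg on the cylinder to 90 deg at the polar
# opening, which is why dome thickness build-up must be modeled per layer.
```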

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135
326 Association of Zinc with New Generation Cardiovascular Risk Markers in Childhood Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Zinc is a vital element required for growth and development, a fact that makes zinc important particularly for children. It maintains normal cellular structure and functions, and this essential element appears to have protective effects against coronary artery disease and cardiomyopathy. Higher serum zinc levels are associated with a lower risk of cardiovascular diseases (CVDs), and there is a significant association between low serum zinc levels and heart failure. Zinc may therefore be a potential biomarker of cardiovascular health. High-sensitivity cardiac troponin T (hs-cTnT) and cardiac myosin binding protein C (cMyBP-C) are new generation markers used for the prediagnosis, diagnosis, and prognosis of CVDs. The aim of this study is to determine zinc as well as new generation cardiac marker profiles in children with normal body mass index (N-BMI), obese (OB) children, morbidly obese (MO) children, and children with metabolic syndrome (MetS) findings, and to investigate the associations among them. Four study groups were constituted. The study protocol was approved by the institutional Ethics Committee of Tekirdag Namik Kemal University, and parents of the participants filled out informed consent forms to participate in the study. Group 1 was composed of 44 children with N-BMI. Groups 2 and 3 comprised 43 OB and 45 MO children, respectively. Forty-five MO children with MetS findings were included in Group 4. World Health Organization age- and sex-adjusted BMI percentile tables were used to constitute the groups; the percentile ranges were 15-85, 95-99, and above 99 for N-BMI, OB, and MO, respectively. Criteria for MetS findings were determined. Routine biochemical analyses, including zinc, were performed. Hs-cTnT and cMyBP-C concentrations were measured by kits based on the enzyme-linked immunosorbent assay principle. Appropriate statistical tests within the scope of SPSS were used for the evaluation of the study data, and p<0.05 was accepted as statistically significant. The four groups were matched for age and gender. Decreased zinc concentrations were measured in Groups 2, 3, and 4 compared to Group 1. The groups did not differ from one another in terms of hs-cTnT. There were statistically significant differences between the cMyBP-C levels of the MetS group and the N-BMI as well as OB groups, with an increasing trend going from the N-BMI group to the MetS group. There were statistically significant negative correlations between zinc and hs-cTnT as well as cMyBP-C concentrations in the MetS group. In conclusion, the inverse correlations detected between zinc and the new generation cardiac markers (hs-cTnT and cMyBP-C) indicate that decreased levels of this physiologically essential trace element accompany increased levels of hs-cTnT as well as cMyBP-C in children with MetS. This finding emphasizes that both zinc and these new generation cardiac markers may be evaluated as biomarkers of cardiovascular health in severe childhood obesity precipitated by MetS findings, and may also serve as early indicators of future cardiovascular risk in the adulthood of children with MetS.
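
The abstract does not name the specific SPSS tests used, so the sketch below shows one plausible reading of the analysis: a nonparametric comparison of zinc across the four groups, followed by the negative zinc-marker correlation reported for the MetS group. All numbers are synthetic and the test choices are assumptions.

```python
# Illustrative sketch (synthetic data, assumed tests): group comparison and
# within-group correlation of the kind reported in this abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
zinc = {
    "N-BMI": rng.normal(95, 10, 44),    # synthetic serum zinc values
    "OB":    rng.normal(88, 10, 43),
    "MO":    rng.normal(85, 10, 45),
    "MetS":  rng.normal(82, 10, 45),
}

h, p = stats.kruskal(*zinc.values())    # nonparametric four-group comparison
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Negative correlation between zinc and a cardiac marker within the MetS group.
cmybpc = 120 - 0.8 * zinc["MetS"] + rng.normal(0, 5, 45)
rho, p_r = stats.spearmanr(zinc["MetS"], cmybpc)
print(f"zinc vs cMyBP-C: rho = {rho:.2f}, p = {p_r:.4f}")
```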

Keywords: cardiac myosin binding protein-C, cardiovascular diseases, children, high sensitive cardiac troponin T, obesity

Procedia PDF Downloads 110
325 Developing Motorized Spectroscopy System for Tissue Scanning

Authors: Tuba Denkceken, Ayse Nur Sarı, Volkan Ihsan Tore, Mahmut Denkceken

Abstract:

The aim of the presented study was to develop a new motorized spectroscopy system. Our system is composed of a probe part and a motor part. The probe consists of bioimpedance and fiber optic components: two platinum wires (each 25 micrometers in diameter) and two fiber cables (each 50 micrometers in diameter), respectively. The probe was examined on a tissue phantom (polystyrene microspheres with different diameters). In the bioimpedance part of the probe, current was transferred to the phantom and conductivity information was obtained. Two adjacent fiber cables were used in the fiber optic part of the system: light was transferred to the phantom by the fiber connected to the light source, and the backscattered light was collected with the other, adjacent fiber for analysis. It is known that the nucleus expands and the nucleus-cytoplasm ratio increases during cancer progression in the cell, and this is one of the most important criteria when pathologists evaluate tissue. The sensitivity of the probe to particle (nucleus) size in the phantom was therefore tested during the study. Spectroscopic data obtained from our system on the phantom were evaluated by multivariate statistical analysis, yielding information about the particle size in the phantom. Bioimpedance and fiber optic experiments on the polystyrene microspheres showed that the impedance value and the oscillation amplitude increased as the particle size enlarged; these results are compatible with previous studies. To motorize the system, three driver electronic circuits were designed first. In this part, supply capacitors were placed symmetrically near the supply inputs to balance the oscillation, female capacitors were connected to the control pin, and optic and mechanic switches were made. The drivers were structurally designed so that they could command highly calibrated motors, and it was considered important to keep the drivers' dimensions as small as possible (4.4x4.4x1.4 cm). Three miniature step motors were then connected to each other along with the three drivers. Since spectroscopic techniques are quantitative methods, they yield more objective results than traditional ones. In the next part of this study, we plan to acquire spectroscopic data containing optic and impedance information from cell cultures of normal, low-metastatic and high-metastatic breast cancer cells. If high sensitivity in differentiating cells is achieved, it may be possible to scan large tissue surface areas in a short time with small steps. By means of the motorized feature of the system, no region of the tissue will be missed, and in this manner we will be able to identify cancerous parts of the tissue meticulously. This work is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) through 3001 project (115E662).
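
The abstract states that the spectra were evaluated by multivariate statistical analysis to infer particle size, without naming the method. As one illustration, the sketch below applies PCA to synthetic backscattered spectra from two particle sizes; the toy spectral model is an assumption, not the study's optics.

```python
# Illustrative sketch: multivariate analysis (PCA) of synthetic backscattered
# spectra to separate phantoms containing different particle sizes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
wavelengths = np.linspace(450, 700, 120)            # nm

def spectrum(size_um):
    # Toy model: larger scatterers produce faster spectral oscillations.
    return (1 + 0.2 * np.sin(wavelengths / (8.0 / size_um))
            + 0.02 * rng.standard_normal(wavelengths.size))

spectra = np.array([spectrum(s) for s in [4.0] * 10 + [10.0] * 10])
scores = PCA(n_components=2).fit_transform(spectra)

# Samples with different particle sizes should separate along the first PCs.
print("PC1 range, 4 um :", scores[:10, 0].min().round(2), scores[:10, 0].max().round(2))
print("PC1 range, 10 um:", scores[10:, 0].min().round(2), scores[10:, 0].max().round(2))
```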

Keywords: motorized spectroscopy, phantom, scanning system, tissue scanning

Procedia PDF Downloads 191
324 An Assessment of the Trend and Pattern of Vital Registration System in Shiroro Local Government Area of Niger State, Nigeria

Authors: Aliyu Bello Mohammed

Abstract:

Vital registration, or the registration of vital events, is one of the three major sources of demographic data in Nigeria; the other two are the population census and the sample survey. Vital registration is judged to be an indispensable source of demographic data because it provides information on vital statistics and population trends between two census periods. Much of the literature, however, depicts vital registration in Nigeria as incapable of providing accurate data for the country. The study has both theoretical and practical significance. The trends and patterns of vital registration have not received adequate research interest in Sub-Saharan Africa in general and Nigeria in particular, which has created a gap in understanding the extent and consequences of the problem in the African sub-region. Practically, the study also captures the policy interventions of government and Non-Governmental Organizations (NGOs) that would help enlighten the public on the importance of vital registration in Nigeria. Furthermore, feasible policy strategies that will enhance the vital registration system in the society would emanate from the study. The study adopted a cross-sectional survey design and applied multi-stage sampling techniques to sample 230 respondents from the general public in the study area. The first stage involved splitting the local government into wards, the second stage involved selecting streets, and the third stage involved selecting households. In all, 6 wards were sampled for the study. The study utilized both primary and secondary sources of data. The primary sources of data were the questionnaire, focus group discussion (FGD) and in-depth interview (IDI) guides, while the secondary sources of data were journals and books, newspapers and magazines. Twelve FGD sessions with 96 study participants and five IDI sessions with the heads of vital registration facilities were conducted. The quantitative data were analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics such as tables, frequencies and percentages were employed in presenting and interpreting the data. Information from the qualitative data was transcribed and ordered into themes to ensure that the salient points of the responses were noted. The following conclusions were drawn from the study: the available vital registration facilities are not adequate and are not evenly distributed in the study area; the majority of people in the local government lack awareness and knowledge of the existence and importance of vital registration; registration centres are distant from people's residences; and most births in the area were not registered, while even among the few births that were registered, the majority were registered after the statutory period for registration. The study also reveals that socio-economic index, educational level and distance of facilities from residences are determinants of access to vital registration facilities. The study concludes by discussing the need for a reliable and accurate vital registration system if Nigeria's vision of becoming one of the top 20 economies in the world by 2020 is to be realized.

Keywords: trends, patterns, vital registration, assessment

Procedia PDF Downloads 253
323 Empowering Leaders: Strategies for Effective Management in a Changing World

Authors: Shahid Ali

Abstract:

Leadership and management are essential components of running successful organizations. Both concepts are closely related but serve different purposes in the overall management of a company. Leadership focuses on inspiring and motivating employees towards a common goal, while management involves coordinating and directing resources to achieve organizational objectives efficiently. Objectives of Leadership and Management: Inspiring and motivating employees: A key objective of leadership is to inspire and motivate employees to work towards achieving the organization’s goals. Effective leaders create a vision that employees can align with and provide the necessary motivation to drive performance. Setting goals and objectives: Both leadership and management play a crucial role in setting goals and objectives for the organization. Leaders create a vision for the future, while managers develop plans to achieve specific objectives within the given timeframe. Implementing strategies: Leaders come up with innovative strategies to drive the organization forward, while managers are responsible for implementing these strategies effectively. Together, leadership and management ensure that the organization’s plans are executed efficiently. Contributions of Leadership and Management: Employee Engagement: Effective leadership and management can increase employee engagement and satisfaction. When employees feel motivated and inspired by their leaders, they are more likely to be engaged in their work and contribute to the organization’s success. Organizational Success: Good leadership and management are essential for navigating the challenges and changes that organizations face. By setting clear goals, inspiring employees, and making strategic decisions, leaders and managers can drive organizational success. Talent Development: Leaders and managers are responsible for identifying and developing talent within the organization. By providing feedback, training, and coaching, they can help employees reach their full potential and contribute effectively to the organization. Research Type: The research on leadership and management is typically quantitative and qualitative in nature. Quantitative research involves the collection and analysis of numerical data to understand the impact of leadership and management practices on organizational outcomes. This type of research often uses surveys, questionnaires, and statistical analysis to measure variables such as employee satisfaction, performance, and organizational success. Qualitative research, on the other hand, involves exploring the subjective experiences and perspectives of individuals related to leadership and management. This type of research may include interviews, observations, and case studies to gain a deeper understanding of how leadership and management practices influence organizational behavior and outcomes. In conclusion, leadership and management play a critical role in the success of organizations. Through effective leadership and management practices, organizations can inspire and motivate employees, set goals, and implement strategies to achieve their objectives. Research on leadership and management helps to understand the impact of these practices on organizational outcomes and provides valuable insights for improving leadership and management practices in the future.

Keywords: empowering, leadership, management, adaptability

Procedia PDF Downloads 50
322 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry

Authors: Paulomi Polly Burey, Mark Lynch

Abstract:

It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such activities to an online cohort, and issues such as occupational health and safety in the students' learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment such that they 1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; 2) were safe for use at home by university students; and 3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components of the resources were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of supplementary inexpensive chemical items that they could purchase from hardware stores and supermarkets. The experiments used were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment, coupled with video laboratory demonstrations, were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environment were collated across the class through shared documents, so that the group could carry out statistical analysis and experience a full laboratory experience from their own homes. For the virtual laboratory component, students viewed a laboratory safety induction and were advised on good characteristics of a home laboratory space prior to carrying out their experiments. Following on from this activity, students observed laboratory demonstrations of the experimental series they would carry out in their learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped and hindered learning. It was found that the physical experiments helped students to understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning as it could be utilized for longer than a typical physical laboratory class, thus allowing further time for understanding techniques.

Keywords: chemistry, food science, future pedagogy, STEM education

Procedia PDF Downloads 168
321 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers

Authors: Navah Z. Ratzon, Rachel Shichrur

Abstract:

Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause substantial direct and indirect damage. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers, aged 27-69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and on the occupational therapy practice framework model that guides practice in Israel, adapted to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of the safe driving risk factors identified at prescreening (ergonomic, perceptual-cognitive and on-road driving data), with reference to the difficulties that each driver raised, and on providing coping strategies. The intervention was customized for each driver and included three sessions of two hours. The effectiveness of the intervention was tested using objective measures, namely In-Vehicle Data Recorders (IVDR) for monitoring natural driving data and traffic accident data before and after the intervention, and a subjective measure (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant change in the rate of IVDR perilous events before and after the intervention (t(17)=2.14, p=0.046). There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17)=2.11, p=0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the 'human factors/person' field was significant (performance: t=-2.30, p=0.04; satisfaction with performance: t=-3.18, p=0.009). The change in the 'driving occupation/tasks' field was not significant but showed a tendency toward significance (t=-1.94, p=0.07). No significant differences were found in driving environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers' driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among the healthy driver population and improving the well-being of these drivers. This study also demonstrates the use of advanced IVDR technology and enriches the knowledge of occupational therapists regarding the use of a wide variety of driving assessment tools for making best-practice decisions.
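
The reported t(17) statistics correspond to paired comparisons of pre- and post-intervention measures. A minimal paired t-test sketch on synthetic event rates is shown below; the IVDR data themselves are not reproduced in the abstract, so all numbers here are invented.

```python
# Illustrative sketch (synthetic data): paired t-test on IVDR perilous-event
# rates before and after the intervention, mirroring the t(17) statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre = rng.normal(12.0, 4.0, 18)           # events per 100 driving hours
post = pre - rng.normal(2.0, 3.0, 18)     # modest average improvement

t, p = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.3f}")
```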

Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention

Procedia PDF Downloads 346
320 Combining Patients' Pain Score Reports with Functionality Scales in Chronic Low Back Pain Patients

Authors: Ivana Knezevic, Kenneth D. Candido, N. Nick Knezevic

Abstract:

Background: While pain intensity scales remain a generally accepted assessment tool, and the numeric pain rating score is highly subjective, we nevertheless rely on them to make judgments about treatment effects. Misinterpretation of pain can lead practitioners to underestimate or overestimate the patient's medical condition. The purpose of this study was to analyze how the numeric rating pain scores given by patients with low back pain correlate with their functional activity levels. Methods: We included 100 consecutive patients with radicular low back pain (LBP) after Institutional Review Board (IRB) approval. Numeric rating scale (NRS) pain scores at rest and during movement, together with Oswestry Disability Index (ODI) questionnaire answers, were collected 10 times over 12 months. The ODI questionnaire targets a patient's activities and physical limitations as well as a patient's ability to manage stationary everyday duties. Statistical analysis was performed using SPSS software version 20. Results: The average duration of LBP was 14±22 months at the beginning of the study. Patients included in the study were between 24 and 78 years old (average 48.85±14); 56% were women and 44% men. Differences between ODI and pain scores in the range from -10% to +10% were considered 'normal'. Discrepancies in pain scores were graded as mild between -30% and -11% or +11% and +30%; moderate between -50% and -31% or +31% and +50%; and severe if differences exceeded -50% or +50%. Our data showed that pain scores at rest correlated well with ODI in 65% of patients. In 30% of patients mild discrepancies were present (negative in 21% and positive in 9%), 4% of patients had moderate discrepancies, and 1% severe discrepancies. 'Negative discrepancy' means that patients graded their pain scores much higher than their functional ability and most likely exaggerated their pain; 'positive discrepancy' means that patients graded their pain scores much lower than their functional ability and most likely underrated their pain. Comparisons between ODI and pain scores during movement showed normal correlation in only 39% of patients. Mild discrepancies were present in 42% (negative in 39% and positive in 3%), moderate in 14% (all negative), and severe in 5% (all negative) of patients; thus, 58% unknowingly exaggerated their pain during movement. Inconsistencies were equal in male and female patients (p=0.606 and p=0.928). Our results showed a negative correlation between patients' satisfaction and the degree of inconsistency in pain reporting. Furthermore, patients taking opioids showed more discrepancies in reported pain intensity scores than patients taking non-opioid analgesics or not taking medications for LBP (p=0.038), and there was a highly statistically significant correlation between morphine-equivalent doses and the level of discrepancy (p<0.0001). Conclusion: We have put emphasis on patient education in pain evaluation as a vital step toward accurate pain reporting, and we have shown a direct correlation with patients' satisfaction. Furthermore, we must identify other parameters to define our patients' chronic pain conditions, such as functionality scales and quality of life questionnaires, and should move away from an overly simplistic subjective rating scale.
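
The grading rule described above (normal within ±10%, then mild, moderate and severe bands) is mechanical enough to express directly. A sketch follows, assuming the NRS score is mapped to a percentage (e.g. NRS x 10), which the abstract implies but does not state.

```python
# Sketch of the discrepancy banding between functional disability (ODI %) and
# the pain report expressed as a percentage (assumed here to be NRS x 10).
def grade_discrepancy(pain_pct: float, odi_pct: float) -> str:
    d = odi_pct - pain_pct   # negative: pain rated above function (exaggerated)
    sign = "negative" if d < 0 else "positive"
    if abs(d) <= 10:
        return "normal"
    if abs(d) <= 30:
        return f"mild {sign} discrepancy"
    if abs(d) <= 50:
        return f"moderate {sign} discrepancy"
    return f"severe {sign} discrepancy"

# A pain report of 8/10 (80%) against an ODI of 40% suggests overrated pain.
print(grade_discrepancy(pain_pct=80.0, odi_pct=40.0))  # moderate negative
```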

Keywords: pain score, functionality scales, low back pain, lumbar

Procedia PDF Downloads 234
319 Motives for Reshoring from China to Europe: A Hierarchical Classification of Companies

Authors: Fabienne Fel, Eric Griette

Abstract:

Reshoring, whether back-reshoring or near-reshoring, is a quite recent phenomenon. Despite the economic and political interest of this topic, academic research questioning the determinants of reshoring remains rare. Our paper aims to contribute to filling this gap. In order to better understand the reasons for reshoring, we conducted a study among 280 French firms during spring 2016, three-quarters of which sourced, or source, in China. 105 firms in the sample have reshored all or part of their Chinese production or supply in recent years, and we aimed to establish a typology of the motives that drove them to this decision. We asked our respondents about the history of their Chinese supplies, their current reshoring strategies, and their motivations. Statistical analysis was performed with SPSS 22 and SPAD 8. Our results show that changes in commercial and financial terms with China are the first motive explaining the current reshoring movement from this country (this applies to 54% of our respondents). A change in corporate strategy is the second motive (30% of our respondents); here the reshoring decision follows a change in the company's strategy (upgrading, implementation of a CSR policy, or a 'lean management' strategy). The third motive (14% of our sample) is a mere correction of the initial offshoring decision, considered a mistake (underestimation of hidden costs, non-quality and non-responsiveness problems). Some authors emphasize that developing a short supply chain, involving geographic proximity between design and production, gives a competitive advantage to companies wishing to offer innovative products. Admittedly, 40% of our respondents indicate that this motive could have played a part in their decision to reshore, but this reason was not sufficient for any of them and is not an intrinsic motive for leaving Chinese suppliers. Having questioned our respondents about the importance given to the various problems leading them to reshore, we then performed a Principal Components Analysis (PCA), associated with an Ascending Hierarchical Classification (AHC) based on the Ward criterion, so as to identify more specific motivations. Three main classes of companies can be distinguished: the 'Cost Killers' (23% of the sample), which reshore their supplies from China only because of higher procurement costs, so as to find lower costs elsewhere; the 'Realists' (50% of the sample), which give equal weight to increasing procurement costs in China and, to a large extent, to the quality of their supplies, and which tend to take advantage of this changing environment to change their procurement strategy, seeking suppliers offering better quality and responsiveness; and the 'Voluntarists' (26% of the sample), which choose to reshore their Chinese supplies regardless of higher Chinese costs, in order to obtain better quality and greater responsiveness. We emphasize that while the main driver for reshoring from China is indeed higher local costs, it should not be regarded as an exclusive motivation: 77% of the companies in the sample are also seeking, sometimes exclusively, more responsive suppliers committed to quality, respect for the environment and intellectual property.
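
The PCA-plus-AHC pipeline described above translates directly into a few lines of code. The sketch below reproduces the structure of the analysis on synthetic survey ratings; the study itself used SPSS 22 and SPAD 8, not Python, so this is an illustration of the method rather than the authors' exact workflow.

```python
# Illustrative sketch: PCA followed by ascending hierarchical classification
# (Ward criterion), as used to derive the three classes of reshoring firms.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Synthetic survey: 105 firms rating the importance of 6 reshoring motives (1-5).
ratings = rng.integers(1, 6, size=(105, 6)).astype(float)

scores = PCA(n_components=3).fit_transform(ratings)   # reduce before clustering
Z = linkage(scores, method="ward")                    # Ward's minimum-variance AHC
classes = fcluster(Z, t=3, criterion="maxclust")      # cut the tree at 3 classes

for c in (1, 2, 3):
    share = (classes == c).mean()
    print(f"class {c}: {share:.0%} of firms")
```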

Keywords: China, procurement, reshoring, strategy, supplies

Procedia PDF Downloads 326
318 Moderate Electric Field and Ultrasound as Alternative Technologies to Raspberry Juice Pasteurization Process

Authors: Cibele F. Oliveira, Debora P. Jaeschke, Rodrigo R. Laurino, Amanda R. Andrade, Ligia D. F. Marczak

Abstract:

Raspberry is well known as a good source of phenolic compounds, mainly anthocyanins. Several studies have pointed out the importance of consuming these bioactive compounds, which is related to a decreased risk of cancer and cardiovascular diseases. The most consumed raspberry products are juices, yogurts, ice creams and jellies and, to ensure the safety of these products, raspberry is commonly pasteurized for enzyme and microorganism inactivation. Despite being efficient, the pasteurization process can lead to degradation reactions of the bioactive compounds, decreasing the products' health benefits. Therefore, the aim of the present work was to evaluate the application of moderate electric field (MEF) and ultrasound (US) technologies to the pasteurization of raspberry juice and to compare the results with the conventional pasteurization process. For this, phenolic compounds, anthocyanin content and physical-chemical parameters (pH, color changes, titratable acidity) of the juice were evaluated before and after the treatments. Moreover, microbiological analyses of aerobic mesophilic microorganisms, molds and yeasts were performed on samples before and after the treatments to verify the potential of these technologies to inactivate microorganisms. All pasteurization processes were performed in triplicate for 10 min, using a cylindrical Pyrex® vessel with a water jacket. The conventional pasteurization was performed at 90 °C using a hot water bath connected to the extraction cell. The US-assisted pasteurization was performed using 423 and 508 W cm⁻² (75 and 90% of ultrasound intensity). It is important to mention that during US application the temperature was kept below 35 °C; for this, the water jacket of the extraction cell was connected to a water bath with cold water. MEF-assisted pasteurization experiments were performed similarly to the US experiments, using 25 and 50 V. Control experiments were performed at the maximum temperature of the US and MEF experiments (35 °C) to evaluate only the effect of the aforementioned technologies on the pasteurization. The results showed that the phenolic compound concentration in the juice was not affected by US or MEF application. However, the US-assisted pasteurization performed at the highest intensity decreased the anthocyanin content by 33% (compared to the in natura juice). This result was possibly due to the cavitation phenomenon, which can lead to the formation and accumulation of free radicals in the medium; these radicals can react with anthocyanins, decreasing the content of these antioxidant compounds in the juice. Physical-chemical parameters did not present statistical differences between samples before and after the treatments. The microbiological analyses showed that all pasteurization treatments decreased the microorganism content by two logarithmic cycles. However, as the initial values were lower than 1000 CFU mL⁻¹, it was not possible to verify the full efficacy of each treatment. Thus, MEF and US were considered potential alternative technologies for the pasteurization process, since under the right conditions the application of these technologies decreased the microorganism content in the juice and did not affect the phenolic and anthocyanin content or the physical-chemical parameters. However, more studies are needed regarding the influence of MEF and US processes on microorganism inactivation.

Keywords: MEF, microorganism inactivation, anthocyanin, phenolic compounds

Procedia PDF Downloads 242
317 Safety and Maternal Anxiety in Mother's and Baby's Sleep: Cross-sectional Study

Authors: Rayanne Branco Dos Santos Lima, Lorena Pinheiro Barbosa, Kamila Ferreira Lima, Victor Manuel Tegoma Ruiz, Monyka Brito Lima Dos Santos, Maria Wendiane Gueiros Gaspar, Luzia Camila Coelho Ferreira, Leandro Cardozo Dos Santos Brito, Deyse Maria Alves Rocha

Abstract:

Introduction: The lack of regulation of the baby's sleep-wake pattern in the first years of life affects the health of thousands of women. Maternal sleep deprivation can trigger or aggravate psychosomatic problems such as depression, anxiety and stress that can directly influence maternal security, with consequences for the baby's and the mother's sleep. Such conditions can affect the family's quality of life and child development. Objective: To correlate maternal security with maternal state anxiety scores and with the mother's and baby's total sleep time. Method: Cross-sectional study carried out with 96 mothers of babies aged 10 to 24 months, accompanied by nursing professionals linked to a federal university in Northeast Brazil. Study variables were maternal security, maternal state anxiety scores, infant sleep latency and sleep time, and the total nocturnal sleep time of mother and infant. Maternal security was rated on a four-point Likert scale (1=not at all safe, 2=somewhat safe, 3=very safe, 4=completely safe). Maternal anxiety was measured by the State-Trait Anxiety Inventory, state-anxiety subscale, whose scores range from 20 to 80 points; the higher the score, the higher the anxiety level. Scores below 33 are considered mild; from 33 to 49, moderate; and above 49, high. For total nocturnal sleep time, values between 7 and 9 hours of sleep were considered adequate for mothers, and values between 9 and 12 hours for the baby, according to the guidelines of the National Sleep Foundation. For sleep latency, a time equal to or less than 20 min was considered adequate. It is noteworthy that the latency time and the nocturnal sleep time of the mother and the baby were obtained from the mother's subjective report. To correlate the data, Spearman's correlation was used in the statistical package R, version 3.6.3. Results: 96 women and babies participated, aged 22 to 38 years (mean 30.8) and 10 to 24 months (mean 14.7), respectively. The mean maternal security score was 2.89 (unsafe); mean maternal state anxiety scores were 43.75 (moderate anxiety). The babies' average sleep latency was 39.6 min (>20 min). The mean sleep times of the mother and baby were, respectively, 6 h 42 min and 8 h 19 min, both less than the recommended nocturnal sleep time. Maternal security was positively correlated with maternal state anxiety scores (rho=0.266, p=0.009) and negatively correlated with infant sleep latency (rho=-0.30, p=0.003). The baby's sleep time was positively correlated with the mother's sleep time (rho=0.46, p<0.001). Conclusion: The more secure the mothers considered themselves, the higher their anxiety scores and the shorter the baby's sleep latency. Also, the longer the baby slept, the longer the mother slept. Thus, interventions are needed to promote the quality and efficiency of sleep for both mother and baby.
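
The correlations above are Spearman rank correlations computed in R 3.6.3. For illustration, a minimal Python equivalent on synthetic sleep times is sketched below; the data are invented, so only the shape of the computation matches the study.

```python
# Illustrative sketch: Spearman's rank correlation, as reported above
# (the study used R 3.6.3; scipy.stats.spearmanr is a Python equivalent).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
baby_sleep = rng.normal(8.3, 1.2, 96)                    # hours, synthetic
mother_sleep = 0.5 * baby_sleep + rng.normal(2.5, 0.8, 96)

rho, p = stats.spearmanr(baby_sleep, mother_sleep)
print(f"rho = {rho:.2f}, p = {p:.4f}")   # a positive rho mirrors the reported 0.46
```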

Keywords: sleep, anxiety, infant, mother-child relations

Procedia PDF Downloads 102
316 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas

Authors: Michel Soto Chalhoub

Abstract:

Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to buildings of homogeneous material, such as steel, or of concrete placed under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to concrete damage, aging, or the level of materials quality control during construction. The overall results of the present analysis show that simplified code formulas for the fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. The recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging; however, we excluded this from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
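
The "eigenvalue matrix computation" mentioned above amounts to solving the generalized eigenvalue problem K phi = omega^2 M phi for the stiffness and mass matrices and taking the lowest mode. A minimal sketch for an idealized shear building follows; the storey mass and stiffness values are assumed for illustration and are not from the 151-building data set.

```python
# Illustrative sketch: fundamental period of an idealized shear building from
# the generalized eigenvalue problem K @ phi = omega^2 * M @ phi.
import numpy as np
from scipy.linalg import eigh

n = 5                                    # storeys (assumed)
m = 3.0e5                                # storey mass, kg (assumed)
k = 2.0e8                                # storey lateral stiffness, N/m (assumed)

M = m * np.eye(n)
K = np.zeros((n, n))
for i in range(n):                       # assemble tridiagonal shear stiffness
    K[i, i] = 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

omega2, _ = eigh(K, M)                   # eigenvalues omega^2, ascending
T1 = 2 * np.pi / np.sqrt(omega2[0])      # fundamental period, seconds
print(f"T1 = {T1:.2f} s")
```

This T1 is what a code formula of the form T = C * H^x approximates; the paper's point is that H alone is too coarse for reinforced concrete buildings of varying story height, plan, and concrete condition.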

Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra

Procedia PDF Downloads 232
315 Effect of Timing and Contributing Factors for Early Language Intervention in Toddlers with Repaired Cleft Lip and Palate

Authors: Pushpavathi M., Kavya V., Akshatha V.

Abstract:

Introduction: Cleft lip and palate (CLP) is a congenital condition which hinders effective communication due to associated speech and language difficulties. Expressive language delay (ELD) is a feature seen in this population and is influenced by factors such as the type and severity of CLP, the age at surgical and linguistic intervention, and the type and intensity of speech and language therapy (SLT). Since CLP is the most common congenital abnormality seen in Indian children, early intervention is a necessity and plays a critical role in enhancing these children's speech and language skills. The interaction between the timing of intervention and the factors which contribute to effective intervention by caregivers is an area which needs to be explored. Objectives: The present study attempts to determine the effect of the timing of intervention on the contributing maternal factors for effective linguistic intervention in toddlers with repaired CLP, with respect to the awareness, home training patterns, and speech and non-speech behaviors of the mothers. Participants: Thirty-six toddlers in the age range of 1 to 4 years, diagnosed with ELD secondary to repaired CLP, along with their mothers, served as participants. Group I (Early Intervention Group, EIG) included 19 mother-child pairs who came to seek SLT soon after corrective surgery, and Group II (Delayed Intervention Group, DIG) included 16 mother-child pairs who received SLT after the age of 3 years. Further, the groups were divided into groups A and B. Group A received 60 sessions of SLT from a speech-language pathologist (SLP), while Group B received 30 sessions of SLT from an SLP and 30 sessions from the mother alone, without SLP supervision. Method: The mothers were enrolled in the Early Language Intervention Program, and their awareness about CLP was assessed through a parental awareness questionnaire. The quality of home training was assessed through Mohite's Inventory. Subsequently, the speech and non-speech behaviors of the mothers were assessed using a mother's behavior checklist. Detailed counseling and orientation were provided to the mothers, and SLT was initiated for the toddlers. After 60 sessions of intensive SLT, the questionnaire and checklists were re-administered to determine the changes in scores between the pre- and post-test measurements. Results: The scores obtained under the different domains of the awareness questionnaire, Mohite's Inventory and the mother's behavior checklist were tabulated and subjected to statistical analysis. Since the data did not follow a normal distribution, the Mann-Whitney U test was conducted, which revealed no significant difference between Groups I and II, or between Groups A and B. Further, the Wilcoxon signed-rank test revealed that mothers had better awareness of issues related to CLP and improved home-training abilities post-orientation (p ≤ 0.05). A statistically significant difference was also noted for the speech and non-speech behaviors of the mothers (p ≤ 0.05). Conclusions: Extensive orientation and counseling helped mothers of both the EI and DI groups to improve their knowledge about CLP. Intensive SLT using focused stimulation and a parent-implemented approach enabled them to carry out the intervention effectively.
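
For readers who want the statistical skeleton of the Results section, a minimal sketch of the two nonparametric tests on synthetic scores follows; the study's actual scores are not reproduced in the abstract, so the numbers below are invented.

```python
# Illustrative sketch (synthetic data): the nonparametric tests applied in
# this study, on pre/post awareness scores for the two intervention groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
eig_post = rng.normal(22, 3, 19)        # early intervention group, post scores
dig_post = rng.normal(21, 3, 16)        # delayed intervention group, post scores

u, p_between = stats.mannwhitneyu(eig_post, dig_post)
print(f"Mann-Whitney U = {u:.1f}, p = {p_between:.3f}")     # between groups

pre = rng.normal(15, 3, 32)             # pooled pre-orientation scores
post = pre + rng.normal(5, 2, 32)       # improvement after orientation
w, p_within = stats.wilcoxon(pre, post)
print(f"Wilcoxon W = {w:.1f}, p = {p_within:.4f}")          # pre vs. post
```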

Keywords: awareness, cleft lip and palate, early language intervention program, home training, orientation, timing of intervention

Procedia PDF Downloads 122
314 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible system or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
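
As a concrete, if simplified, picture of non-deterministic tradespace evaluation: each candidate design is scored by a weighted multi-attribute utility whose single-attribute inputs are sampled by Monte Carlo rather than fixed at point values. The design names, weights and distributions below are entirely hypothetical and stand in for the abstract's sensor-system tradespace.

```python
# Illustrative sketch: non-deterministic multi-attribute tradespace evaluation.
# Each design gets a weighted multi-attribute utility (MAU), with attribute
# performance sampled by Monte Carlo instead of fixed point values.
import numpy as np

rng = np.random.default_rng(7)
weights = np.array([0.5, 0.3, 0.2])     # stakeholder weights: accuracy, latency, cost

# (mean, std) of normalized single-attribute utilities per design (assumed).
designs = {
    "lidar+imu":  [(0.80, 0.05), (0.70, 0.10), (0.40, 0.05)],
    "camera+imu": [(0.65, 0.10), (0.80, 0.05), (0.75, 0.05)],
    "radar-only": [(0.55, 0.05), (0.90, 0.05), (0.85, 0.05)],
}

for name, attrs in designs.items():
    samples = np.column_stack([np.clip(rng.normal(mu, sd, 10_000), 0, 1)
                               for mu, sd in attrs])
    mau = samples @ weights             # multi-attribute utility per sample
    print(f"{name:11s} utility: mean {mau.mean():.2f}, "
          f"5th pct {np.percentile(mau, 5):.2f}")
```

Comparing designs on the distribution of utility (for example, its 5th percentile rather than only its mean) is what distinguishes the non-deterministic approach from a deterministic point estimate.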

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 183
313 Unmasking Virtual Empathy: A Philosophical Examination of AI-Mediated Emotional Practices in Healthcare

Authors: Eliana Bergamin

Abstract:

This philosophical inquiry, influenced by the seminal works of Annemarie Mol and Jeannette Pols, critically examines the transformative impact of artificial intelligence (AI) on emotional caregiving practices within virtual healthcare. Rooted in the traditions of philosophy of care, philosophy of emotions, and applied philosophy, this study seeks to unravel nuanced shifts in the moral and emotional fabric of healthcare mediated by AI-powered technologies. Departing from traditional empirical studies, the approach embraces the foundational principles of care ethics and phenomenology, offering a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. At its core, this research addresses the introduction of AI-powered technologies that mediate emotional and care practices in the healthcare sector. Anchored in ethnographic research within a pioneering private healthcare company in the Netherlands, this critical philosophical inquiry provides a unique lens into the dynamics of AI-mediated emotional practices. The study employs in-depth, semi-structured interviews with virtual caregivers and care receivers alongside ongoing ethnographic observations spanning approximately two and a half months. Delving into the lived experiences of those at the forefront of this technological evolution, the research aims to unravel subtle shifts in the emotional and moral landscape of healthcare, critically examining the implications of AI in reshaping the philosophy of care and human connection in virtual healthcare. Inspired by Mol and Pols' relational approach, the study prioritizes the lived experiences of individuals within the virtual healthcare landscape, offering a deeper understanding of the intertwining of technology, emotions, and the philosophy of care. In the realm of philosophy of care, the research elucidates how virtual tools, particularly those driven by AI, mediate emotions such as empathy, sympathy, and compassion, the bedrock of caregiving. Focusing on emotional nuances, the study contributes to the broader discourse on the ethics of care in the context of technological mediation. In the philosophy of emotions, the investigation examines how the introduction of AI alters the phenomenology of emotional experiences in caregiving. Exploring the interplay between human emotions and machine-mediated interactions, the nuanced analysis discerns implications for both caregivers and care receivers, contributing to the evolving understanding of emotional practices in a technologically mediated healthcare environment. Within applied philosophy, the study transcends empirical observations, positioning itself as a reflective exploration of the moral implications of AI in healthcare. The findings are intended to inform ethical considerations and policy formulations, bridging the gap between technological advancements and the enduring values of caregiving. In conclusion, this focused philosophical inquiry aims to provide a foundational understanding of the evolving landscape of virtual healthcare, drawing on the works of Mol and Pols to illuminate the essence of human connection, care, and empathy amid technological advancements.

Keywords: applied philosophy, artificial intelligence, healthcare, philosophy of care, philosophy of emotions

Procedia PDF Downloads 58
312 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic

Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak

Abstract:

Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators measuring the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators which contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is the fact that they are calculated based on accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) its value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system and poor quality of production. In order to gain further insight into the topic, the paper presents results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic. The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the AMADEUS database (Analyse Major Databases from European Sources). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research – an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria. These include company-specific characteristics, such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by the return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard of the customers' requirements.
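
As an illustration of the quantitative side of the analysis, the sketch below computes the two ratios used in the study (return on equity and return on assets) and applies a non-parametric test to compare two hypothetical groups of firms. The figures are invented, not AMADEUS data.

import numpy as np
from scipy import stats

net_income = np.array([1.2, 0.8, -0.3, 2.1, 0.5])   # hypothetical, millions CZK
equity     = np.array([10.0, 6.5, 4.0, 12.0, 5.5])
assets     = np.array([25.0, 14.0, 9.0, 30.0, 11.0])

roe = net_income / equity   # return on equity
roa = net_income / assets   # return on assets

# Suppose the first three firms' managers cited a given negative factor
# and the last two did not; compare the ROE distributions of the groups.
u, p = stats.mannwhitneyu(roe[:3], roe[3:])
print("ROE:", np.round(roe, 3), "ROA:", np.round(roa, 3))
print(f"Mann-Whitney U = {u}, p = {p:.3f}")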

Keywords: business value, financial ratios, performance measurement, value drivers

Procedia PDF Downloads 222
311 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability

Authors: Kim Anema, Matthias Max, Chris Zevenbergen

Abstract:

For any flood risk manager, the 'safety paradox' has to be a familiar concept: low probability leads to a sense of safety, which leads to more investments in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. Therefore, it is important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always be the lagging point on the agenda. Preparedness programs fail because of lack of interest and urgency, policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected things, of which flooding is only one possibility. Variability like this is typically addressed with resilience, and we used the concept of Community Resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane floods (Australia) in 2011 and the Dresden floods (Germany) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both the authorities and the public, and in Dresden we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events were not fuelled by preparedness or proper planning. Instead, more important success factors in counteracting social impacts, such as demographic changes in neighborhoods and (non-)economic losses, were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, due to (i) the approachability of (local) authorities, (ii) a big group of 'secondary victims', and (iii) clear leadership. All three of these elements were amplified by the use of social media and/or web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with the authorities in a way that was effective and sped up recovery. The concept of community resilience is better suited than 'social adaptation' to deal with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life, which can be invested in before an event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields.

Keywords: community resilience, disaster response, social consequences, preparedness

Procedia PDF Downloads 352
310 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which makes it possible to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated by considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticle (for a fixed volume fraction of nanoparticles) for the different interphase models is compared to and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
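
The closed-form expressions of the method of conditional moments are beyond a short sketch, but the flavor of the computation can be shown with simpler classical estimates: the Mori-Tanaka bulk modulus for spherical particles and Levin's relation for the effective thermal expansion coefficient. Note that these stand-ins ignore the interphase, and hence the size effects that are the paper's focus; the material values are hypothetical epoxy/glass numbers.

def mori_tanaka_bulk(K_m, G_m, K_i, c):
    # Effective bulk modulus, spherical inclusions at volume fraction c
    denom = 1.0 + (1.0 - c) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0)
    return K_m + c * (K_i - K_m) / denom

def levin_cte(K_m, K_i, a_m, a_i, K_eff):
    # Levin's exact two-phase relation linking effective CTE to K_eff
    return a_m + (a_i - a_m) * (1.0 / K_eff - 1.0 / K_m) / (1.0 / K_i - 1.0 / K_m)

# Hypothetical epoxy matrix / glass particle values (moduli in GPa, CTE in 1/K)
K_m, G_m, a_m = 4.0, 1.5, 60e-6
K_i, a_i = 43.0, 8e-6

for c in (0.1, 0.3, 0.5):
    K_eff = mori_tanaka_bulk(K_m, G_m, K_i, c)
    a_eff = levin_cte(K_m, K_i, a_m, a_i, K_eff)
    print(f"c = {c:.1f}: K_eff = {K_eff:.2f} GPa, alpha_eff = {a_eff:.2e} 1/K")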

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 185
309 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus

Authors: Mahul Bhattacharyya, Niladri Sekhar Dash

Abstract:

The present research paper is an experimental attempt to investigate the nature of register variation in three major text domains, namely social, cultural, and political texts, collected from a corpus of Bangla printed mass media texts. The present study uses a moderate-sized corpus of Bangla mass media text containing nearly one million words, collected from different media sources such as newspapers, magazines, advertisements, and periodicals. The analysis of corpus data reveals that each text has certain lexical properties that not only control its identity but also mark its uniqueness across the domains. First, the subject domains of the texts are classified by two parameters, namely 'Genre' and 'Text Type'. Next, some empirical investigations are made to understand how the domains vary from each other in terms of lexical properties, covering both function and content words. Here, the method of comparative-cum-contrastive matching of lexical load across domains is invoked through word frequency counts to track how domain-specific words and terms may be marked as decisive indicators in the act of specifying textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains is unreliable for this purpose, as its lexicological identity has no bearing on the identification of subject domains. Therefore, it becomes necessary for language users to anchor upon certain domain-specific lexical items to recognize a text that belongs to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in the Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, and lexical collocation. In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types and relate them to the domains to which they actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors which will give language users better insight into the methods they employ in text comprehension, as well as provide a systematic frame for designing text identification strategies for language learners. The availability of a huge amount of Bangla media text data is useful for achieving accurate conclusions with a certain amount of reliability and authenticity. This kind of corpus-based analysis is quite relevant for a resource-poor language like Bangla, as no attempt has ever been made to understand how the structure and texture of Bangla mass media texts vary due to certain linguistic and extra-linguistic constraints that actively operate on specific text domains. Since mass media language is assumed to be the most 'recent representation' of the actual use of the language, this study is expected to show how Bangla news texts reflect the thoughts of the society and how they leave a strong impact on the thought process of the speech community.
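
The comparative matching of lexical load across domains can be illustrated with a toy keyness computation: count word frequencies per domain and rank words by how over-represented they are in one domain relative to another. The miniature English word lists below merely stand in for the Bangla corpus.

from collections import Counter

social    = "festival community family wedding community school".split()
political = "election party minister vote party parliament".split()

f_social, f_political = Counter(social), Counter(political)
n_social, n_political = sum(f_social.values()), sum(f_political.values())

def keyness(word, smoothing=0.5):
    # Relative-frequency ratio with additive smoothing so unseen words
    # do not produce divisions by zero
    p1 = (f_social[word] + smoothing) / (n_social + smoothing)
    p2 = (f_political[word] + smoothing) / (n_political + smoothing)
    return p1 / p2

vocab = set(social) | set(political)
for w in sorted(vocab, key=keyness, reverse=True):
    print(f"{w:12s} keyness(social vs. political) = {keyness(w):.2f}")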

Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation

Procedia PDF Downloads 174
308 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening up new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap over traditional agricultural techniques. In particular, the indoor farming sector will be the one that benefits most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities are based both on hardware development, such as automatic tools to perform different activities on soil and plants, and on research to introduce extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project on a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous or tele-operated multiplatform robots, with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach from the design, development, and implementation phases onward. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to verify the real effectiveness of the systems created and to evaluate any weaknesses so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepen the knowledge of these types of crops and guarantee the possibility of tracing every action performed on each individual plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 39
307 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it more difficult for the US to stay globally competitive without a shift in how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRSoft were exported as an Excel file, with 80 items each of the 2D Wedge and Dash (dash) models and the 3D Stick and Ball (BL) models. Complexity data were stored in an Excel workbook separated by participant ID, containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed; 99.7% of the ANN's predictions were accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
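
The model was built in RapidMiner Studio; as a rough scikit-learn analogue, the sketch below fits a gradient boosted trees classifier with the same 140 trees and maximum depth of 7 on synthetic stand-in features (optode signals, problem number, response time) and inspects feature importances. Nothing here uses the LENS data.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))           # e.g., 16 optodes + problem no. + RT
y = (X[:, 15] + 0.5 * X[:, 17] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))

# Feature importances hint at which predictors drive success, analogous
# to the optode #16 finding reported above.
top = np.argsort(model.feature_importances_)[::-1][:3]
print("top predictors (column indices):", top)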

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 119
306 Imaging Spectrum of Central Nervous System Tuberculosis on Magnetic Resonance Imaging: Correlation with Clinical and Microbiological Results

Authors: Vasundhara Arora, Anupam Jhobta, Suresh Thakur, Sanjiv Sharma

Abstract:

Aims and Objectives: Intracranial tuberculosis (TB) is one of the most devastating manifestations of TB and a challenging public health issue of considerable importance and magnitude the world over. This study elaborates on the imaging spectrum of neurotuberculosis on magnetic resonance imaging (MRI) in 29 clinically suspected cases from a tertiary care hospital. Materials and Methods: A prospective hospital-based evaluation of the MR imaging features of neurotuberculosis in 29 clinically suspected cases was carried out in the Department of Radio-diagnosis, Indira Gandhi Medical Hospital, from July 2017 to August 2018. MR images were obtained on a 1.5 T Magnetom Avanto machine and were analyzed to identify any abnormal meningeal enhancement or parenchymal lesions. Microbiological and biochemical CSF analysis was performed in radiologically suspected cases, and the results were compared with the imaging data. Clinical follow-up of the patients started on anti-tuberculous treatment was done to evaluate the response to treatment and clinical outcome. Results: The age range of patients in the study was 1 to 73 years. The mean age at presentation was 11.5 years. No significant difference in the distribution of cerebral tuberculosis was noted between the two genders. The imaging findings of neurotuberculosis were varied and non-specific, ranging from leptomeningeal enhancement and cerebritis to space-occupying lesions such as tuberculomas and tubercular abscesses. Complications presenting as hydrocephalus (n = 7) and infarcts (n = 9) were noted in a few of these patients. All 29 patients showed radiological suspicion of CNS tuberculosis, with meningitis alone observed in 11 cases, tuberculomas alone in 4 cases, and meningitis with parenchymal tuberculomas in 11 cases. A tubercular abscess and cerebritis were observed in one case each. Tuberculous arachnoiditis was noted in one patient. GeneXpert positivity was obtained in 11 of the 29 radiologically suspected patients; none of the patients showed culture positivity. The meningeal form of the disease alone showed the highest GeneXpert positivity rate (n = 5), followed by the combination of meningeal and parenchymal forms of the disease (n = 4). The parenchymal manifestation of the disease alone showed the lowest positivity rate (n = 3) with GeneXpert testing. All 29 patients were started on anti-tubercular treatment based on radiological suspicion of the disease, with clinical improvement observed in 27 treated patients. Conclusions: In our study, a higher incidence of neurotuberculosis was noted in the paediatric population, with a predominance of the meningeal form of the disease. GeneXpert positivity was low due to the paucibacillary nature of cerebrospinal fluid (CSF), with even lower positivity of CSF samples in the parenchymal form of the disease. MRI showed high accuracy in detecting CNS lesions in neurotuberculosis. Hence, it can be concluded that MRI plays a crucial role in diagnosis because of its inherent sensitivity and specificity and is an indispensable imaging modality. It caters to the need for early diagnosis, owing to the poor sensitivity of microbiological tests, more so in the parenchymal manifestation of the disease.

Keywords: neurotuberculosis, tubercular abscess, tuberculoma, tuberculous meningitis

Procedia PDF Downloads 169
305 Greener Minds: Understanding Students' Perceptions of Environmental Sustainability in Higher Education, Sultan Qaboos University

Authors: Aisha Alshdefat, Lina Shakman

Abstract:

Objective: With environmental sustainability (ES) emerging as a critical concern due to its global impact, higher education institutions play a vital role in promoting ES through curricula and campus operations. This study examines the perceptions, attitudes, and behaviors related to ES among students at Sultan Qaboos University, aiming to identify areas for improved integration of sustainability practices in higher education. Design: A descriptive cross-sectional study, conducted via an online questionnaire, examined perceptions and attitudes toward environmental sustainability among students at Sultan Qaboos University, Muscat, Oman. The survey instrument employed a 5-point Likert scale to assess six key areas: awareness, concern, attitude, willingness to participate, current behaviors, and recommendations for enhancing campus sustainability initiatives. A convenience sample of 200 students was initially targeted, with 157 students ultimately responding between September and November 2024. Eligible participants were undergraduate and graduate students who consented after being fully informed of the study objectives and design; those who withdrew or refused participation were excluded. Following ethical approval, data collection was carried out through Google Forms. SPSS Version 23 was used for descriptive and inferential analyses, including Pearson's correlation, the chi-square test, and Fisher's exact test, to explore associations among key variables. Findings: Preliminary analysis indicates that 68% of participants are familiar with core environmental sustainability (ES) concepts, including the Sustainable Development Goals (SDGs), and express high concern regarding environmental issues. However, only 47% report active involvement in campus-led ES initiatives, underscoring an engagement gap. Over 70% of respondents believe that sustainability should be prioritized as a university policy, and 62% expressed willingness to participate in additional ES-related programs. Moreover, 58% advocated for more sustainability-focused courses in their curriculum, suggesting that current offerings are insufficient. Statistical analysis revealed a significant positive correlation between ES awareness and willingness to engage in sustainable practices (p < 0.05). These findings highlight the need for expanded institutional efforts, including targeted programs and curriculum integration, to cultivate a more sustainability-centered culture among students. Conclusion: The results emphasize that while students demonstrate a strong foundational awareness of ES, greater institutional support is essential to transform this awareness into active engagement. More comprehensive integration of sustainability within academic programs and campus life could substantially enhance students' involvement and commitment to environmental stewardship.
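
The inferential tests named in the Design section could be run outside SPSS as follows; the sketch uses SciPy on simulated Likert responses for 157 hypothetical students, so the numbers it prints bear no relation to the study's findings.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
awareness   = rng.integers(1, 6, size=157)                     # 5-point Likert
willingness = np.clip(awareness + rng.integers(-1, 2, size=157), 1, 5)

r, p_r = stats.pearsonr(awareness, willingness)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")

# 2x2 table: high awareness (>= 4) vs. active involvement in ES initiatives
involved = rng.random(157) < 0.47
table = np.array([[np.sum((awareness >= 4) & involved),
                   np.sum((awareness >= 4) & ~involved)],
                  [np.sum((awareness < 4) & involved),
                   np.sum((awareness < 4) & ~involved)]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")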

Keywords: environmental sustainability, higher education, students, perceptions, Sultan Qaboos University

Procedia PDF Downloads 8
304 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques used to spatially interpolate values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANNs have become a popular tool for prediction as they eliminate certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, in the Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, with 59 actual and 177 virtual values, and with 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples. For example, the nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples to achieve a specified level of accuracy.
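
A simplified sketch of the sample reduction idea: train a network on a subset of real samples, generate virtual values at unsampled locations, and interpolate the combined set. Gaussian process regression stands in here for the kriging implementation, and the coordinates and nitrogen values are synthetic.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(42)
xy_real = rng.uniform(0, 100, size=(118, 2))                 # sampled locations
n_real = 20 + 0.3 * xy_real[:, 0] + rng.normal(0, 2, 118)    # nitrogen values

# 1) Train an ANN on the real samples
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(xy_real, n_real)

# 2) Predict virtual samples at new, unsampled locations
xy_virtual = rng.uniform(0, 100, size=(118, 2))
n_virtual = ann.predict(xy_virtual)

# 3) Interpolate the combined 236-point set (GP regression as a kriging stand-in)
xy_all = np.vstack([xy_real, xy_virtual])
n_all = np.concatenate([n_real, n_virtual])
krige = GaussianProcessRegressor().fit(xy_all, n_all)
grid = np.array([[x, y] for x in range(0, 101, 20) for y in range(0, 101, 20)])
print(krige.predict(grid)[:5])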

Keywords: artificial neural network, kriging, macronutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 70
303 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with attention deficit/hyperactivity disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality especially is affected by deficient acoustical inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday life situations are multimodal. A complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes between the visual and auditory modalities lead, in the case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females; age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn: 5.83%, controls Mdn: 44.2%, U = 160.5, p = 0.022, r = -0.34). When ADHD patients did integrate phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, controls Mdn: 582 ms, U = 41.0, p < 0.001, r = -0.56). In the functional connectivity analysis, the middle temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
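
The between-group comparison and its rank-based effect size r = Z / sqrt(N) can be reproduced along the following lines; the susceptibility percentages below are invented stand-ins for the study's measurements.

import numpy as np
from scipy import stats

adhd     = np.array([0, 2, 5, 6, 8, 10, 12, 15, 20, 25, 30, 3])   # % McGurk
controls = np.array([20, 30, 35, 40, 44, 50, 55, 60, 25, 45, 48])

u, p = stats.mannwhitneyu(adhd, controls, alternative="two-sided")

# Convert U to a Z score (normal approximation, no tie correction),
# then to the effect size r = Z / sqrt(N)
n1, n2 = len(adhd), len(controls)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu_u) / sigma_u
r = z / np.sqrt(n1 + n2)
print(f"U = {u}, p = {p:.3f}, r = {r:.2f}")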

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 127
302 The ‘Fun, Move, Play’ Project: Qualitative and Quantitative Findings from Irish Primary School Children (6-8 Years), Parents and Teachers

Authors: Jemma McGourty, Brid Delahunt, Fiona Hackett, Sharon Courtney, Richard English, Graham Russell, Sinéad O’Connor

Abstract:

Fundamental Movement Skills (FMS) mastery is considered essential for children's ongoing, meaningful engagement in Physical Activity (PA). There has been a dearth of Irish research on baseline FMS, and on their development by means of intervention, in young primary school children. In addition, as children's participation in PA is heavily influenced by both parents and teachers, it is imperative to understand their attitudes and perceptions towards PA participation and its promotion in children. The 'Fun, Move, Play' Project investigated the effect of a 6-week play-based PA intervention on primary school children's (aged 6-8 years) FMS while also exploring the attitudes and perceptions of their parents and teachers towards PA participation. The FMS intervention utilised a pre-post quasi-experimental design to determine the effect of the 6-week play-based PA intervention (devised from the iCoach Kids Programme) on the FMS of 176 primary school children (90 girls and 86 boys; M = 7.2 years; SD = 0.48). Objective measures of 7 FMS (run, skip, vertical jump, static balance, stationary dribble, catch, kick) were made using a combination of the TGMD-2 and Get Skilled, Get Active resources. One hundred parents (87 mothers; 13 fathers; M = 36 years; SD = 5.45) and 90 teachers (67 females; 23 males) completed surveys investigating their attitudes and perceptions towards PA participation. In addition, 19 of these parents and 9 of these teachers participated in semi-structured qualitative interviews to explore, in more depth, their views and perceptions of PA participation. Both the FMS data set and the survey responses were analysed using SPSS version 23, with appropriate statistical tests. A thematic analysis framework was used to analyse the qualitative findings. A significant improvement was observed in the children's overall FMS score pre-post intervention (t = 16.67; df = 175; p < 0.001), and there were also significant improvements in each of the seven individual FMS measured, pre-post intervention. Findings from the parent surveys and interviews indicated that parents had positive attitudes towards PA, viewed it as important and supported their child's PA participation. However, a lack of knowledge regarding the amount and intensity of PA that children should participate in emerged as a recurrent finding. Also, there was a significant positive correlation between the PA levels of parents and their children (r = .41; n = 100; p < .001). Arising from the teachers' surveys and interviews was a positive attitude towards PA and the impact it has on a child's health and well-being. Teachers also reported feeling more confident teaching certain aspects of the PE curriculum (games and sports) compared to others (gymnastics, dance), where they appreciated working with specialist practitioners. Conclusion: A short-term PA intervention has a positive effect on children's FMS. While parents are supportive of their child's PA participation, there is a knowledge gap regarding national PA guidelines for children. Teachers appreciate the importance of PA for children but face a number of challenges in its implementation and promotion.
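
For illustration, the two main quantitative results (the pre/post paired t-test and the parent-child PA correlation) could be computed as in the sketch below, which uses simulated scores in place of the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre = rng.normal(30, 5, size=176)            # overall FMS score, pre-intervention
post = pre + rng.normal(4, 2, size=176)      # assumed improvement after 6 weeks

t, p = stats.ttest_rel(post, pre)
print(f"paired t({len(pre) - 1}) = {t:.2f}, p = {p:.4g}")

parent_pa = rng.normal(150, 40, size=100)    # hypothetical weekly PA minutes
child_pa = 0.4 * parent_pa + rng.normal(60, 30, size=100)
r, p_r = stats.pearsonr(parent_pa, child_pa)
print(f"parent-child PA correlation r = {r:.2f}, p = {p_r:.4g}")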

Keywords: fundamental movement skills, parents' attitudes to physical activity, short-term intervention, teachers' attitudes to physical activity

Procedia PDF Downloads 179
301 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to the correct signalization of non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process; as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
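
To make the synchronization step concrete, the sketch below implements generic dynamic time warping in NumPy and uses the warping path to map a longer batch trajectory onto a reference trajectory of fixed length. This is plain DTW, not the specific KMT variant the study recommends, and the sine-wave "temperature" profiles are synthetic.

import numpy as np

def dtw_path(x, y):
    # Cumulative-cost matrix with a squared-difference local cost
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path from (n, m) to (1, 1)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

ref   = np.sin(np.linspace(0, 3, 100))          # reference temperature profile
batch = np.sin(np.linspace(0, 3, 130)) + 0.02   # longer, slightly offset batch

path = dtw_path(batch, ref)
# Synchronize: for each reference index, average the batch points mapped to it
synced = np.array([np.mean([batch[i] for i, j in path if j == k])
                   for k in range(len(ref))])
print(len(batch), "points warped to", len(synced))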

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 167