901 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. The course is designed to prepare students for the mathematics courses that are essential for engineering degrees, referred to in this research as Math1, Math2, and Math3. The research proposes to build a model that can accurately predict a student's activity and academic progress based on performance in the three essential mathematics courses. Additionally, a model is needed to forecast the influence of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling with the Generalised Linear Model. The dataset comprises information on 5135 engineering students and 12 characteristics based on activity and course performance. Two models are fitted to binomially distributed response data using the R programming language. Model 1 retains variables whose p-values are below 0.05, and Model 2 uses the stepAIC function to remove variables until the lowest AIC score is reached. After Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, while the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, a student's engagement in school activities continues for three years after passing the Introductory Mathematical Course, because such students have successfully completed the Math1 and Math2 courses. Passing the Math3 course has no effect on student activity. Concerning academic progress, the best fit is Model 1.
It has an AUC of 0.56 and an accuracy of 91%. This model indicates that if students pass the three first-year courses, they progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect student activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. It shows that passing the Introductory Mathematical Course helps students pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they stay active for three years after taking the Introductory Mathematical Course and continue following the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in this research do not consider the time students took to pass the three mathematics courses, but they can successfully assess courses in the university curriculum.
Keywords: machine-learning, engineering, university, education, computational models
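The model-selection step this abstract describes (fitting a binomial GLM and dropping predictors by AIC, as R's stepAIC does) can be sketched in Python. This is a minimal illustration with invented toy data, not the study's dataset or code; the gradient-descent fit merely stands in for R's IRLS estimator.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Plain gradient-descent logistic regression; w[0] is the intercept."""
    d = len(X[0])
    n = len(X)
    w = [0.0] * (d + 1)
    for _ in range(iters):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += p - yi
            for j, xj in enumerate(xi):
                grad[j + 1] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def aic(w, X, y):
    """AIC = 2k - 2*log-likelihood for a binomial (logistic) model."""
    ll = 0.0
    for xi, yi in zip(X, y):
        z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
        p = min(max(1.0 / (1.0 + math.exp(-z)), 1e-12), 1 - 1e-12)
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return 2 * len(w) - 2 * ll

# Toy data: pass/fail driven by the first predictor; the second is noise.
random.seed(1)
X_full = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(300)]
y = [1 if x[0] + random.gauss(0, 0.5) > 0 else 0 for x in X_full]
X_red = [[x[0]] for x in X_full]

aic_full = aic(fit_logistic(X_full, y), X_full, y)
aic_red = aic(fit_logistic(X_red, y), X_red, y)
# Backward elimination keeps whichever model has the lower AIC.
best = "reduced" if aic_red < aic_full else "full"
```

Stepwise elimination simply repeats this comparison, removing one variable at a time while the AIC keeps decreasing.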
900 Biodeterioration of Historic Parks of UK by Algae
Authors: Syeda Fatima Manzelat
Abstract:
This chapter investigates the biodeterioration of parks in the UK caused by lichens, focusing on Campbell Park and Great Linford Manor Park in Milton Keynes. The study first isolates and identifies potent biodeteriogens responsible for potential biodeterioration in these parks, enumerating and recording the different classes and genera of lichens known for their biodeteriorative properties. It then examines the implications of lichens for biodeterioration at historic sites within these parks, considering impacts on historic structures, the environment, and associated health risks. Conservation strategies and preventive measures are discussed before concluding. Lichens, characterized by a symbiotic association between a fungus and an alga, thrive on various surfaces including building materials, soil, rock, wood, and trees. The fungal component provides structure and protection, while the algal partner performs photosynthesis. Lichens collected from the park sites, such as Xanthoria, Cladonia, and Arthonia, were observed affecting historic walls, objects, and trees. Their biodeteriorative impacts were visible to the naked eye, contributing to aesthetic and structural damage. The study highlights the role of lichens as bioindicators of pollution, sensitive to changes in air quality. The presence and diversity of lichens provide insights into the air quality and pollution levels in the parks. However, lichens also pose health risks, with certain species causing respiratory issues, allergies, skin irritation, and other toxic effects in humans and animals. Conservation strategies discussed include regular monitoring, biological and chemical control methods, physical removal, and preventive cleaning. The study emphasizes the importance of a multifaceted, multidisciplinary approach to managing lichen-induced biodeterioration.
Future management practices could involve advanced techniques such as eco-friendly biocides and self-cleaning materials to effectively control lichen growth and preserve historic structures. In conclusion, this chapter underscores the dual role of lichens as agents of biodeterioration and indicators of environmental quality. Comprehensive conservation management approaches, encompassing monitoring, targeted interventions, and advanced conservation methods, are essential for preserving the historic and natural integrity of parks like Campbell Park and Great Linford Manor Park.
Keywords: biodeterioration, historic parks, algae, UK
899 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper addresses the problem of multi-target search in a Global Positioning System (GPS)-denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This is because robots in a swarm need to share information about their locations and the signals received from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever robots come within communication range, making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy-walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their own coordinate frames, based on which we unify their coordinate frames. They can then share information about areas already explored, their surroundings, and target signals from their locations, and make decisions about future movement based on the search algorithm.
During exploration there can be several small groups of robots, each with its own coordinate system, but eventually all robots are expected to fall under one global coordinate frame in which they can communicate information about the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in real-world scenarios without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), run without global information, still achieves results comparable to basic PSO working with GPS. In the full paper, we plan to present a comparison of different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms for GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
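The frame-unification step this abstract describes can be illustrated in 2D from mutual range-and-bearing observations. This is a sketch under the paper's stated sensor assumptions, not the authors' implementation; the function names are ours.

```python
import math

def unify_frames(obs_ab, obs_ba):
    """Given mutual observations between robots A and B, return the rigid
    transform (theta, tx, ty) mapping points in B's frame into A's frame.
    obs_ab = (range, bearing) of B as seen by A, in A's frame;
    obs_ba = (range, bearing) of A as seen by B, in B's frame."""
    r_ab, b_ab = obs_ab
    r_ba, b_ba = obs_ba
    # B's position expressed in A's frame
    bx = r_ab * math.cos(b_ab)
    by = r_ab * math.sin(b_ab)
    # The direction from B back to A is b_ab + pi in A's frame and b_ba in
    # B's frame; the difference is the relative rotation between frames.
    theta = (b_ab + math.pi - b_ba) % (2 * math.pi)
    return theta, bx, by

def to_frame_a(point_b, transform):
    """Map a point expressed in B's frame into A's frame."""
    theta, tx, ty = transform
    x, y = point_b
    return (tx + x * math.cos(theta) - y * math.sin(theta),
            ty + x * math.sin(theta) + y * math.cos(theta))
```

Once each robot can map the other's map into its own frame, explored-area and target-signal information become directly comparable, which is all the swarm search update needs.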
898 Surgical Hip Dislocation of Femoroacetabular Impingement: Survivorship and Functional Outcomes at 10 Years
Authors: L. Hoade, O. O. Onafowokan, K. Anderson, G. E. Bartlett, E. D. Fern, M. R. Norton, R. G. Middleton
Abstract:
Aims: Femoroacetabular impingement (FAI) was first recognised as a potential driver of hip pain at the turn of the last millennium. While there is an increasing trend towards surgical management of FAI by arthroscopic means, open surgical hip dislocation and debridement (SHD) remains the gold standard of care in terms of reported outcome measures (1). Long-term functional and survivorship outcomes of SHD as a treatment for FAI have yet to be sufficiently reported in the literature. This study sets out to help address this imbalance. Methods: We undertook a retrospective review of our institutional database for all patients who underwent SHD for FAI between January 2003 and December 2008. A total of 223 patients (241 hips) were identified and underwent a ten-year review with a standardised radiograph and a patient-reported outcome measures questionnaire. The primary outcome measure of interest was survivorship, defined as progression to total hip arthroplasty (THA). Negative predictive factors were analysed. Secondary outcome measures of interest were survivorship to further (non-arthroplasty) surgery, functional outcomes as reflected by patient-reported outcome measure (PROM) scores, and whether a learning curve could be identified. Results: The final cohort consisted of 131 females and 110 males, with a mean age of 34 years. The overall native hip joint survival rate was 85.4% at ten years. Those who underwent THA were significantly older at initial surgery and had radiographic evidence of preoperative osteoarthritis and pre- and post-operative acetabular undercoverage. In those who had not progressed to THA, the average Non-Arthritic Hip Score and Oxford Hip Score at ten-year follow-up were 72.3% and 36/48, respectively, and 84% still deemed their surgery worthwhile. A learning curve was found to exist that was predicated on case selection rather than surgical technique.
Conclusion: This is only the second study to evaluate the long-term outcomes (beyond ten years) of SHD for FAI and the first outside the originating centre. Our results suggest that, with correct patient selection, this remains an operation with worthwhile outcomes at ten years. How the results of open surgery compare to those of arthroscopy remains to be answered. While these results precede the advent of collision software modelling tools, the data help set a benchmark for future comparison of other techniques' effectiveness at the ten-year mark.
Keywords: femoroacetabular impingement, hip pain, surgical hip dislocation, hip debridement
897 Engage, Connect, Empower: Agile Approach in the University Students' Education
Authors: D. Bjelica, T. Slavinski, V. Vukimrovic, D. Pavlovic, D. Bodroza, V. Dabetic
Abstract:
The traditional methods and techniques used in higher education may significantly shape university students' perception of the quality of the teaching process, and students' satisfaction with the university experience may be affected by the educational approaches chosen. Contemporary project management trends recognise the benefits of agile approaches, so modern practice highlights their usage, especially in the IT industry. A key research question concerns the possibility of applying agile methods in youth education. Because agile methodology centres on iterative, incremental delivery of results, its employment in education could be remarkably fruitful. This paper demonstrates the application of the agile concept in university students' education through the continuous delivery of student solutions. Based on the fundamental values and principles of the Agile Manifesto, the paper analyses students' performance and lessons learned in their encounter with the agile environment. The research is based on qualitative and quantitative analysis that includes sprints, i.e., the preparation and realisation of student tasks in shorter iterations. The performance of student teams is monitored across iterations, as is the process of adaptive planning and realisation. Grounded theory methodology has been used in this research, as well as descriptive statistics and the Mann-Whitney and Kruskal-Wallis tests for group comparison. The constructs of the model are developed through qualitative research, then validated through a pilot survey, and eventually tested as a concept in the final survey. The paper highlights the variability of educational curricula based on university students' feedback, collected at the end of every sprint, and points to inconsistencies in students' satisfaction depending on the approaches applied in education.
The value delivered by the lecturers will also be continuously monitored and prioritised according to students' requests. The minimum viable product, as the early delivery of results, is particularly emphasised in the implementation process. The paper offers both theoretical and practical implications. This research contains lessons that may be applied by educational institutions in curriculum creation processes, or by lecturers in curriculum design and teaching. They can also be beneficial for increasing university students' satisfaction with teaching styles, gained knowledge, and educational content.
Keywords: academic performances, agile, higher education, university students' satisfaction
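For the group comparisons this abstract mentions, the Mann-Whitney rank-sum statistic can be computed as follows. This is a generic illustration of the statistic itself, not the authors' analysis code; in practice a statistics package would also supply p-values.

```python
def mann_whitney_u(a, b):
    """U statistic for sample `a` against `b`, with average ranks for ties.
    U ranges from 0 to len(a)*len(b); values near either extreme suggest
    the two groups differ in location."""
    pooled = sorted(a + b)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2   # average rank of the tie run
        i = j
    r_a = sum(rank_of[v] for v in a)
    # U = rank sum of `a` minus its minimum possible rank sum
    return r_a - len(a) * (len(a) + 1) / 2
```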
896 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contaminated sites like landfills often pose significant risks to receptors such as surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing the need to protect them from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, after which aquifer parameter values were estimated using conventional curve-matching techniques. The hydraulic conductivity values estimated from individual observation wells closely agree with tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate contaminant concentrations over space within the river.
The modeling results helped demonstrate that because of natural attenuation, the landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
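The discharge and mass-loading estimate this abstract describes follows directly from Darcy's law; a minimal sketch with invented parameter values (not the study's data) is:

```python
def darcy_discharge(k, gradient, area):
    """Volumetric groundwater discharge Q = K * i * A (Darcy's law).
    k: hydraulic conductivity (m/day); gradient: dimensionless hydraulic
    gradient; area: cross-sectional flow area (m^2). Returns m^3/day."""
    return k * gradient * area

def mass_loading(q, concentration):
    """Daily contaminant mass load (g/day) from discharge (m^3/day) and
    concentration (g/m^3, numerically equal to mg/L)."""
    return q * concentration

# Hypothetical values: K = 8 m/day, i = 0.002, shoreline flow area of
# 5000 m^2, and a leachate indicator concentration of 12 mg/L.
q = darcy_discharge(8.0, 0.002, 5000.0)   # 80 m^3/day
load = mass_loading(q, 12.0)              # 960 g/day
```

In the study itself K and i come from the tidally filtered pumping-test analysis; the point here is only the arithmetic chain from aquifer parameters to daily load.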
895 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of an electric field in an electrical mobility spectrometer (EMS) reveal the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, the electric field, and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and thus to calculate the signal reported by each electrometer. Based on the output signals (currents resulting from particles striking the detecting rings and transferring their charges), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the system's capability to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the "average amount of information contained in an event, sample or character extracted from a data stream".
Evaluating the responses (signals) obtained from various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of injected particles was the modified configuration. It was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations, and predicted size distributions were compared to the exact size distributions.
Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, Von Neumann entropy
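As a rough illustration of the entropy benchmark this abstract describes, the Shannon entropy of a normalised transfer matrix can be computed as below. This is our own simplified proxy, treating the flattened non-negative matrix as a probability distribution; it is not the paper's exact Von Neumann formulation.

```python
import math

def shannon_entropy(weights):
    """Shannon entropy (bits) of non-negative weights, normalised to sum 1."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

def transfer_matrix_entropy(t_matrix):
    """Entropy of the instrument's transfer matrix, flattened row-wise."""
    return shannon_entropy([v for row in t_matrix for v in row])

# A matrix that spreads response across channels carries more entropy
# than one concentrating all response in a single element.
spread = [[1.0, 1.0], [1.0, 1.0]]
concentrated = [[1.0, 0.0], [0.0, 0.0]]
```

Under this proxy, the uniform matrix scores 2 bits and the concentrated one 0, mirroring the abstract's observation that the highest-entropy ring configuration carried the most information about the injected size distribution.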
894 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide
Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh
Abstract:
Membrane treatment for desalination and wastewater treatment is one of the promising routes to affordable clean water. It is a developing technology throughout the world and is considered the most effective and economical method available. However, the limitations of membranes' mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes has been the focus of most studies in the water treatment and desalination sector, aiming at new materials that can improve separation efficiency while reducing membrane fouling, the most important challenge in this field. Graphene oxide (GO) is one of the materials recently investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. GO with a high oxidation degree was synthesized using a modified Hummers' method and characterized using different analytical techniques, including Fourier transform infrared spectroscopy with a universal attenuated total reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. CHNSO analysis showed a high oxidation degree of GO, represented by its oxygen content (50 wt.%). Ultrafiltration PSF membranes incorporating GO were then fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM), which showed a clear effect of GO on the physical structure and morphology of PSF. The water contact angle of the membranes was measured and showed better hydrophilicity of the GO membranes compared to pure PSF, caused by the hydrophilic nature of GO. The separation properties of the prepared membranes were investigated using a cross-flow membrane system. Antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants.
It was found that GO-based membranes exhibit higher antifouling properties than pure PSF. When using BSA, the flux recovery ratio (FRR) increased from 65.4 ± 0.9% for pure PSF to 84.0 ± 1.0% with a loading of 0.05 wt.% GO in PSF. When using HA as the model foulant, the FRR increased from 87.8 ± 0.6% to 93.1 ± 1.1% with 0.02 wt.% of GO in PSF. The pure water permeability (PWP) decreased with GO loading, from 181.7 L.m⁻².h⁻¹.bar⁻¹ for pure PSF to 181.1 and 157.6 L.m⁻².h⁻¹.bar⁻¹ with 0.02 and 0.05 wt.% GO, respectively. It can be concluded from the obtained results that incorporating a low loading of GO could enhance the antifouling properties of PSF, hence improving its lifetime and reuse.
Keywords: antifouling properties, GO based membranes, hydrophilicity, polysulfone, ultrafiltration
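The flux recovery ratio quoted above is a simple ratio of water fluxes measured before fouling and after cleaning. A sketch with hypothetical flux values is below; the abstract reports only the resulting percentages, so the absolute fluxes here are invented.

```python
def flux_recovery_ratio(j_initial, j_recovered):
    """FRR (%) = pure-water flux after the fouling-and-cleaning cycle
    divided by the initial pure-water flux, in consistent units such
    as L/(m^2.h)."""
    return 100.0 * j_recovered / j_initial

# Hypothetical fluxes chosen so the FRR matches the 84% reported for
# 0.05 wt.% GO fouled with BSA.
frr = flux_recovery_ratio(180.0, 151.2)
```

A higher FRR means fouling was more reversible, which is the sense in which the GO membranes outperform pure PSF.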
893 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD), or δ-R, curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper presents laboratory experimental data characterizing the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated using single-edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack-length-to-specimen-depth ratios were prepared and tested in single-edge notch bending. ASTM E1820 and BS7448 provide testing procedures to construct the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that depends on a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth, which might be of interest for developing more accurate strain-based damage models.
The results of laboratory testing in this study offer a valuable database for developing and validating damage models able to predict crack propagation in pipeline steel, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
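Resistance curves of the kind this abstract fits are commonly modelled with a power law, J = C(Δa)^m, estimated by least squares in log-log space. The sketch below uses synthetic data, since the study's measurements are not reproduced here, and is not the authors' "improved" regression form.

```python
import math

def fit_power_law(da, j):
    """Least-squares fit of J = C * da**m on log-log axes.
    da: crack extensions (mm); j: toughness values. Returns (C, m)."""
    xs = [math.log(x) for x in da]
    ys = [math.log(v) for v in j]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - m * mx)
    return c, m

# Synthetic resistance data following J = 300 * da**0.45 exactly.
da = [0.2, 0.5, 1.0, 1.5, 2.0]
j = [300.0 * x ** 0.45 for x in da]
c, m = fit_power_law(da, j)
```

With DIC-derived (Δa, CTOD or J) pairs substituted for the synthetic data, the fitted (C, m) define the resistance function used in the integrity assessment.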
892 Investigating the Impact of Enterprise Resource Planning System and Supply Chain Operations on Competitive Advantage and Corporate Performance (Case Study: Mamot Company)
Authors: Mohammad Mahdi Mozaffari, Mehdi Ajalli, Delaram Jafargholi
Abstract:
The main purpose of this study is to investigate the impact of an ERP (Enterprise Resource Planning) system and SCM (Supply Chain Management) on the competitive advantage and performance of Mamot Company. Information was collected through library studies and field research. A questionnaire containing 38 questions was used to collect the data needed to determine the relationships between the research variables. The research is applied in orientation. The statistical population of this study consists of managers and experts who are familiar with SCM systems and ERP, numbering 210. Simple random sampling was used, with a sample size of 136 people. The reliability of the distributed questionnaire was evaluated using Cronbach's alpha; its value exceeded 70%, confirming reliability. Face validity was used to assess the questionnaire, and its validity was confirmed since the impact score exceeded 1.5. Univariate analysis was used for measures of central tendency, dispersion, and deviation from symmetry, giving a general picture of the population. Bivariate analysis was used to test the hypotheses and to measure the correlation coefficients between variables using structural equations in SPSS. Finally, multivariate analysis with statistical techniques related to SPLS structural equations was used to determine the effects of the independent variables on the dependent variables and the structural relationships between them. The results of testing the research hypotheses indicate that: 1. Supply chain management practices have a positive impact on the competitive advantage of the Mamot industrial complex. 2.
Supply chain management practices have a positive impact on the performance of the Mamot industrial complex. 3. The enterprise resource planning system has a positive impact on the performance of the Mamot industrial complex. 4. The enterprise resource planning system has a positive impact on Mamot's competitive advantage. 5. Competitive advantage has a positive impact on the performance of the Mamot industrial complex. 6. The enterprise resource planning system has a positive impact on the supply chain management of the Mamot industrial complex. These results indicate that the enterprise resource planning system and supply chain management affect the competitive advantage and corporate performance of Mamot Company.
Keywords: enterprise resource planning, supply chain management, competitive advantage, Mamot company performance
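The reliability check this abstract mentions (Cronbach's alpha above 70%) can be computed as follows. The item scores here are invented toy data, not the study's questionnaire responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one list per questionnaire item, each holding that item's
    scores across all respondents (sample variance, ddof=1)."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_var_sum = sum(var(it) for it in items)
    n_resp = len(items[0])
    totals = [sum(it[j] for it in items) for j in range(n_resp)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Two perfectly consistent toy items scored by four respondents.
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency, which is the threshold the study applies.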
891 Response of Planktonic and Aggregated Bacterial Cells to Water Disinfection with Photodynamic Inactivation
Authors: Thayse Marques Passos, Brid Quilty, Mary Pryce
Abstract:
Interest in developing alternative techniques to obtain safe water, free from pathogens and hazardous substances, has grown in recent years. Photodynamic inactivation (PDI) of microorganisms is a promising, ecologically friendly, multi-target approach to water disinfection. It uses visible light as an energy source combined with a photosensitiser (PS) to transfer energy or electrons to a substrate or to molecular oxygen, generating reactive oxygen species that exert cidal effects on cells. PDI has mainly been used in clinical studies, and investigation of its application to water disinfection is relatively recent. The majority of studies use planktonic cells. In their natural environments, however, bacteria quite often do not occur as freely suspended (planktonic) cells but in cell aggregates that are either freely floating or attached to surfaces as biofilms. Microbes form aggregates and biofilms as a strategy to protect themselves from environmental stress. As aggregates, bacteria have better metabolic function, communicate more efficiently, and are more resistant to biocidal compounds than their planktonic forms. Among the bacteria able to form aggregates are members of the genus Pseudomonas, a very diverse group widely distributed in the environment. Pseudomonas species can form aggregates and biofilms in water and can cause particular problems in water distribution systems. The aim of this study was to evaluate the effectiveness of photodynamic inactivation in killing a range of planktonic cells, including Escherichia coli DSM 1103, Staphylococcus aureus DSM 799, Shigella sonnei DSM 5570, Salmonella enterica, and Pseudomonas putida DSM 6125, and aggregating cells of Pseudomonas fluorescens DSM 50090 and Pseudomonas aeruginosa PAO1. The experiments were performed in glass Petri dishes containing the bacterial suspension and the photosensitiser, irradiated with a multi-LED source (wavelengths 430 nm and 660 nm) for different time intervals.
The responses of the cells were monitored using the pour plate technique and confocal microscopy. The study showed that bacteria belonging to the Pseudomonads tend to be more tolerant to PDI. While E. coli, S. aureus, S. sonnei, and S. enterica required a dose ranging from 39.47 J/cm² to 59.21 J/cm² for a 5-log reduction, Pseudomonads needed a dose ranging from 78.94 to 118.42 J/cm², with a higher dose required when the cells aggregated.
Keywords: bacterial aggregation, photoinactivation, Pseudomonads, water disinfection
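The doses above are fluences, i.e. irradiance integrated over exposure time. A sketch of the conversion is below, assuming a hypothetical LED irradiance of 50 mW/cm²; the abstract does not state the actual irradiance used.

```python
def fluence(irradiance_mw_cm2, seconds):
    """Light dose in J/cm^2 from irradiance (mW/cm^2) and exposure time (s)."""
    return irradiance_mw_cm2 / 1000.0 * seconds

def exposure_time(dose_j_cm2, irradiance_mw_cm2):
    """Seconds of irradiation needed to deliver a target dose."""
    return dose_j_cm2 / (irradiance_mw_cm2 / 1000.0)

# At an assumed 50 mW/cm^2, the 39.47 J/cm^2 that sufficed for a 5-log
# reduction of E. coli takes about 13 minutes, while the 118.42 J/cm^2
# needed by aggregated Pseudomonads takes roughly three times as long.
t_low = exposure_time(39.47, 50.0)
t_high = exposure_time(118.42, 50.0)
```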
890 Validity and Reliability of Communication Activities of Daily Living-Second Edition and Assessment of Language-Related Functional Activities: Comparative Evidence from Arab Aphasics
Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari
Abstract:
Background: Validation of the Communication Activities of Daily Living-Second Edition (CADL-2) and the Assessment of Language-Related Functional Activities (ALFA) tests is a critical investment decision, and activities related to language impairments are often underestimated. The literature indicates that age and gender differences may affect the performance of aphasics, so understanding these influential factors is highly important to neuropsycholinguists and speech-language pathologists (SLPs). Purpose: The goal of this study is twofold: (1) to validate (or invalidate) the CADL-2 and ALFA tests, and (2) to investigate whether or not the two assessment tests are reliable. Design: A comparative study is made between the results obtained from analyses of the Arabic versions of the CADL-2 and ALFA tests. Participants: Communication activities of daily living and language-related functional activities were assessed from the results obtained for 100 adult aphasics (50 males, 50 females; ages 16 to 65). Procedures: First, the two translated and standardized Arabic versions of the CADL-2 and ALFA tests were administered to the Arab aphasics under investigation. Armed with the new versions of the tests, one of the researchers assessed language-related functional communication and activities. Outcomes drawn from the comparative studies were then analyzed qualitatively and statistically. Main outcomes and results: Regarding the validity of the CADL-2 and ALFA, it was found that …. is more valid in both pre- and post-tests. Concerning the reliability of the two tests, it was found that …. is more reliable in both pre- and post-tests, which means that …. is more trustworthy. It should also be noted that the relationship between age and gender was very weak, as no remarkable gender differences appeared in either the CADL-2 or ALFA pre- and post-tests.
Conclusions & Implications: The CADL-2 and ALFA tests were found to be valid and reliable. In contrast to previous studies, age and gender were not significantly associated with the validity and reliability results of the two assessment tests; in clearer terms, age and gender patterns do not affect the validation of these two tests. Future studies might focus on more complex questions, including the functional use of CADL-2 and ALFA; how gender and puberty influence the results when the sample is large; the effects of each type of aphasia on the final outcomes; and the results of imaging-technique measurements.
Keywords: CADL-2, ALFA, language test, Arab aphasics, validity, reliability, neuropsycholinguistics, comparison
Procedia PDF Downloads 368
889 Polyvinyl Alcohol Incorporated with Hibiscus Extract Microcapsules as Combined Active and Intelligent Composite Film for Meat Preservation
Authors: Ahmed F. Ghanem, Marwa I. Wahba, Asmaa N. El-Dein, Mohamed A. EL-Raey, Ghada E.A. Awad
Abstract:
Numerous attempts are being made to formulate suitable packaging materials for meat products. However, to the best of our knowledge, the incorporation of free hibiscus extract or its microcapsules into a pure polyvinyl alcohol (PVA) matrix as a packaging material for meat is seldom reported. Therefore, this study aims at protecting the aqueous crude extract of hibiscus flowers using the spray-drying encapsulation technique. Fourier transform infrared (FTIR), scanning electron microscope (SEM), and zetasizer results confirmed the successful formation of assembled capsules via strong interactions, spherical rough microparticles, and a particle size of ~235 nm, respectively. The obtained microcapsules also exhibit high thermal stability, unlike the free extract. The spray-dried particles were then incorporated into the casting solution of the pure PVA film at a concentration of 10 wt.%. The segregated free-standing composite films were investigated, compared to the neat matrix, with several characterization techniques such as FTIR, SEM, thermal gravimetric analysis (TGA), mechanical testing, contact angle, water vapor permeability, and oxygen transmission. The results demonstrated variations in the physicochemical properties of the PVA film after the inclusion of the free extract and the extract microcapsules. Moreover, biological studies emphasized the biocidal potential of the hybrid films against microorganisms contaminating the meat. Specifically, the microcapsules imparted not only antimicrobial but also antioxidant activities to PVA. Application of the prepared films to real meat samples displayed low bacterial growth with a slight increase in pH over a storage time of up to 10 days at 4 °C, which further proved the meat's safety. Moreover, the colors of the films did not change significantly until after 21 days, indicating the spoilage of the meat samples.
No doubt, the dual functionality of the prepared composite films paves the way towards combined active/smart food packaging applications. This would play a vital role in food hygiene, including quality control and assurance.
Keywords: PVA, hibiscus, extraction, encapsulation, active packaging, smart and intelligent packaging, meat spoilage
Procedia PDF Downloads 82
888 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System
Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng
Abstract:
Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It provides more depth information than a traditional 2D X-ray single projection, and its radiation dose is lower than that of computed tomography (CT). Because of the limited angular range of scanning, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and to evaluate it by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine. This system can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom containing three lesions of different sizes (3 mm, 5 mm, and 8 mm diameter) was used to evaluate image quality. The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction. The non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative maximum-likelihood expectation-maximization (ML-EM) reconstruction algorithm. Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image. The quantitative method was to calculate a peak-to-valley ratio (PVR), the intensity ratio of the lesion to the background; we used PVRs to evaluate the contrast of lesions. The qualitative results showed that in the reconstructed image of the non-coplanar scan, anatomic structures of the chest and lesions could be identified clearly, and no significant scanning-direction-dependent artifacts could be discovered.
In the 2D X-ray single projection, anatomic structures overlapped and lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative PVR results show no significant differences between non-coplanar and traditional tomosynthesis; the PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts could be reduced without decreasing the PVRs of lesions, and the reconstructed image was more isotropically uniform than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging
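The peak-to-valley ratio described above is straightforward to compute from a reconstructed slice. A minimal sketch, assuming the lesion and background regions have already been segmented into boolean masks (the array sizes and intensity values here are illustrative, not the study's data):

```python
import numpy as np

def peak_to_valley_ratio(image, lesion_mask, background_mask):
    # PVR: mean intensity inside the lesion (peak) over mean background intensity (valley)
    peak = image[lesion_mask].mean()
    valley = image[background_mask].mean()
    return peak / valley

# Toy reconstructed slice: uniform background of 1.0 with a brighter lesion patch of 2.0
img = np.ones((64, 64))
img[30:34, 30:34] = 2.0
lesion = np.zeros_like(img, dtype=bool)
lesion[30:34, 30:34] = True
print(peak_to_valley_ratio(img, lesion, ~lesion))  # 2.0
```

A higher PVR indicates better lesion contrast against the reconstructed background, which is why it serves as the contrast metric above.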
Procedia PDF Downloads 366
887 Perovskite Nanocrystals and Quantum Dots: Advancements in Light-Harvesting Capabilities for Photovoltaic Technologies
Authors: Mehrnaz Mostafavi
Abstract:
Perovskite nanocrystals and quantum dots have emerged as leaders in the field of photovoltaic technologies, demonstrating exceptional light-harvesting abilities and stability. This study investigates the substantial progress and potential of these nano-sized materials in transforming solar energy conversion. The research delves into the foundational characteristics and production methods of perovskite nanocrystals and quantum dots, elucidating their distinct optical and electronic properties that render them well-suited for photovoltaic applications. Specifically, it examines their outstanding light absorption capabilities, enabling more effective utilization of a wider solar spectrum compared to traditional silicon-based solar cells. Furthermore, this paper explores the improved durability achieved in perovskite nanocrystals and quantum dots, overcoming previous challenges related to degradation and inconsistent performance. Recent advancements in material engineering and techniques for surface passivation have significantly contributed to enhancing the long-term stability of these nanomaterials, making them more commercially feasible for solar cell usage. The study also delves into the advancements in device designs that incorporate perovskite nanocrystals and quantum dots. Innovative strategies, such as tandem solar cells and hybrid structures integrating these nanomaterials with conventional photovoltaic technologies, are discussed. These approaches highlight synergistic effects that boost efficiency and performance. Additionally, this paper addresses ongoing challenges and research endeavors aimed at further improving the efficiency, stability, and scalability of perovskite nanocrystals and quantum dots in photovoltaics. Efforts to mitigate concerns related to material degradation, toxicity, and large-scale production are actively pursued, paving the way for broader commercial application. 
In conclusion, this paper emphasizes the significant role played by perovskite nanocrystals and quantum dots in advancing photovoltaic technologies. Their exceptional light-harvesting capabilities, combined with increased stability, promise a bright future for next-generation solar cells, ushering in an era of highly efficient and cost-effective solar energy conversion systems.
Keywords: perovskite nanocrystals, quantum dots, photovoltaic technologies, light-harvesting, solar energy conversion, stability, device designs
Procedia PDF Downloads 97
886 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number
Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson
Abstract:
An increase in surface-area-to-volume ratio with a decrease in characteristic length scale leads to a rapid increase in pressure drop across a microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting the wetting transition in textured microchannels is essential for identifying the optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas; therefore, obtaining whole-field information on the wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f.Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and different wetting states were qualitatively demarcated by observing the inflection points in the f.Re curve. Microchannels with both longitudinal and transverse ribs were fabricated with a fixed gas fraction (δ, the ratio of shear-free area to total area) and at different confinement ratios (ε, the ratio of rib height to channel height). The measured pressure drop values for all flow rates across the textured microchannels were converted into Poiseuille numbers. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ~ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than in the Cassie-Baxter state.
Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques and conveniently capture the wetting transition in textured microchannels.
Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition
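For context, the Poiseuille number can be obtained directly from a measured pressure drop. A minimal sketch, assuming the Fanning friction factor convention and fully developed laminar flow (the channel dimensions and fluid properties below are illustrative, not the study's geometry):

```python
def poiseuille_number(dp, dh, mu, u_mean, length):
    # Po = f * Re with the Fanning friction factor:
    # f = dp*Dh/(2*rho*u^2*L), Re = rho*u*Dh/mu  =>  f*Re = dp*Dh^2/(2*mu*u*L)
    return dp * dh**2 / (2 * mu * u_mean * length)

# Sanity check against a smooth parallel-plate channel, where dp = 12*mu*u*L/h^2
# and the classical laminar value is Po = 24 (Fanning convention)
mu, u, L, h = 1.0e-3, 0.05, 0.02, 100e-6  # Pa.s, m/s, m, m (illustrative water microchannel)
dp = 12 * mu * u * L / h**2
print(poiseuille_number(dp, 2 * h, mu, u, L))  # 24.0
```

Inflections of the measured Po versus Re curve away from such a smooth-channel baseline are what demarcate the wetting states in the approach above.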
Procedia PDF Downloads 161
885 Randomized Controlled Trial of Ultrasound Guided Bilateral Intermediate Cervical Plexus Block in Thyroid Surgery
Authors: Neerja Bharti, Drishya P.
Abstract:
Introduction: Thyroidectomies are extensive surgeries involving a significant degree of tissue handling and dissection and are associated with considerable postoperative pain. Regional anesthesia techniques have emerged as inexpensive and safe alternatives to opioids in the management of pain after thyroidectomy. The front of the neck is innervated by branches of the cervical plexus, and hence several approaches for superficial and deep cervical plexus block (CPB) have been described to provide postoperative analgesia after neck surgery. However, very few studies have explored the analgesic efficacy of intermediate CPB for thyroid surgery. In this study, we evaluated the effects of ultrasound-guided bilateral intermediate CPB on perioperative opioid consumption in patients undergoing thyroidectomy under general anesthesia. Methods: In this prospective randomized controlled study, fifty ASA grade I-II adult patients undergoing thyroidectomy were randomly divided into two groups: the study group received ultrasound-guided bilateral intermediate CPB with 10 ml of 0.5% ropivacaine on each side, while the control group received the same block with 10 ml of normal saline on each side, just after induction of anesthesia. Anesthesia was induced with propofol, fentanyl, and vecuronium and maintained with a propofol infusion titrated to keep the BIS between 40 and 60. During the postoperative period, rescue analgesia was provided with PCA fentanyl; the pain scores, total fentanyl consumption, and incidence of nausea and vomiting during 24 hours were recorded, and overall patient satisfaction was assessed. Results: The groups were well matched with respect to age, gender, BMI, and duration of surgery. The difference in intraoperative propofol and fentanyl consumption between the groups was not statistically significant. However, the intraoperative haemodynamic parameters were better maintained in the study group than in the control group.
The postoperative pain scores, as measured by VAS at rest and during movement, were lower, and the total fentanyl consumption during 24 hours was significantly less in the study group than in the control group. Patients in the study group reported better satisfaction scores than those in the control group. No adverse effects of the ultrasound-guided intermediate CPB were reported. Conclusion: We concluded that ultrasound-guided intermediate cervical plexus block is a safe and effective method for providing perioperative analgesia during thyroid surgery.
Keywords: thyroidectomy, cervical plexus block, pain relief, opioid consumption
Procedia PDF Downloads 96
884 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations
Authors: K. Al Ammari, B. G. Clarke
Abstract:
Soil improvement using vibro stone column techniques consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and then the changes in the stress state, stiffness, settlement performance, and bearing capacity were quantified. It was found that the application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax.
Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.
Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column
Procedia PDF Downloads 374
883 Optimal Capacitors Placement and Sizing Improvement Based on Voltage Reduction for Energy Efficiency
Authors: Zilaila Zakaria, Muhd Azri Abdul Razak, Muhammad Murtadha Othman, Mohd Ainor Yahya, Ismail Musirin, Mat Nasir Kari, Mohd Fazli Osman, Mohd Zaini Hassan, Baihaki Azraee
Abstract:
Energy efficiency can be realized by minimizing the power loss while a sufficient amount of energy is used in an electrical distribution system. In this report, a detailed analysis of the energy efficiency of an electrical distribution system was carried out with an implementation of optimal capacitor placement and sizing (OCPS). Particle swarm optimization (PSO) is used to determine the optimal location and sizing of the capacitors, whereas minimization of energy consumption and power losses improves the energy efficiency. In addition, a certain number of busbars or locations are identified in advance, before the PSO is performed to solve the OCPS. In this case study, three techniques are performed for the pre-selection of busbars or locations, one of which is the power-loss index (PLI). The PSO is designed to provide a new population with improved sizing and locations of capacitors. The total cost of power losses, energy consumption, and capacitor installation are the components considered in the objective and fitness functions of the proposed optimization technique. The voltage magnitude limit, total harmonic distortion (THD) limit, power factor limit, and capacitor size limit are the constraints of the proposed optimization technique. In this research, the proposed methodologies implemented in the MATLAB® software transfer the information, execute the three-phase unbalanced load flow solution, and then retrieve and collect the results or data from the three-phase unbalanced electrical distribution systems modeled in the SIMULINK® software.
Effectiveness of the proposed methods used to improve the energy efficiency has been verified through several case studies, and the results are obtained from the test systems of the IEEE 13-bus unbalanced electrical distribution system and also the practical electrical distribution system model of the Sultan Salahuddin Abdul Aziz Shah (SSAAS) government building in Shah Alam, Selangor.
Keywords: particle swarm optimization, pre-determination of capacitor locations, optimal capacitors placement and sizing, unbalanced electrical distribution system
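The PSO search described above can be sketched in a few lines. This is a generic minimizer with a toy quadratic objective standing in for the loss-plus-installation-cost function; the actual study evaluates fitness through a three-phase unbalanced load flow, and the bus choices, bounds, and cost surface below are hypothetical:

```python
import random

random.seed(0)  # reproducible toy run

def pso_minimize(fitness, dim, bounds, n_particles=20, iters=150,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer; returns (best_position, best_value)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to bounds
            v = fitness(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Hypothetical objective: quadratic 'losses' minimized at capacitor sizes
# (300, 150) kvar on two pre-selected buses, plus a small installation-cost term
objective = lambda q: (q[0] - 300) ** 2 + (q[1] - 150) ** 2 + 0.01 * (q[0] + q[1])
best, val = pso_minimize(objective, dim=2, bounds=(0, 600))
```

In the study, the fitness evaluation at each particle would be replaced by a load-flow run on the candidate capacitor configuration, with the voltage, THD, power factor, and size limits enforced as constraints.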
Procedia PDF Downloads 434
882 Modification of Carbon-Based Gas Sensors for Boosting Selectivity
Authors: D. Zhao, Y. Wang, G. Chen
Abstract:
Gas sensors that utilize carbonaceous materials as sensing media offer numerous advantages, making them the preferred choice for constructing chemical sensors over those using other sensing materials. Carbonaceous materials, particularly nano-sized ones like carbon nanotubes (CNTs), provide these sensors with high sensitivity. Additionally, carbon-based sensors possess other advantageous properties that enhance their performance, including high stability, low power consumption for operation, and cost-effectiveness in their construction. These properties make carbon-based sensors ideal for a wide range of applications, especially in miniaturized devices created through MEMS or NEMS technologies. To capitalize on these properties, a group of chemoresistive carbon-based gas sensors was developed and tested against various volatile organic compounds (VOCs) and volatile inorganic compounds (VICs). The results demonstrated exceptional sensitivity to both VOCs and VICs, along with long-term stability. However, this broad sensitivity also led to poor selectivity towards specific gases. This project aims at addressing the selectivity issue by modifying the carbon-based sensing materials and enhancing the sensor's specificity to individual gases. Multiple groups of sensors were manufactured and modified using proprietary techniques. To assess their performance, we conducted experiments on representative sensors from each group to detect a range of VOCs and VICs. The VOCs tested included acetone, dimethyl ether, ethanol, formaldehyde, methane, and propane. The VICs comprised carbon monoxide (CO), carbon dioxide (CO2), hydrogen (H2), nitric oxide (NO), and nitrogen dioxide (NO2). The concentrations of the sample gases were all set at 50 parts per million (ppm). Nitrogen (N2) was used as the carrier gas throughout the experiments. The results of the gas sensing experiments are as follows.
In Group 1, the sensors exhibited selectivity toward CO2, acetone, NO, and NO2, with NO2 showing the highest response. Group 2 primarily responded to NO2. Group 3 displayed responses to the nitrogen oxides, i.e., both NO and NO2, with NO2 slightly surpassing NO in sensitivity. Group 4 demonstrated the highest sensitivity among all the groups toward NO and NO2, with NO2 being more sensitive than NO. In conclusion, by incorporating several modifications using carbon nanotubes (CNTs), sensors can be designed to respond well to NOx gases with great selectivity and without interference from other gases. Because the response levels to NO and NO2 differ from group to group, the individual concentrations of NO and NO2 can be deduced.
Keywords: gas sensors, carbon, CNT, MEMS/NEMS, VOC, VIC, high selectivity, modification of sensing materials
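The final deduction step can be illustrated as a small calibration exercise: if two sensor groups respond linearly to NO and NO2 with different sensitivity ratios, their two readings form a solvable 2x2 linear system. The sensitivity values below are illustrative placeholders, not the measured responses of the fabricated sensors:

```python
import numpy as np

# Hypothetical calibrated sensitivities (signal per ppm): rows are two sensor
# groups, columns are (NO, NO2); NO2 sensitivity exceeds NO, as observed above.
S = np.array([[0.8, 1.4],    # a Group 3-like sensor
              [0.5, 2.0]])   # a Group 4-like sensor
true_c = np.array([20.0, 30.0])  # ppm of NO and NO2 in a test mixture
r = S @ true_c                   # simulated readings from the two sensors
c = np.linalg.solve(S, r)        # recover the individual concentrations
print(c)  # → [20. 30.]
```

The recovery works only because the two groups' NO/NO2 response ratios differ; identical ratios would make the system singular and the two gases indistinguishable.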
Procedia PDF Downloads 127
881 The Reasons for Failure in Writing Essays: Teaching Writing as a Project-Based Enterprise
Authors: Ewa Toloczko
Abstract:
Studies show that developing writing skills throughout years of formal foreign language instruction does not necessarily result in rewarding accomplishments among learners, nor in an affirmative attitude towards written assignments. What causes this apparently widespread bias against writing might be the diminished relevance students attach to it as opposed to the other productive skill, speaking; insufficient resources available for them to succeed; or the ways writing is approached by instructors, that is, inapt teaching techniques that discourage rather than inflame learners' engagement. The assumption underlying this presentation is that psychological and psycholinguistic factors constitute a key dimension of every writing process and hence should be seriously considered in both material design and lesson planning. The author intends to present research in which writing tasks were conceived of as attitudinal rather than technical operations and consequently turned into meaningful, socially oriented incidents that students could relate to and have an active hand in. The instrument employed to achieve this purpose and to make writing even more interactive was the format of a project: a carefully devised series of tasks that involved students as human beings, not only as language learners. The projects rested upon the premise that the presence of peers and the teacher in class could be taken advantage of in a supportive rather than evaluative mode. In fact, the research showed that collaborative work and constant meaning negotiation reinforced not only bonds between learners but also the language form and structure of the output. Accordingly, the role of the teacher shifted from assessor to problem barometer, always ready to accept the slightest improvements in students' language performance.
This way, written verbal communication, which usually aims merely to manifest accuracy and coherent content for assessment, became part of an enterprise meant to emphasise its social aspect: the writer in a real-life setting. The sample projects show the spectrum of possibilities teachers have when exploring the domain of writing within the school curriculum. The ideas are easy to modify and adjust to all proficiency levels and ages. Initially, however, they were meant to suit teenage and young adult learners of English as a foreign language in both European and Asian contexts.
Keywords: projects, psycholinguistic/psychological dimension of writing, writing as a social enterprise, writing skills, written assignments
Procedia PDF Downloads 234
880 Rare Case of Three Metachronous Cancers Occurring over the Period of Three Years: Clinical Importance of Investigating Neoplastic Growth Discovered during Follow-Up
Authors: Marin Kanarev, Delyan Stoyanov, Ivanna Popova, Nadezhda Petrova
Abstract:
Thanks to increased survival rates in patients bearing oncological malignancies due to recent developments in anti-cancer therapies and diagnostic techniques, observation of clinical cases of metachronous cancers is more common and can provide more in-depth knowledge of their development and, as a result, help clinicians apply suitable therapy. This unusual case of three metachronous tumors presented the opportunity to follow their occurrence, progression, and treatment thoroughly. A 77-year-old male presented with carcinoma ventriculi of the pylorus region, which was surgically removed via upper subtotal stomach resection, a lateral antecolical gastro-enteroanastomosis, and a subsequent Braun anastomosis. An EOX chemotherapy regimen followed. A CT scan four months later showed no indication of recurrence or dissemination. The same scan, performed as a part of the follow-up plan two years later, showed an indication of neoplastic growth in the urinary bladder. After the patient had been directed to a urologist, the suspicion was confirmed, and the growth was histologically diagnosed as a carcinoma of the urinary bladder. An immunohistochemistry test showed an expression of PDL1 of less than 5%, which resulted in treatment with GemCis chemotherapy regimen that led to full remission. Two years and seven months after the first surgery, a CT scan showed again that the two carcinomas were gone. However, four months later, elevated tumor markers prompted a PET/CT scan, which showed data indicative of recurring neoplastic growth in the region of the stomach cardia. It was diagnosed as an adenocarcinoma infiltrating the esophagus. Preoperative chemotherapy with the ECF regimen was completed in four courses, and a CT scan showed no progression of the disease. In less than a month after therapy, the patient underwent laparotomy, debridement, gastrectomy, and a subsequent mechanical terminal-lateral esophago-jejunoanasthomosis. 
It was verified that the tumor originated as a metastasis of the carcinoma ventriculi, which was located in the pylorus. In conclusion, this case report highlights the importance of patient follow-up and of studying recurring neoplastic growth. Despite the absence of symptoms, clinicians should maintain a high level of suspicion when evaluating patient data and choosing the most suitable therapy.
Keywords: carcinoma, follow-up, metachronous, neoplastic growth, recurrence
Procedia PDF Downloads 88
879 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
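For orientation, the standard formula aggregates the sub-module charges through a square-root correlation formula, SCR_mkt = sqrt(sum_ij Corr[i,j] * SCR_i * SCR_j); the non-differentiability arises because each SCR_i is itself the worst outcome over stress scenarios. A minimal sketch of the aggregation step, where the sub-module charges and the correlation matrix are illustrative placeholders, not the regulatory calibration:

```python
import numpy as np

def market_scr(scr_sub, corr):
    # Standard-formula aggregation: SCR_mkt = sqrt( sum_ij Corr[i,j] * SCR_i * SCR_j )
    scr_sub = np.asarray(scr_sub, dtype=float)
    return float(np.sqrt(scr_sub @ corr @ scr_sub))

# Illustrative sub-module charges: interest rate, equity, property, spread, FX, concentration
scr = np.array([10.0, 40.0, 5.0, 15.0, 8.0, 2.0])
corr = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])
total = market_scr(scr, corr)
# The correlated aggregate is below the plain sum of charges: the diversification benefit
print(total < scr.sum())  # True
```

Minimizing this aggregate over portfolio weights, with each sub-module charge defined through its stress tests, yields the non-convex, non-differentiable criterion tackled above by the bundle and BFGS-SQP solvers.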
Procedia PDF Downloads 117
878 Adjustment with Changed Lifestyle at Old Age Homes: A Perspective of Elderly in India
Authors: Priyanka V. Janbandhu, Santosh B. Phad, Dhananjay W. Bansod
Abstract:
The current changing scenario of the family is compelling the aged not only to live alone in a nuclear family but also to join old age institutions. The consequences are a feeling of being neglected or left alone by the children, adding a sense of helplessness in the absence of expected care and support. The accretion of all these feelings and unpleasant events ignites a question in their minds: who is there for me? Efforts have been made to highlight the issues of the elderly after joining an old age home and their perception of their current life as institutional inmates. This attempt covers the condition, adjustment, changed lifestyle, and perspective of the elderly in association with several issues that have an essential effect on their well-being. The present research collected information about the institutionalized elderly with the help of a semi-structured questionnaire. This study interviewed 500 respondents from 22 old age homes in Pune city of Maharashtra State, India. The data collection methodology consists of multi-stage random sampling, in which stratified random sampling was adopted for the selection of old age homes and sample size determination, and sample selection with probability proportional to size and simple random sampling techniques were implemented. The study finds that around five percent of the elderly shifted to an old age home along with their spouse, whereas ten percent of the elderly are staying away from their spouse. More than 71 percent of the elderly have children and are involuntary inmates of the old age institution; even fewer than one-third of the elderly consulted the institution before joining it. More than sixty percent of the elderly have children but joined the institution only because of the unpleasant response of their children. Around half of the elderly responded that there were issues while adjusting to this environment, many of which still persist.
At least one elderly person out of ten suffers from loneliness and the feeling of being left out by children and other family members. In contrast, around 97 percent of the elderly are very happy or satisfied with the institutional facilities. This illustrates that their issues are associated with their children and other family members, even though they left their homes a year or more ago. When enquired about this feeling of loneliness, a few reported suffering from it even before leaving their homes, owing to a lack of interaction with children who were too busy to make time for their aged parents. Additionally, conflicts or fights within the family over the presence of old persons contributed to establishing a further feeling of insignificance among the elderly parents. According to these elderly, who form more than 70 percent of the sample, the children are ready to spend money on them indirectly through these institutions, but are not prepared to give them some time or even a small share of all this expenditure directly. Keywords: elderly, old age homes, lifestyle changes and adjustment, India
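The multi-stage design described above (stratified selection of old age homes, selection probability proportional to size, then simple random sampling of residents) can be sketched roughly as follows; the home names, resident counts, and sample sizes are hypothetical illustrations, not the study's actual sampling frame:

```python
import random

random.seed(42)

# Hypothetical sampling frame: old age homes with their resident counts
homes = {"Home A": 120, "Home B": 45, "Home C": 80, "Home D": 30, "Home E": 60}

def pps_sample(frame, k):
    """Select k homes with probability proportional to size (with replacement)."""
    names = list(frame)
    weights = [frame[n] for n in names]
    return random.choices(names, weights=weights, k=k)

def srs_residents(home_size, n):
    """Simple random sample of n resident IDs within one selected home."""
    return random.sample(range(home_size), min(n, home_size))

# Stage 1: PPS selection of homes; Stage 2: SRS of residents within each home
selected = pps_sample(homes, k=3)
for home in selected:
    respondents = srs_residents(homes[home], n=10)
    print(home, len(respondents))
```

Larger homes are proportionally more likely to enter the sample, which is what keeps each resident's overall selection probability roughly equal across homes of different sizes.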
Procedia PDF Downloads 134
877 Simultaneous Detection of Cd²⁺, Fe²⁺, Co²⁺, and Pb²⁺ Heavy Metal Ions by Stripping Voltammetry Using Polyvinyl Chloride Modified Glassy Carbon Electrode
Authors: Sai Snehitha Yadavalli, K. Sruthi, Swati Ghosh Acharyya
Abstract:
Heavy metal ions are toxic to humans and all living species when exposure occurs in large quantities or over long durations. Though Fe acts as a nutrient, it becomes toxic when its intake is large. These toxic heavy metal ions, when consumed through water, cause many disorders and are harmful to all flora and fauna through biomagnification. Humans in particular are prone to innumerable diseases ranging from skin and gastrointestinal to neurological disorders; in higher quantities, these ions can even cause cancer. Detection of these toxic heavy metal ions in water is thus important. Traditionally, the detection of heavy metal ions in water has been done by techniques like Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Atomic Absorption Spectroscopy (AAS). Though these methods offer accurate quantitative analysis, they require expensive equipment and cannot be used for on-site measurements. Anodic stripping voltammetry is a good alternative, as the equipment is affordable and measurements can be made at river basins or lakes. In the current study, Square Wave Anodic Stripping Voltammetry (SWASV) was used to detect heavy metal ions in water. The literature reports various electrodes on which deposition of heavy metal ions has been carried out, such as bismuth- and polymer-modified electrodes. The working electrode used in this study was a polyvinyl chloride (PVC) modified glassy carbon electrode (GCE), with an Ag/AgCl reference electrode and a platinum counter electrode. A Biologic SP 300 potentiostat was used for conducting the experiments. Through this work, four heavy metal ions were successfully detected simultaneously. The influence of modifying the GCE with PVC was studied in comparison with the unmodified GCE. The simultaneous detection of Cd²⁺, Fe²⁺, Co²⁺, and Pb²⁺ heavy metal ions was carried out on the PVC-modified GCE prepared by drop-casting 1 wt.% PVC dissolved in tetrahydrofuran (THF) solvent onto the GCE.
The concentration of each heavy metal ion was 0.2 mg/L, as shown in the figure, and the scan rate was 0.1 V/s. Detection parameters like pH, scan rate, temperature, and time of deposition were optimized. It was clearly seen that PVC helped increase the sensitivity and selectivity of detection, as the current values are higher for the PVC-modified GCE than for the unmodified GCE, and the peaks were well defined when the PVC-modified GCE was used. Keywords: cadmium, cobalt, electrochemical sensing, glassy carbon electrodes, heavy metal ions, iron, lead, polyvinyl chloride, potentiostat, square wave anodic stripping voltammetry
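Quantification in SWASV rests on reading peak currents from the stripping voltammogram, since peak height scales with ion concentration. The following sketch illustrates that readout on synthetic data, assuming Gaussian stripping peaks at illustrative potentials; the actual peak positions and currents of the study are not reproduced here:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic SWASV voltammogram: Gaussian stripping peaks at hypothetical
# potentials (V vs. Ag/AgCl) for four ions -- illustrative values only.
potential = np.linspace(-1.2, 0.2, 1400)
peak_positions = {"Cd": -0.78, "Pb": -0.55, "Co": -0.25, "Fe": -0.05}

def gaussian(x, mu, amp, sigma=0.03):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

current = sum(gaussian(potential, mu, amp=1.0) for mu in peak_positions.values())
current += np.random.default_rng(0).normal(0, 0.01, potential.size)  # baseline noise

# Locate stripping peaks; each peak height would be read against a
# calibration curve to estimate the corresponding ion concentration.
idx, props = find_peaks(current, height=0.5, distance=100)
for i in idx:
    print(f"peak at {potential[i]:+.2f} V, height {current[i]:.2f}")
```

The `height` and `distance` thresholds stand in for the baseline correction and peak separation a real analysis would apply to the measured curve.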
Procedia PDF Downloads 102
876 Processes and Application of Casting Simulation and Its Software
Authors: Surinder Pal, Ajay Gupta, Johny Khajuria
Abstract:
Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is widely used to design equipment so that the final product will be as close to design specs as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; Magma Cast, for instance, is best suited for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis.
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient solution and an advanced tool, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied. Keywords: casting simulation software, simulation techniques, casting simulation, processes
Procedia PDF Downloads 475
875 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation
Authors: Onur Ozveri, Korkut Karabag, Cagri Keles
Abstract:
Production downtime and repair costs incurred when machines fail are important issues in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machines should have priority for repair, how to determine the optimal repair time to be allotted to these machines, and how to plan the resources needed for repair. In recent years, the Business Process Management (BPM) technique has brought effective solutions to different problems in business. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair work is carried out as a series of processes. Since the maintenance main process and the repair sub-process can be evaluated with the process management technique, it was thought that this structure could provide a solution. For this reason, the issue was taken up in an international manufacturing company, and an attempt was made to develop a proposed solution. The purpose of this study is the implementation of maintenance and repair work integrated with the process management technique and, at the end of the implementation, the analysis of maintenance-related parameters like quality, cost, time, safety, and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, safety, and thermal systems for the world's leading light and heavy vehicle manufacturers. In the firm, a project team was first established. The team examined the current maintenance process and revised it using process management techniques. The repair process, which is a sub-process of the maintenance process, was also reworked.
In the improved processes, the ABC equipment classification technique was used to decide which machine or machines would be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs, and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the obtained data were compared with the previous year's data. In conclusion, breakdown maintenance was found to be completed in a shorter time, with lower cost and a lower spare parts inventory. Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance
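The ABC classification step can be sketched as a weighted scoring and binning exercise over the four criteria the abstract names. The criteria weights, machine names, scores, and class cutoffs below are hypothetical, since the abstract does not report the firm's actual weightings:

```python
# Hypothetical criteria weights for prioritizing a malfunctioned machine:
# effect on production, product quality, maintenance cost, and job safety.
CRITERIA_WEIGHTS = {"production": 0.4, "quality": 0.3, "cost": 0.2, "safety": 0.1}

# Illustrative machine scores on a 1-10 scale for each criterion
machines = {
    "press_01":    {"production": 9, "quality": 8, "cost": 7, "safety": 6},
    "lathe_02":    {"production": 5, "quality": 4, "cost": 6, "safety": 3},
    "welder_03":   {"production": 8, "quality": 9, "cost": 5, "safety": 9},
    "conveyor_04": {"production": 3, "quality": 2, "cost": 4, "safety": 2},
}

def abc_classify(scores, a_cut=0.3, b_cut=0.7):
    """Rank machines by weighted score; top 30% -> A, next 40% -> B, rest -> C."""
    weighted = {m: sum(CRITERIA_WEIGHTS[c] * v for c, v in s.items())
                for m, s in scores.items()}
    ranked = sorted(weighted, key=weighted.get, reverse=True)
    classes = {}
    for rank, m in enumerate(ranked):
        frac = rank / len(ranked)
        classes[m] = "A" if frac < a_cut else "B" if frac < b_cut else "C"
    return classes

print(abc_classify(machines))
```

When two class-A machines fail simultaneously, the weighted score itself then serves as the tie-breaker for repair order.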
Procedia PDF Downloads 194
874 Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools
Authors: Raymond K. Jonkers
Abstract:
The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. A review of best practices from strategic, program, process, and systems engineering management provided a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness and depends on controlled system input variables and their weightings. These variables represent business performance measures captured within a strategic balanced scorecard. The weighting of these measures enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus of this study was on the performance of tangible assets within the scorecard, rather than the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive-to-corrective maintenance ratios, and material support personnel capacity against that required for supporting the respective systems. The resulting scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits of these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure, then aggregated and visualized in terms of the overall state of capabilities achieved.
This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making. These interactive controls include those for assigning capability priorities and for adjusting system performance measures, thus providing what-if scenarios and options in strategic decision making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant across the different management disciplines. In terms of assessing an organization's ability to adopt this approach, consideration was given to the P3M3 management maturity model. Keywords: management, systems, performance, scorecard
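The priority-weighted roll-up of tangible-asset measures into an overall capability state can be sketched as follows. The system names, measure values, and weights are hypothetical, chosen only to illustrate the two-level aggregation; adjusting a weight or measure here is the programmatic analogue of the interactive what-if controls proposed above:

```python
# Each system carries tangible-asset measures (funding vs. demand,
# certifications achieved vs. required, PM/CM ratio, staffing vs. required),
# each expressed as an achievement fraction in [0, 1]. Weights are hypothetical.
MEASURE_WEIGHTS = {"funding": 0.25, "certification": 0.25,
                   "pm_cm_ratio": 0.25, "staffing": 0.25}

systems = {
    "propulsion": {"funding": 0.8, "certification": 1.0,
                   "pm_cm_ratio": 0.6, "staffing": 0.9},
    "radar":      {"funding": 0.5, "certification": 0.7,
                   "pm_cm_ratio": 0.8, "staffing": 0.6},
}
system_priority = {"propulsion": 0.6, "radar": 0.4}  # capability priorities

def system_score(measures):
    """Weighted state of one system's scorecard measures."""
    return sum(MEASURE_WEIGHTS[m] * v for m, v in measures.items())

def overall_capability(systems, priorities):
    """Priority-weighted aggregate across all systems."""
    return sum(priorities[s] * system_score(m) for s, m in systems.items())

print(round(overall_capability(systems, system_priority), 3))
```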
Procedia PDF Downloads 322
873 Knowledge Management in Public Sector Employees: A Case Study of Training Participants at National Institute of Management, Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
The purpose of this study is to investigate the current level of knowledge mapping skills of public sector employees in Pakistan. The National Institute of Management is one of the premier public sector training organizations for mid-career public sector employees in Pakistan. This study was conducted on participants of a fourteen-week training course called the Mid-Career Management Course (MCMC), which is mandatory for public sector employees, in order to ascertain how to enhance their knowledge mapping skills. Methodology: The researchers used both qualitative and quantitative approaches. Primary data about participants' current understanding of knowledge mapping was collected through a structured questionnaire. Later, the participant observation method was used, with the researchers acting as part of the group to gather data from the trainees during their performance in training activities and tasks. Findings: Respondents were examined for the skills and abilities to organize ideas, help groups develop a conceptual framework, identify critical knowledge areas of an organization, study large networks and identify knowledge flow using nodes and vertices, visualize information, represent organizational structure, etc. Overall, responses varied across skills depending on the performance and presentations. However, all participants generally demonstrated an average level of use of both IT and non-IT knowledge-mapping tools and techniques during simulation exercises, analysis paper de-briefings, case study reports, post-visit presentations, course reviews, current issue presentations, syndicate meetings, and daily synopses. Research Limitations: This study was conducted on a small population of 67 public sector employees nominated by the federal government to undergo the 14-week extensive MCMC training program at the National Institute of Management, Peshawar, Pakistan.
Results, however, reflect only a specific class of public sector employees, i.e., those working in grade 18 and having more than 5 years of service. Practical Implications: The research findings are useful for trainers, training agencies, government functionaries, and organizations working on capacity building of public sector employees. Keywords: knowledge management, km in public sector, knowledge management and professional development, knowledge management in training, knowledge mapping
Procedia PDF Downloads 254
872 Comparisons between Student Learning Achievements and Their Problem Solving Skills on the Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method
Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon
Abstract:
The aim of this study is to compare the instructional design models of the Think-Pair-Share and Conventional Learning (5E Inquiry Model) processes for enhancing students' learning achievements and their problem solving skills on the stoichiometry issue. The two instructional methods were examined with a sample of 80 students in 2 classes at the 11th grade level in Chaturaphak Phiman Ratchadaphisek School. Students with different learning outcomes in chemistry classes were drawn with the cluster random sampling technique. The instructional methods were assigned to a 40-student experimental group taught by the Think-Pair-Share process and a 40-student control group taught by the conventional learning (5E Inquiry Model) method. These groups were assessed using 5 instruments, namely the 5-lesson instructional plans of the Think-Pair-Share and STEM Education methods; students' learning achievements and problem solving skills were assessed with pretest and posttest techniques, and students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM Education methods were compared. Statistically significant differences between posttest and pretest scores of the whole student sample in chemistry classes were found with the paired t-test and F-test. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem solving skills were also found. The use of the two methods in this study reveals that students perceive their learning achievements and problem solving skills differently across the groups; these differences guide practical improvements in chemistry classrooms to assist teachers in implementing effective instructional approaches.
Mean learning achievement scores of the control group under the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental student group under the STEM education method. The E1/E2 process efficiency values were 82.56/80.44 and 83.02/81.65, which are higher than the 80/80 standard criterion, consistent with the IOC results. The predictive efficiency (R²) values indicate that 61% and 67% of the variances in students' learning achievements on the posttest, and 63% and 67% of the variances in students' problem solving skills relative to their learning achievements on the stoichiometry posttest, were attributable to their different learning outcomes under the TPSM and STEM education instructional methods. Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue
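The pretest/posttest comparison described above relies on the paired t-test, which tests whether each student's gain from pretest to posttest is significantly different from zero. A minimal sketch on synthetic scores (the study's actual data are not reproduced here) might look like this:

```python
import numpy as np
from scipy import stats

# Synthetic pretest/posttest achievement scores for a 40-student group;
# the simulated learning gain stands in for the study's actual data.
rng = np.random.default_rng(1)
pretest = rng.normal(55, 8, size=40)
posttest = pretest + rng.normal(10, 4, size=40)

# Paired t-test: are posttest scores significantly higher than pretest scores?
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The pairing matters because each student serves as their own control: the test operates on the per-student differences, removing between-student variation that an unpaired comparison would absorb as noise.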
Procedia PDF Downloads 249