Search results for: automation tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5233

1213 One Health Approach: The Importance of Improving the Identification of Waterborne Bacteria in Austrian Water

Authors: Aurora Gitto, Philipp Proksch

Abstract:

The presence of various microorganisms (bacteria, fungi) in surface water and groundwater is an important issue for human health worldwide. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF-MS) has emerged as a promising and reliable tool for bacterial identification, both in clinical diagnostic microbiology and for environmental strains, thanks to an ionization technique that uses a laser-energy-absorbing matrix to create ions from large molecules with minimal fragmentation. The study aims first to conceptualise and set up library information and to create a comprehensive database of MALDI-TOF-MS spectra from environmental water samples. The samples were analysed over one year (2021-2022) using membrane filtration (0.45 μm and 0.22 μm) and then isolated on R2A agar for 5 days and on yeast extract agar (YEA) grown at 22 °C for up to 4 days and at 37 °C for 48 hours. Organisms not identified by MALDI-TOF-MS were analysed by PCR and then sequenced, and the sequencing information was added to the MALDI-TOF-MS library. Among the culturable bacteria, the results show how incubation temperature favours the growth of some genera over others, as demonstrated by Pseudomonas sp., which grows at 22 °C, compared to Bacillus sp., which is abundant at 37 °C. The community composition also varied between the media used: R2A agar yielded a higher proportion of undetected organisms than YEA. Also of interest are the variability of genera over one year of sampling and the impact of seasonality on the bacterial community; at some sampling locations, we observed the composition change from winter to spring and summer.
In conclusion, the bacterial community in groundwater and river bank filtration represents important information that needs to be added to the library, both to simplify future water quality analysis and, above all, to prevent potential risks to human health.

Keywords: water quality, MALDI-TOF-MS, sequencing, library

Procedia PDF Downloads 65
1212 Simple Assessments to Demystify Complementary Feeding: Leveraging a Successful Literacy Initiative Assessment Approach in Gujarat, India

Authors: Smriti Pahwa, Karishma Vats, Aditi Macwan, Jija Dutt, Sumukhi Vaid

Abstract:

Age-appropriate complementary feeding is stressed for sound young-child nutrition and appropriate growth. National Infant and Young Child Feeding (IYCF) guidelines, policies and programs indicate the cognizance of the issue taken by the country’s government, policy makers and technical experts. However, it is important that ordinary people, the caregivers of young children, also understand the importance of appropriate feeding. For this, an interface might be required where ordinary people can participate in assessing the gaps in IYCF as a first step towards subsequent action. In this context, an attempt was made to extrapolate from a citizen-led learning-level survey that has involved around 25,000 ordinary citizens reaching out to 600,000 children annually for over a decade in India. Based on this philosophy of involving ordinary people in simple assessments to produce understandable, actionable evidence, a rapid diet assessment tool was developed and administered to caregivers of 90 children under 3 years from two urban clusters in Ahmedabad and Baroda, Gujarat. The target sample for the pilot was selected after a cluster census. Around half the mothers reported that they had not yet introduced water or other fluids to their babies under 6 months; however, about a third were already feeding them food other than mother’s milk. Although complementary feeding was initiated in almost all (95%) children more than 6 months old, frequency was suboptimal in 60%; in 80% of cases no measure was taken to improve either energy or nutrient density; only 33% were fed protective foods; green leafy vegetable consumption was negligible (1.4%); and Anganwadi food was not consumed. By engaging ordinary people to generate evidence and understand the gaps, such assessments have the potential to generate useful evidence for action both at scale and locally.

Keywords: citizen led, grass root engagement, IYCF (Infant and Young Child Feeding), rapid diet assessment, under nutrition

Procedia PDF Downloads 154
1211 Monitoring Prospective Sites for Water Harvesting Structures Using Remote Sensing and Geographic Information Systems-Based Modeling in Egypt

Authors: Shereif. H. Mahmoud

Abstract:

Egypt has limited water resources and will be under water stress by the year 2030. It should therefore consider natural and non-conventional water resources to overcome this problem, and rainwater harvesting (RWH) is one solution. This paper presents a geographic information system (GIS)-based decision support system (DSS) that uses remote sensing data, field survey and GIS to identify potential RWH areas. The inputs to the DSS are maps of rainfall surplus, slope, potential runoff coefficient (PRC), land cover/use and soil texture; the output is a map showing potential RWH sites. Identification of suitable RWH sites was implemented in the ArcGIS environment using the ModelBuilder of ArcGIS 10.1. Based on analytical hierarchy process (AHP) analysis of the five layers, the spatial extents of RWH suitability areas were identified using multi-criteria evaluation (MCE). The model generated a suitability map for RWH with four suitability classes: excellent, moderate, poor and unsuitable. The spatial distribution of the suitability map showed that areas with excellent suitability for RWH are concentrated in the northern part of Egypt. On average, 3.24% of the total area has excellent suitability for RWH, while 45.04% and 51.48% of the total area are moderately suitable and unsuitable, respectively. The majority of the areas with excellent suitability have slopes between 2 and 8% and intensively cultivated land. The major soil type in the excellently suitable areas is loam, and rainfall ranges from 100 to 200 mm. The technique was validated by comparing the locations of existing RWH structures with the generated suitability map using the proximity analysis tool of ArcGIS 10.1; the result shows that most existing RWH structures are categorized as successful.
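The AHP/MCE step described above amounts to a weighted linear combination of normalized criterion layers, binned into the four suitability classes. A minimal per-cell sketch follows; the weights, scores and class cut-offs are hypothetical illustrations, not the paper's values.

```python
# Illustrative multi-criteria evaluation (MCE) for RWH site suitability.
# The five layers and four classes come from the abstract; the AHP weights
# and per-cell scores below are hypothetical placeholders.

LAYERS = ["rainfall", "slope", "runoff_coeff", "land_use", "soil_texture"]

# Hypothetical AHP-derived weights (must sum to 1).
WEIGHTS = {"rainfall": 0.35, "slope": 0.25, "runoff_coeff": 0.20,
           "land_use": 0.10, "soil_texture": 0.10}

def suitability(cell_scores):
    """Weighted linear combination of normalized (0-1) layer scores."""
    return sum(WEIGHTS[k] * cell_scores[k] for k in LAYERS)

def classify(score):
    """Map a 0-1 suitability score onto the four classes of the abstract."""
    if score >= 0.75:
        return "Excellent"
    if score >= 0.50:
        return "Moderate"
    if score >= 0.25:
        return "Poor"
    return "Unsuitable"

# One hypothetical raster cell with its normalized layer scores.
cell = {"rainfall": 0.9, "slope": 0.8, "runoff_coeff": 0.7,
        "land_use": 0.6, "soil_texture": 0.5}
```

In ArcGIS this computation would run over whole rasters (e.g. via Weighted Overlay); the per-cell arithmetic is the same.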

Keywords: rainwater harvesting (RWH), geographic information system (GIS), analytical hierarchy process (AHP), multi-criteria evaluation (MCE), decision support system (DSS)

Procedia PDF Downloads 342
1210 Glaucoma with Normal IOP: Is It True Normal Tension Glaucoma or Something Else?

Authors: Sushma Tejwani, Shoruba Dinakaran, Kushal Kacha, K. Bhujang Shetty

Abstract:

Introduction and aim: It is not unusual to find patients with glaucomatous damage and normal intraocular pressure (IOP), and to label a patient as having normal tension glaucoma (NTG) the majority of clinicians depend on office IOP recordings; the concern is therefore whether we are missing late-night or early-morning spikes in this group of patients. Ischemia of the optic nerve is one of the presumed causes of damage in these patients, but demonstrating it has been a challenge. The aim of this study was to evaluate IOP variations and patterns in a series of patients with open angles and glaucomatous discs or fields but normal office IOP, and in addition to identify ischemic factors in true NTG patients. Materials and methods: This was an observational cross-sectional study from a tertiary care centre of patients who underwent full-day DVT from January 2012 to April 2014. All patients underwent IOP measurement by Goldmann applanation tonometry every 3 hours for 24 hours, along with recording of blood pressure (BP). Patients with normal IOP throughout the 24-hour period were further evaluated by a cardiologist with echocardiography and carotid Doppler. Results: There were 47 patients, most of them in the age group of 50-70 years. A biphasic IOP peak was noted in almost all patients. Of the 47 patients, 2 were excluded from analysis as they were on treatment. 20 patients (42%) were found on DVT to have an IOP spike and were diagnosed with open angle glaucoma; another 25 (55%) were diagnosed with normal tension glaucoma and were subsequently advised a carotid Doppler and a cardiologist's consult. Another interesting finding was that 9 patients had a nocturnal dip in their BP and 3 were found to have carotid artery stenosis.
Conclusion: Continuous 24-hour monitoring of IOP and BP is a useful, albeit mildly cumbersome, tool that provides a wealth of information in cases of glaucoma presenting with normal office pressures. It is of great value in differentiating normal tension glaucoma from open angle glaucoma, and it enables timely diagnosis and possible intervention through referral to a cardiologist in cases of carotid artery stenosis.

Keywords: carotid artery disease in NTG, diurnal variation of IOP, ischemia in glaucoma, normal tension glaucoma

Procedia PDF Downloads 269
1209 Fuzzy Logic-Based Approach to Predict Fault in Transformer Oil Based on Health Index Using Dissolved Gas Analysis

Authors: Kharisma Utomo Mulyodinoto, Suwarno, Ahmed Abu-Siada

Abstract:

Transformer insulating oil is a key component that can be utilized to detect incipient faults within operating transformers without taking them out of service. Dissolved gas-in-oil analysis (DGA) has been widely accepted as a powerful technique to detect such incipient faults. While the measurement of dissolved gases within transformer oil samples has been standardized over the past two decades, analysis of the results is not always straightforward, as it depends on personnel expertise more than on mathematical formulas. In analyzing such data, the generation rate of each dissolved gas is of more concern than its absolute value; the history of dissolved gases within a particular transformer should therefore be archived for future comparison, and lack of such history may lead to misinterpretation of the results. The IEEE C57.104-2008 standard classifies the health condition of a transformer into four conditions based on the absolute values of individual dissolved gases along with the total dissolved combustible gas (TDCG) within the transformer oil. While this technique is easy to implement, it is considered very conservative and is not widely accepted as a reliable interpretation tool. Moreover, measured gases for the same oil sample can fall within the limits of different conditions, so misinterpretation of the data is to be expected. To overcome this limitation, this paper introduces a fuzzy logic approach to predict the health condition of transformer oil based on the IEEE C57.104-2008 standard along with the Rogers ratio and IEC ratio methods. DGA results of 31 oil samples, chosen from 469 samples of normal and pre-known fault-type transformers collected from the Indonesian electrical utility PT. PLN (Persero), with different voltage ratings (500/150 kV, 150/20 kV and 70/20 kV), capacities (500 MVA, 60 MVA, 50 MVA, 30 MVA, 20 MVA, 15 MVA and 10 MVA) and lifespans, are used to test and establish the fuzzy logic model. Results show that the proposed approach is of good accuracy and can be considered a platform toward standardization of the dissolved gas interpretation process.
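A minimal sketch of the fuzzification step such an approach rests on: grading one gas concentration against overlapping condition limits instead of the crisp bins of the standard. The trapezoid shapes and the H2 break-points (100/700/1800 ppm, in the spirit of IEEE C57.104-2008) are illustrative assumptions, not the paper's tuned model.

```python
# Minimal fuzzy-membership sketch for grading one dissolved gas against
# overlapping condition limits. The limits and trapezoid shapes are
# illustrative, not the paper's tuned fuzzy model.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps up to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuzzify_h2(ppm):
    """Fuzzy grades for an H2 reading, softened around 100/700/1800 ppm."""
    return {
        "condition1": trapezoid(ppm, -1, 0, 80, 120),
        "condition2": trapezoid(ppm, 80, 120, 600, 800),
        "condition3": trapezoid(ppm, 600, 800, 1600, 2000),
        "condition4": trapezoid(ppm, 1600, 2000, 1e9, 1e9 + 1),
    }
```

A reading near a limit (say 700 ppm) now belongs partly to two conditions instead of flipping between crisp bins, which is exactly the borderline case the abstract says the crisp standard mishandles.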

Keywords: dissolved gas analysis, fuzzy logic, health index, IEEE C57.104-2008, IEC ratio method, Rogers ratio method

Procedia PDF Downloads 139
1208 Antibacterial and Cytotoxicity Activity of Cinchona Alkaloids

Authors: Alma Ramić, Mirjana Skočibušić, Renata Odžak, Tomica Hrenar, Ines Primožič

Abstract:

In an attempt to identify a new class of antimicrobial agents, the antimicrobial potential of Cinchona alkaloid derivatives was evaluated. The bark of Cinchona trees is the source of a variety of alkaloids, the best known of which are quinine, quinidine, cinchonine and cinchonidine. They are very useful as organocatalysts in stereoselective synthesis; quinine, moreover, is traditionally used in the treatment of malaria, and Cinchona alkaloids possess analgesic, anti-inflammatory and anti-arrhythmic properties as well. In this work we present the synthesis of twenty quaternary derivatives of the pseudo-enantiomeric Cinchona alkaloids and evaluate their antibacterial activity. Quaternization of the quinuclidine moiety was carried out with groups of diverse size. The structures of the compounds were systematically modified to obtain drug-like molecules with suitable physical and chemical properties while avoiding toxophores. All compounds were prepared in good yields and were characterized by standard analytical and spectroscopic methods (1D and 2D NMR, IR, MS). The antibacterial activities of all compounds were evaluated against a series of recent clinical isolates of antibiotic-susceptible Gram-positive and resistant Gram-negative pathogens by determining their zones of inhibition and minimum inhibitory concentrations. All compounds showed good to strong broad-spectrum activity, equivalent to or better than the standard antibiotics used. Furthermore, seven compounds exhibited significant antibacterial efficiency against Gram-negative isolates. To visualize the results, principal component analysis was used as an additional classification tool. The cytotoxicity of the compounds against different human cell lines was determined. Based on these results, the substituted quaternary Cinchona scaffold can be considered a promising new class of antimicrobials, and further investigations should be performed.
Supported by Croatian Science Foundation, Project No 3775 ADESIRE.

Keywords: antibacterial efficiency, cinchona alkaloids, cytotoxicity, pseudo-enantiomers

Procedia PDF Downloads 139
1207 Neonatal Seizure Detection and Severity Identification Using Deep Convolutional Neural Networks

Authors: Biniam Seifu Debelo, Bheema Lingaiah Thamineni, Hanumesh Kumar Dasari, Ahmed Ali Dawud

Abstract:

Background: One of the most frequent neurological conditions in newborns is neonatal seizures, which may indicate severe neurological dysfunction. They may be caused by a broad range of problems with the central nervous system during or after pregnancy, infections, brain injuries and/or other health conditions. These seizures may have very subtle clinical indications, because patterns such as oscillatory (spike) trains begin with relatively low amplitude and gradually increase over time; identifying newborn seizures therefore becomes very challenging and error-prone if clinical observation is the primary basis for diagnosis. Objectives: In this study, a diagnosis system using deep convolutional neural networks (CNNs) is proposed to detect neonatal seizures and classify their severity level using multichannel neonatal EEG data. Methods: Clinical multichannel EEG datasets were compiled from publicly accessible online sources. Various preprocessing steps were taken, including converting the 2D time-series data to equivalent waveform pictures. The proposed models were trained and their performance evaluated. Results: The proposed CNN performed binary classification, used to detect newborn seizures, with an accuracy of 92.6%, F1-score of 92.7%, specificity of 92.8% and precision of 92.6%. Multiclassification, used to classify the severity level of neonatal seizures, achieved an accuracy of 88.6%, specificity of 92.18%, F1-score of 85.61% and precision of 88.9%. The results demonstrate that the suggested strategy can assist medical professionals in making accurate diagnoses close to healthcare institutions. Conclusion: The developed system was capable of detecting neonatal seizures and has the potential to be used as a decision-making tool in resource-limited areas with a scarcity of expert neurologists.
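The binary-classification figures quoted above (accuracy, precision, specificity, F1-score) all derive from a single confusion matrix; a minimal sketch of those derivations, with invented counts rather than the study's data:

```python
# Binary classification metrics from a confusion matrix.
# tp/fp/tn/fn counts below are made up for illustration.

def binary_metrics(tp, fp, tn, fn):
    """Standard metrics for a binary (seizure / no-seizure) classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction correct overall
    precision = tp / (tp + fp)                   # of predicted positives, correct
    recall = tp / (tp + fn)                      # sensitivity
    specificity = tn / (tn + fp)                 # of true negatives, correct
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "specificity": specificity, "f1": f1}

m = binary_metrics(tp=45, fp=5, tn=45, fn=5)
```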

Keywords: CNN, multichannel EEG, neonatal seizure, severity identification

Procedia PDF Downloads 11
1206 Competitive Advantage Challenges in the Apparel Manufacturing Industries of South Africa: Application of Porter’s Factor Conditions

Authors: Sipho Mbatha, Anne Mastament-Mason

Abstract:

South African manufacturing global competitiveness was ranked 22nd (out of 38 countries), dropped to 24th in 2013 and is expected to drop further to 25th by 2018. This impacts negatively on the industrialisation project of South Africa. For industrialisation to be achieved through labour-intensive industries like the apparel manufacturing industries of South Africa (AMISA), South Africa needs to identify and respond to the factors negatively impacting the development of competitive advantage. This paper applied the factor conditions from Porter's Diamond Model (1990) to understand the various challenges facing the AMISA. The factor conditions highlighted in Porter's model fall into two groups, basic and advanced factors. Two AMISA associations representing over 10,000 employees were interviewed, as were the largest Clothing, Textiles and Leather (CTL) apparel retail group and a government department implementing the industrialisation policy. The paper points out that while the AMISA have the basic factor conditions necessary for competitive advantage in the clothing and textiles industries, advanced factor coordination has proven a challenging task for the AMISA, Higher Education Institutions (HEIs) and government. Poor infrastructural maintenance has contributed to high manufacturing costs, and the lack of advanced technologies to poor quick response. The use of Porter's factor conditions as a tool to analyse the sector's competitive advantage challenges and opportunities has increased knowledge of the factors that limit the AMISA's competitiveness. It is therefore argued that studies of the other factors of Porter's Diamond Model (demand conditions; firm strategy, structure and rivalry; and related and supporting industries) can be used to analyse the situation of the AMISA for the purposes of improving competitive advantage.

Keywords: compliance rule, apparel manufacturing industry, factor conditions, advance skills and South African industrial policy

Procedia PDF Downloads 344
1205 Teaching Audiovisual Translation (AVT): Linguistic and Technical Aspects of Different Modes of AVT

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Teachers constantly need to innovate and redefine materials for their lectures, especially in areas such as Language for Specific Purposes (LSP) and Translation Studies (TS). It is therefore essential for lecturers to be technically skilled enough to handle the never-ending evolution of software and technology, which are necessary elements of certain courses at university level. This need becomes even more evident in Audiovisual Translation (AVT) modules and courses. AVT has undergone considerable growth in the teaching and learning of languages for academic purposes: we have witnessed the development of a considerable number of master's and postgraduate courses in which AVT becomes a tool for L2 learning, and the teaching and learning of different AVT modes are components of undergraduate and postgraduate courses. Universities in which AVT is offered as part of their teaching programme or training make use of professional or free software programs. This paper presents an approach to AVT within a specific university context in which technology is used by means of professional and non-professional software. Students take an AVT subject as part of their English Linguistics Master's Degree at the Complutense University (UCM), in which they use professional (Spot) and non-professional (Subtitle Workshop, Aegisub, Windows Movie Maker) software packages. The students are encouraged to develop their tasks and projects simulating authentic professional experiences and contexts in the different AVT modes: subtitling for the hearing and for the deaf and hard-of-hearing populations, audio description, and dubbing. Selected scenes from TV series such as X-Files, Gossip Girl and IT Crowd; extracts from the movies Finding Nemo, Good Will Hunting, School of Rock, Harry Potter and Up; and short movies (Vincent) were used, and the complexity of the audiovisual materials used in class, as well as of the activities for the projects, was graded accordingly.
The assessment of the diverse tasks carried out by the students is expected to provide insights into the best way to improve their linguistic accuracy and oral and written production with the use of different AVT modes in this very specific ESP university context.

Keywords: ESP, audiovisual translation, technology, university teaching, teaching

Procedia PDF Downloads 502
1204 Competition Law as a “Must Have” Course in Legal Education

Authors: Noemia Bessa Vilela, Jose Caramelo Gomes

Abstract:

All law students are familiarized, in the first years of their Bachelor of Laws, with the concepts of "public goods" and "private goods"; often, the legal concept does not exactly match the economic one, and as a consequence some confusion is created. The list of goods that falls under each category is not exhaustive, nor are students given proper mechanisms to acknowledge that some legal fields can, on their own, be considered a "public good"; this is the case of competition. Legal authors consider that "competition law is used to promote public interest" and that, as such, it is a "public good"; in economic theory, competition is the first public good in a market economy, as the enabler of allocative efficiency. Competition law is the legal tool to support the proper functioning of the market economy, and of democracy itself. It is a fact that competition law applies only to economic activities; still, competition is the object of private litigation as well as an integral part of public law. Yet regardless of the importance of competition law in economic activity and market regulation, most students complete their studies in law, join the Bar Associations and engage in their professional activities without ever having been given sufficient tools to deal with the increasing demands of a globalized world. Their lack of knowledge of economics, of market functioning and of the mechanisms at their reach to ensure the proper realization of their duties as lawyers/attorneys-at-law would be tackled if competition law were included in the curricula of law schools. Proper teaching of competition law would combine its foundations, doctrine, case solving and case-law study, and students should understand and be able to apply the analytical model. Special emphasis should be given to EU competition law, namely Articles 101 to 106 TFEU; the Damages Directive should also be part of the curriculum.
Students must in the first place acquire and master the economic rationale, as competition and the world of competition law are the cornerstone of a sound and efficient market. The teaching of competition law in undergraduate programs in law would help fulfill the potential of students who will deal with matters related to consumer protection and to economic and commercial law, both in private practice and as in-house lawyers for companies.

Keywords: higher education, competition law, legal education, law, market economy, industrial economics

Procedia PDF Downloads 128
1203 Mobile Technology as a Catalyst for Creative Teaching: A Developmental Based Research Study in a Large Public School in Mozambique

Authors: L. O'Sullivan, C. Murphy

Abstract:

This study examined the impact, if any, of mobile technology on the achievement of United Nations Sustainable Development Goal 4: Quality Education for All. It focused specifically on teachers and their practice in a school with large class sizes and limited teaching resources. Third-grade teachers in a large public school in Mozambique were provided with an iPad connected to a projector, powered by a mobile solar panel. Teachers also participated in ten days of professional development workshops over thirteen months. Teacher discussions, micro-teaching sessions and classes in the school were video-recorded, and data were triangulated using surveys and additional documents including class plans, digital artifacts created by teachers, workshop notes and researcher field notes. The catalyst for the teachers' creativity development was using the photographic capabilities of the iPad to capture the local context and make lessons relevant to the lived experience of the students. In the transition stage, teachers worked with lesson plans and support from the professional development workshops to make small incremental changes to their practice, which scaffolded their growing competence in the creative use of the technology as a tool for teaching and for developing new teaching resources. Over the full period of the study, these small changes resulted in a cultural shift in how teachers approached all lessons, even those in which they were not using the technology, and the teachers developed into a community of practice. The digital lessons created were re-used and further developed by other teachers, providing a relevant and valuable bank of content in a context lacking books and other teaching resources. This study demonstrated that mobile technology proved a successful catalyst for creative teaching practice in this context, and it supports the Quality Education for All Sustainable Development Goal.

Keywords: mobile technology, creative teaching, sub-Saharan Africa, quality education for all

Procedia PDF Downloads 100
1202 Pupil Size: A Measure of Identification Memory in Target Present Lineups

Authors: Camilla Elphick, Graham Hole, Samuel Hutton, Graham Pike

Abstract:

Pupil size has been found to change irrespective of luminosity, suggesting that it can be used to make inferences about cognitive processes such as cognitive load. To see whether identifying a target imposes a different cognitive load than rejecting distractors, the effect on pupil size of viewing a target (compared with viewing distractors) was investigated using a sequential video lineup procedure with two lineup sessions. Forty-one participants were recruited through the university. Pupil sizes recorded when viewing pre-target and post-target distractors were compared with pupil size when viewing the target. Overall, pupil size was significantly larger when viewing the target than when viewing distractors. In the first session, pupil size changes differed significantly between participants who identified the target (Hits) and those who did not. Specifically, the pupil size of Hits reduced significantly after viewing the target (by 26%), suggesting that cognitive load reduced following identification. The pupil sizes of Misses (who made no identification) and False Alarms (who misidentified a distractor) did not reduce, suggesting that cognitive load remained high in participants who failed to make the correct identification. In the second session, pupil sizes were smaller overall, suggesting that cognitive load was lower in this session, and there was no significant difference between Hits, Misses and False Alarms. Furthermore, while the frequency of Hits increased, so did that of False Alarms. These two findings leave the benefits of including a second session uncertain, as it provided neither greater accuracy nor a reliable way to measure it. It is concluded that pupil size is a measure of face recognition strength in the first session of a target-present lineup procedure.
However, it is still not known whether cognitive load adequately explains this effect, or whether cognitive engagement might describe it more appropriately. If cognitive load and cognitive engagement can be teased apart with further investigation, this would have positive implications for understanding eyewitness identification. Nevertheless, this research has the potential to provide a tool for improving the reliability of lineup procedures.
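The percent-change measure reported above (e.g. the 26% reduction in pupil size for Hits after viewing the target) is a signed change from a baseline; a minimal sketch, in which the pupil values are invented for illustration:

```python
# Signed percent change of a measurement from its baseline.
# The pupil sizes below are hypothetical, not the study's data.

def percent_change(before, after):
    """Percent change from a baseline measurement (negative = reduction)."""
    return 100.0 * (after - before) / before

# Hypothetical mean pupil sizes (arbitrary units) before/after the target.
hits_before, hits_after = 5.0, 3.7
change = percent_change(hits_before, hits_after)  # a 26% reduction
```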

Keywords: cognitive load, eyewitness identification, face recognition, pupillometry

Procedia PDF Downloads 383
1201 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach

Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic

Abstract:

The epigenome is a compelling candidate for mediating long-term responses to environmental exposures that modify disease risk. The main goal of this research is to develop a machine-learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool for validating self-reports of non-smoking and for adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child's mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing our model, the best-performing algorithm for classifying the father-smoker, mother-never-smoker group was selected based on Cohen's κ. Error in the model was identified and optimized, and the final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination, via a logistic link function, of the methylation values of the selected probes that accurately classified each group and contributed the most towards classification: a unique, robust DNA methylation score that combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy, and that may be used to examine the paternal contribution to offspring health outcomes.
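A sketch of the score's general form (a logistic link over a weighted sum of probe methylation values) and of the Cohen's κ statistic used for model selection. The probe names, weights and labels below are hypothetical, not those of the study.

```python
# Logistic-link methylation score plus Cohen's kappa for model selection.
# Probe IDs, weights, intercept and label lists are invented for illustration.
import math

def methylation_score(betas, weights, intercept):
    """Logistic link over a weighted sum of per-probe methylation values."""
    z = intercept + sum(w * betas[probe] for probe, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

def cohens_kappa(y_true, y_pred):
    """Agreement between two label sequences, corrected for chance."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_expected = sum((y_true.count(l) / n) * (y_pred.count(l) / n)
                     for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

weights = {"cg0001": 2.0, "cg0002": -1.5}   # hypothetical probe weights
score = methylation_score({"cg0001": 0.8, "cg0002": 0.4}, weights, -0.5)
```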

Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning

Procedia PDF Downloads 172
1200 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses

Authors: Awais Ahmed, Javed Iqbal

Abstract:

Pakistan has for many years been facing extreme challenges from an energy deficit due to a shortage of power generation compared to increasing demand. A part of this deficit is contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses; according to estimates, total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objective of developing a standalone GIS application for distribution companies capable of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder of the Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, linking it to the secondary pole of its transformer, geo-referencing equipment and documenting conductor sizes. To identify overloaded transformers, the accumulated kWh readings of the consumers on a transformer were compared with a threshold kWh. Technical losses of the 11 kV and 220 V lines were calculated using data from the substation and the resistance of the network derived from the geo-database. To automate the process, a standalone GIS application with engineering analysis capabilities was developed using ArcObjects. The application uses the GIS database developed for the 11 kV and 220 V lines to display and query spatial data and presents results in the form of graphs. The results show a technical loss of about 14% on both the high-tension (HT) and low-tension (LT) networks, while 4 out of 15 general-duty transformers were found to be overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution network.
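The two computations the abstract automates can be sketched as follows: flagging an overloaded transformer by comparing accumulated consumer kWh against a threshold, and estimating technical line loss from conductor resistance. The three-phase I²R formula and all numbers are illustrative assumptions, not FESCO data.

```python
# Overload flagging and technical (I^2 * R) loss estimation.
# Thresholds, currents and resistances below are invented for illustration.

def is_overloaded(consumer_kwh, threshold_kwh):
    """Flag a transformer whose summed consumer energy exceeds its threshold."""
    return sum(consumer_kwh) > threshold_kwh

def line_loss_kw(current_a, resistance_ohm):
    """Three-phase I^2*R technical loss of a line section, in kW."""
    return 3 * current_a ** 2 * resistance_ohm / 1000.0

# Hypothetical transformer: three consumers' accumulated kWh vs. a threshold.
overloaded = is_overloaded([120, 90, 110], threshold_kwh=300)

# Hypothetical 11 kV section: 100 A phase current, 0.5 ohm per conductor.
loss = line_loss_kw(100, 0.5)
```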

Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system

Procedia PDF Downloads 358
1199 Multilayered Assembly of Gelatin on Nanofibrous Matrix for 3-D Cell Cultivation

Authors: Ji Un Shin, Wei Mao, Hyuk Sang Yoo

Abstract:

Electrospinning is a versatile tool for fabricating nano-structured polymeric materials. Gelatin hydrogels are considered a good material for cell cultivation because of their high water-swellability as well as good biocompatibility. Three-dimensional (3-D) cell cultivation is a desirable method for preparing tissues and organs, because cell-to-cell and cell-to-matrix interactions are much enhanced through this approach. For this reason, hydrogels have been widely employed as tissue scaffolds, as they can support cultivated cells and tissue in multiple dimensions. Major disadvantages of hydrogel-based cell cultivation include low mechanical properties and a lack of topography, both of which must be addressed for successful tissue engineering. Herein, we surface-immobilized gelatin on a nanofibrous matrix for 3-D cell cultivation in an environment enriched with topographical cues. Nanofibers were electrospun by injecting poly(caprolactone) through a single-nozzle syringe. The electrospun meshes were then ground into fine powders with a high-speed grinder. These were hydrolyzed in an optimized concentration of sodium hydroxide solution for 1 to 6 hours and harvested by centrifugation. The freeze-dried powders were examined by scanning electron microscopy (SEM) to reveal their morphology, and fibrillar shapes with lengths of ca. 20 µm were observed. The powders were subsequently immersed in gelatin solution for surface coating, with the process repeated up to 10 times to obtain the desired gelatin coverage. Gelatin-coated nanofibrils showed high water-swellability in comparison to the unmodified nanofibrils, which enabled good dispersion of the modified nanofibrils in the aqueous phase. The degree of water-swellability increased with the number of gelatin coatings; however, no meaningful improvement was observed beyond 10 coating cycles. Thus, by adjusting the number of gelatin coating cycles, we could successfully control the degree of hydrophilicity and water-swellability of the nanofibrils.

Keywords: nano, fiber, cell, tissue

Procedia PDF Downloads 153
1198 Technology Roadmapping in Defense Industry

Authors: Sevgi Özlem Bulu, Arif Furkan Mendi, Tolga Erol, İzzet Gökhan Özbilgin

Abstract:

The rapid progress of technology under today's competitive conditions has accelerated companies' technology development activities. As a result, companies are paying more attention to R&D and allocating it a larger share of their budgets. A more systematic, comprehensive, target-oriented implementation of R&D is crucial for a company to achieve successful results. Consequently, the Technology Roadmap (TRM) is gaining importance as a management tool. It offers critical prospects for achieving medium- and long-term success, as it captures decisions about past business, future plans, and technological infrastructure. In the TRM literature, projects are selected for the roadmap by many different methods, most commonly based on multi-criteria decision-making. Managing the selected projects becomes the important task after the selection phase, and it is at this stage that TRMs are used. A TRM can be created in many different ways, so each institution can prepare its own Technology Roadmap according to its strategic plan; depending on the intended use, TRMs with different layers and sizes are possible. In the evaluation phase of R&D projects and in the creation of the TRM, HAVELSAN, Turkey's largest defense company in the software field, carries out this process with great care and diligence. First, proposed R&D projects are evaluated by HAVELSAN's Technology Management Board (TMB) in accordance with the company's resources, objectives, and targets. These projects are presented to the TMB periodically for evaluation against defined criteria by board members. Once the necessary steps have been passed, the approved projects are added to a time-based TRM composed of four layers: market, product, project, and technology. The use of a four-layered roadmap provides a clearer understanding and visualization of company strategy and objectives. This study demonstrates the benefits of using TRM and four-layered Technology Roadmapping, and the possibilities they offer to institutions in the defense industry.
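The multi-criteria project selection mentioned above is often implemented, in its simplest form, as a weighted-sum score over normalized criterion ratings. A minimal sketch of that approach (the criteria names, weights, and projects are hypothetical examples, not HAVELSAN's actual evaluation scheme):

```python
# Weighted-sum multi-criteria scoring for R&D project selection.
# Criteria, weights, and ratings are hypothetical, not the company's own.

criteria_weights = {"strategic_fit": 0.4, "technical_feasibility": 0.35, "cost": 0.25}

def project_score(ratings: dict) -> float:
    """Aggregate normalized criterion ratings (0..1) into a single score."""
    return sum(criteria_weights[c] * ratings[c] for c in criteria_weights)

projects = {
    "Project A": {"strategic_fit": 0.9, "technical_feasibility": 0.6, "cost": 0.7},
    "Project B": {"strategic_fit": 0.5, "technical_feasibility": 0.9, "cost": 0.8},
}

# Rank projects by score, highest first, as candidates for the roadmap.
ranking = sorted(projects, key=lambda p: project_score(projects[p]), reverse=True)
```

More elaborate multi-criteria methods (e.g., AHP or TOPSIS) refine how the weights and ratings are derived, but the aggregation step follows the same pattern.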

Keywords: technology roadmap, research and development project, project selection, research development in defense industry

Procedia PDF Downloads 164
1197 Strengths and Challenges to Embrace Attention Deficit/Hyperactivity Disorder (ADHD) in Employment: A Systematic Review

Authors: Adèle Hotte-Meunier, Lisa Sarraf, Alan Bougeard, Félicia Bernier, Chloé Voyer, Jiaxuan Deng, Stéphanie El Asmar, Alina Stamate, Marc Corbière, Patrizia Villotti, Geneviève Sauvé

Abstract:

Background: Attention-Deficit/Hyperactivity Disorder (ADHD) is characterized by a persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with psychosocial, educational, and occupational functioning. Although often conceptualized as a developmental disorder of childhood, 65% of children with ADHD continue to meet full or partial diagnostic criteria in adulthood, and an estimated 4% of the workforce has a diagnosis of ADHD. Methods: A systematic review was conducted to understand the experiences of people living with ADHD in the workplace. Articles reporting employment outcomes for people living with ADHD were identified by searching eight databases on four separate occasions between June 27, 2022, and June 21, 2023. A risk-of-bias assessment for each study was performed using the Mixed Methods Appraisal Tool (MMAT). Results: A total of 79 studies were included in this systematic review (n_ADHD = 68,216). Results were synthesized into three broad overarching categories: challenges, strengths, and adaptations at work. Nine themes were identified: ADHD symptoms at work, workplace performance, job satisfaction, interpersonal relationships at work, maladaptive work thoughts and behaviors, personal strengths, embracing ADHD, person-environment fit, and accommodations and support. Sex differences were highlighted as a tenth subtheme. ADHD confers both strengths and limitations related to employment. Discussion: Workers with ADHD can not only adapt but thrive in employment given the right person-environment fit, accommodations, and support. Many challenges related to ADHD can be managed or reframed as assets in a workplace environment that fosters acceptance, flexible working practices, and openness to neurodiversity.

Keywords: neurodivergence, occupation, workplace, person-environment fit

Procedia PDF Downloads 66
1196 The Effects of Cardiovascular Risk on Age-Related Cognitive Decline in Healthy Older Adults

Authors: A. Badran, M. Hollocks, H. Markus

Abstract:

Background: Common risk factors for cardiovascular disease are associated with age-related cognitive decline. There has been much interest in treating modifiable cardiovascular risk factors in the hope of reducing cognitive decline. However, there is currently no validated neuropsychological test for assessing the subclinical cognitive effects of vascular risk. The Brief Memory and Executive Test (BMET) is a clinical screening tool originally designed to be sensitive and specific to Vascular Cognitive Impairment (VCI), an impairment characterised by decline in frontally mediated cognitive functions (e.g., executive function and processing speed). Objective: To cross-sectionally assess the validity of the BMET as a measure of the subclinical effects of vascular risk on cognition in an otherwise healthy elderly cohort. Methods: Data from 346 participants (57 ± 10 years) without major neurological or psychiatric disorders were included in this study, gathered as part of a previous multicentre validation study of the BMET. Framingham Vascular Age was used as a surrogate measure of vascular risk, incorporating several established risk factors. Principal Components Analysis of the subtests was used to produce common constructs: an index for Memory and another for Executive Function/Processing Speed. Univariate general linear models were used to relate Vascular Age to performance on the Executive Function/Processing Speed and Memory subtests of the BMET, adjusting for age, premorbid intelligence, and ethnicity. Results: Adverse vascular risk was associated with poorer performance on both the Memory and Executive Function/Processing Speed indices, adjusted for age, premorbid intelligence, and ethnicity (p=0.011 and p<0.001, respectively). Conclusions: Performance on the BMET reflects the subclinical effects of vascular risk on cognition in age-related cognitive decline. Vascular risk is associated with decline in both the Executive Function/Processing Speed and Memory groups of subtests. Future studies are needed to explore whether treating vascular risk factors can effectively reduce age-related cognitive decline.
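The analysis pipeline described in the abstract, principal components to build a composite index from subtest scores, then a linear model relating that index to vascular risk while adjusting for covariates, can be sketched with NumPy alone. The data below are simulated stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 346  # matches the study's sample size; everything else is simulated

# Simulated stand-ins: a vascular-risk surrogate, two covariates
# (e.g., premorbid IQ and age), and four correlated subtest scores.
vascular_age = rng.normal(57, 10, n)
covariates = rng.normal(size=(n, 2))
subtests = rng.normal(size=(n, 4)) - 0.02 * vascular_age[:, None]

# PCA via SVD on standardized subtest scores; PC1 serves as a composite index.
z = (subtests - subtests.mean(0)) / subtests.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]

# Linear model: index ~ intercept + vascular_age + covariates.
X = np.column_stack([np.ones(n), vascular_age, covariates])
beta, *_ = np.linalg.lstsq(X, index, rcond=None)
# beta[1] estimates the covariate-adjusted association with vascular risk.
```

In practice one would also report inference (standard errors, p-values) for beta[1], e.g. via a statistics package, which this sketch omits.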

Keywords: age-related cognitive decline, vascular cognitive impairment, subclinical cerebrovascular disease, cognitive aging

Procedia PDF Downloads 449
1195 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder

Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo

Abstract:

Uncertainty analysis in industrial computed tomography is commonly tied to metrological reference tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements that would profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominated by photon energy dissipated as heat, which causes thermal expansion of the metal, as monitored by an infrared camera inside the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error through energy absorption. The error magnitude depends on the thermal properties of the sample and on the acquisition parameters, which introduce apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a metal part of symmetric geometry. We consider the current results of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration in precision measurements with industrial tomography.
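The scale of the heat-induced dimensional error can be estimated from the linear thermal expansion relation ΔL = α·L₀·ΔT. A minimal sketch (the expansion coefficient and cylinder dimensions are typical textbook values for austenitic stainless steel, not the study's measurements):

```python
# Linear thermal expansion: delta_L = alpha * L0 * delta_T.
# alpha ~17e-6 1/K is a typical value for austenitic stainless steel;
# the length and temperature rise below are illustrative only.

def expansion_um(length_mm: float, delta_t_kelvin: float,
                 alpha_per_k: float = 17e-6) -> float:
    """Length change in micrometres for a given temperature rise."""
    return alpha_per_k * length_mm * delta_t_kelvin * 1000  # mm -> um

# A 50 mm cylinder warming by 2 K over a long scan grows by ~1.7 um,
# already on the order of typical industrial CT measurement uncertainty.
growth = expansion_um(length_mm=50, delta_t_kelvin=2)
```

This back-of-envelope figure illustrates why even a few kelvin of X-ray-induced heating matters at micrometre-level metrology tolerances.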

Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion

Procedia PDF Downloads 103
1194 Rapid Formation of Ortho-Boronoimines and Derivatives for Reversible and Dynamic Bioconjugation Under Physiological Conditions

Authors: Nicholas C. Rose, Christopher D. Spicer

Abstract:

The regeneration of damaged or diseased tissues would provide an invaluable therapeutic tool in biological research and medicine. Cells must be provided with a number of different biochemical signals in order to form mature tissue, through complex signaling networks that are difficult to recreate in synthetic materials. The ability to attach and detach bioactive proteins from a material in an iterative and dynamic manner would therefore present a powerful way to mimic natural biochemical signaling cascades for tissue growth. We propose to reversibly attach these bioactive proteins using ortho-boronoimine (oBI) linkages and related derivatives, formed by the reaction of an ortho-boronobenzaldehyde with a nucleophilic amine derivative. To enable the use of oBIs for biomaterial modification, we have studied the binding and cleavage processes in precise detail in the context of small-molecule models. A panel of oBI complexes has been synthesized and screened with a novel Förster resonance energy transfer (FRET) assay based on a cyanine dye FRET pair (Cy3 and Cy5), to identify the most reactive boron-aldehyde/amine nucleophile pairs. Upon conjugation of the dyes, FRET occurs under Cy3 excitation, and the resultant ratio of Cy3:Cy5 emission directly correlates with conversion. Reaction kinetics and equilibria can be accurately quantified for reactive pairs, with dissociation constants (KD) of oBI derivatives in water spanning nine orders of magnitude (10⁻² to 10⁻¹¹ M). These studies have provided a better understanding of oBI linkages, which we hope to exploit to reversibly attach bioconjugates to materials. The long-term aim of the project is to develop a modular biomaterial platform to help combat chronic diseases such as osteoarthritis, heart disease, and chronic wounds by providing cells with potent biological stimuli for tissue engineering.

Keywords: dynamic, bioconjugation, boronoimine, rapid, physiological

Procedia PDF Downloads 80
1193 The Moderating Role of Perceived University Environment in the Formation of Entrepreneurial Intention among Creative Industries Students

Authors: Patrick Ebong Ebewo

Abstract:

The trend of high unemployment globally is a growing concern, and suggests that university students, especially those studying the creative industries, are likely to face unemployment upon completion of their studies. The effort of universities in fostering entrepreneurial knowledge is therefore as important as the development of students' soft skills. The purpose of this paper is to assess the significance of the perceived university environment and perceived educational support in influencing university students' intentions to start their own businesses in the future, attempting to answer the question: 'How does the perceived university environment affect students' attitudes towards entrepreneurship as a career option, perceived entrepreneurial abilities, subjective norms, and entrepreneurial intentions?' The study is based on the Theory of Planned Behaviour model, adapted from previous studies and empirically tested on graduates at the Tshwane University of Technology. A sample of 150 Arts and Design graduates took part in the study, and the data collected were analysed using structural equation modelling (SEM). Our findings suggest an indirect impact of the perceived university environment on entrepreneurial intention, mediated through perceived environmental support and perceived entrepreneurial abilities. Thus, any improvement in the perceived university environment might influence students to become entrepreneurs. Based on these results, it is recommended that: (a) Tshwane University of Technology and other universities of technology establish an 'Entrepreneurship Internship Programme' as a tool for stimulating work-integrated learning; post-graduation intervention could be implemented through a 'Graduate Entrepreneurship Programme' embedded in the Bachelor of Technology (B-Tech, now Advanced Diploma) and postgraduate courses; (b) policymakers consider developing a coherent national policy framework that addresses entrepreneurship in the arts/creative industries sector. This would create an enabling environment for the evolution of higher education institutions from merely teaching, learning, and research to becoming drivers of creative entrepreneurship.

Keywords: business venture, entrepreneurship education, entrepreneurial intent, university environment

Procedia PDF Downloads 318
1192 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments

Authors: Ana Londral, Burcu Demiray, Marcus Cheetham

Abstract:

Speech recording is a methodology used in many studies in cognitive and behavioural research. Modern advances in digital equipment have made it possible to continuously record hours of speech in naturalistic environments and to build rich sets of sound files. Speech analysis can then extract from these files multiple features for different lines of research in language and communication. However, tools for analysing large sets of sound files and automatically extracting relevant features are often inaccessible to researchers unfamiliar with programming languages. Manual analysis is a common alternative, at a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e., detecting and labelling the segments that contain speech. We present a comprehensive methodology to support researchers in voice segmentation as the first step in the data analysis of a large set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features over a folder structure containing a large number of sound files. We validated our methodology with a set of 5000 sound files collected in the daily life of a group of voluntary participants aged over 65. A smartphone was used to collect sound with the Electronically Activated Recorder (EAR): an app programmed to record 30-second sound samples randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster than a manual analysis performed by two independent coders. Furthermore, the methodology allows manual adjustment of voiced segments with visualisation of the sound signal, and the automatic extraction of quantitative information on speech. In conclusion, we propose a comprehensive methodology for voice segmentation for researchers who must work with large sets of sound files and are not familiar with programming tools.
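The core idea behind automatic voice segmentation, labelling contiguous runs of frames whose energy exceeds a threshold, can be sketched in a few lines. This is a deliberately crude short-time-energy detector with a fixed, hypothetical RMS threshold; Praat's voice detection is more sophisticated (pitch- and intensity-based), so treat this only as an illustration of the frame-labelling step:

```python
import math

def voiced_segments(samples, rate, frame_ms=30, rms_threshold=500):
    """Label frames as voiced/unvoiced by short-time energy and merge runs.

    Returns (start_sec, end_sec) tuples for contiguous high-energy regions.
    The fixed RMS threshold is a simplification for illustration only.
    """
    frame_len = max(1, int(rate * frame_ms / 1000))
    segments, start = [], None
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        if rms >= rms_threshold and start is None:
            start = i / rate                      # segment opens
        elif rms < rms_threshold and start is not None:
            segments.append((start, i / rate))    # segment closes
            start = None
    if start is not None:
        segments.append((start, len(samples) / rate))
    return segments

# Example: 1 s of silence, 1 s of a loud 200 Hz tone, 1 s of silence at 8 kHz.
rate = 8000
tone = [int(10000 * math.sin(2 * math.pi * 200 * t / rate)) for t in range(rate)]
audio = [0] * rate + tone + [0] * rate
segs = voiced_segments(audio, rate)  # roughly one segment near (1.0, 2.0)
```

In the methodology described above, the equivalent detection runs inside Praat, which also writes the segment labels (e.g., as TextGrid annotations) for later manual adjustment.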

Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation

Procedia PDF Downloads 268
1191 TiO₂ Deactivation Process during Photocatalytic Ethanol Degradation in the Gas Phase

Authors: W. El-Alami, J. Araña, O. González Díaz, J. M. Doña Rodríguez

Abstract:

The efficiency of the semiconductor TiO₂ needs to be improved for it to be an effective tool for pollutant removal. To improve the efficiency of this semiconductor, it is necessary to deepen our knowledge of the processes that take place on its surface. In this sense, deactivation of the catalyst is one of the relevant aspects. To study this point, the deactivation of TiO₂ during the gas-phase degradation of ethanol was investigated. Catalysts with only the anatase phase (SA and PC100) and catalysts with both anatase and rutile phases (P25 and P90) were selected. To force the deactivation processes, different cycles were performed, adding ethanol gas while avoiding the degradation of acetates, in order to determine their effect on the process. The surface concentration of fluorine on the catalysts was semi-quantitatively determined by EDAX analysis. The photocatalytic experiments were performed with four commercial catalysts (P25, SA, P90, and PC100) and the two fluorinated catalysts indicated above. The interaction and photocatalytic degradation of ethanol were followed by Fourier transform infrared spectroscopy (FTIR). EDAX analysis revealed the presence of sodium on the surface of the fluorinated catalysts. The FTIR studies showed that acetates adsorbed on the anatase phase of P25 and P90 give rise to electron transfer to surface traps, modifying the electronic states of the semiconductor. These deactivation studies were also carried out with the fluorinated P25 and SA catalysts (F-P25 and F-SA), in which similar electron transfers, but in the opposite direction, were observed during illumination. In these materials, the electrons present in the surface traps, as a consequence of the Ti-F interaction, react with the holes, causing a change in the electronic states of the semiconductor. In this way, deactivated states of these materials were detected via different electron-transfer routes. The acetates produced from the degradation of ethanol on P25 and P90 are probably hydrated on the surface of the rutile phase. In the catalysts with only the anatase phase (SA and PC100), deactivation is immediate if the acetates are not removed before ethanol is adsorbed again. In F-P25 and F-SA, the acetates formed were observed to react with the sodium ions present on the surface rather than with the Ti atoms, because the latter are interacting with the fluorine.

Keywords: photocatalytic degradation, ethanol, TiO₂, deactivation process, F-P25

Procedia PDF Downloads 58
1190 Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia

Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay

Abstract:

Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option for maximizing crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires repeated trials over many years under different weather conditions and crop management. To avoid such long-term experimentation, crop models such as AquaCrop are useful. Therefore, the overall objective of this study was to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons with four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model. The AquaCrop model was validated for its performance in simulating the green canopy and above-ground biomass of maize for the varying sowing dates, based on the calibrated parameters. The results showed good agreement between measured and simulated values of canopy cover and biomass yields, as indicated by the overall R², Ef, d, and RMSE statistics. Considering the overall values of these statistical indicators, the model's prediction of maize growth and biomass yield was successful, making it a valuable tool to support decision-making. Hence, this calibrated and validated model is suggested for determining the optimum maize sowing date under climate and soil conditions similar to the study area, instead of conducting long-term experimentation.
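The goodness-of-fit statistics named above, RMSE, the Nash-Sutcliffe model efficiency (Ef), and Willmott's index of agreement (d), are standard for crop-model validation and are computed from paired observed/simulated series. A minimal sketch with made-up canopy-cover values, not data from the study:

```python
import math

def fit_statistics(obs, sim):
    """RMSE, Nash-Sutcliffe efficiency (Ef), and Willmott's agreement index (d)."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    rmse = math.sqrt(sq_err / n)
    # Ef = 1 - sum((obs - sim)^2) / sum((obs - mean_obs)^2); 1.0 = perfect fit.
    ef = 1 - sq_err / sum((o - mean_obs) ** 2 for o in obs)
    # Willmott's d = 1 - sum((obs - sim)^2) / sum((|sim - mean| + |obs - mean|)^2).
    d = 1 - sq_err / sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2
                         for o, s in zip(obs, sim))
    return rmse, ef, d

# Illustrative canopy-cover values (%), not measurements from the study.
observed  = [10, 35, 60, 85, 90]
simulated = [12, 33, 58, 88, 87]
rmse, ef, d = fit_statistics(observed, simulated)
```

Values of Ef and d close to 1 with a small RMSE correspond to the "good agreement" conclusion reported in the abstract.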

Keywords: AquaCrop model, calibration, validation, simulation

Procedia PDF Downloads 41
1189 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses, and industries transform the way they do things. AI is used to automate tasks, improving decision-making and efficiency; it is embedded in sensors and used in automation to save time and eliminate human errors in repetitive tasks. Today, we see AI drawing on vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can help streamline government processes and deliver more seamless, intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and the impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are attracted to the ease of use and accessibility of AI solutions that require no specialised technical skills, but this increased accessibility also brings higher risk and exposure for the health and safety of citizens. On the other side, public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps: "There is an AI for That" in government. Other challenges include the apparent absence of legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, and regulation and legislation frameworks, will need to be re-evaluated against AI's unique challenges, given its rapidly evolving nature, the ethical considerations involved, and heightened regulatory scrutiny affecting consumer safety and increasing risks for government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage; if such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several forward-looking and agile AI regulatory approaches for consideration that simultaneously foster innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 51
1188 Exploring Smartphone Applications for Enhancing Second Language Vocabulary Learning

Authors: Abdulmajeed Almansour

Abstract:

Learning a foreign language with the assistance of technological tools has become an interest of learners and educators. The increased use of smartphones among undergraduate students has made them popular not only for social communication but also for entertainment and educational purposes. Smartphones provide remarkable advantages in the language learning process. Learning vocabulary is an important part of learning a language, and the use of smartphone applications for English vocabulary learning gives learners an opportunity to improve their vocabulary knowledge beyond the classroom walls, anytime and anywhere. Recently, various smartphone applications have been created specifically for vocabulary learning. This paper explores the use of the Memrise smartphone application, designed for vocabulary learning, to enhance academic vocabulary among undergraduate students. It examines whether a course designed around the Memrise application enhances academic vocabulary learning among ESL learners. The research paradigm followed a mixed-methods model combining quantitative and qualitative research. The study included two hundred undergraduate students randomly assigned to experimental and control groups during the first academic year at the Faculty of English Language, Imam University. The research instruments included an attitudinal questionnaire and an English vocabulary pre-test administered at the beginning of the semester, with a post-test and semi-structured interviews administered at the end of the semester. The findings of the attitudinal questionnaire revealed a positive attitude towards using smartphones for vocabulary learning. The post-test scores showed a significant difference in the experimental group's performance, and the semi-structured interviews likewise showed positive attitudes towards the Memrise application: students found it an enjoyable, convenient, and efficient learning tool. From the study, the use of the Memrise application is seen to have long-term motivational benefits for students. For this reason, further research is needed to identify the optimal long-term effects of learning a language using smartphone applications.

Keywords: second language vocabulary learning, academic vocabulary, mobile learning technologies, smartphone applications

Procedia PDF Downloads 143
1187 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic test with continuous or ordinal scale results which aims to predict the presence or absence probability of a condition, usually a disease. When the test results were measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased and non-diseased. There are infinite numbers of possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and the 1-specificity as the threshold changes. The empirical ROC curve which is a non-parametric estimator of the ROC curve is robust and it represents data accurately. However, especially for small sample sizes, it has a problem of variability and as it is a step function there can be different false positive rates for a true positive rate value and vice versa. Besides, the estimated ROC curve being in a jagged form, since the true ROC curve is a smooth curve, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored to smooth a ROC curve. These include using kernel estimates, using log-concave densities, to fit parameters for the specified density function to the data with the maximum-likelihood fitting of univariate distributions or to create a probability distribution by fitting the specified distribution to the data nd using smooth versions of the empirical distribution functions. In the present paper, we aimed to propose a smooth ROC curve estimation based on the boundary corrected kernel function and to compare the performances of ROC curve smoothing methods for the diagnostic test results coming from different distributions in different sample sizes. 
We performed a simulation study with 1000 repetitions to compare the performance of the methods across these scenarios. The proposed method typically outperformed the empirical ROC curve and was only slightly worse than the binormal model when the underlying samples were in fact generated from a normal distribution.
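To make the contrast between the jagged empirical curve and a kernel-smoothed one concrete, the following is a minimal sketch of the standard Gaussian-kernel ROC estimator (smoothing each group's distribution function, not the boundary-corrected variant proposed in the paper). The simulated data, the rule-of-thumb bandwidth, and the grid size are all illustrative assumptions, not values from the study.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 100)   # simulated non-diseased test results
diseased = rng.normal(1.0, 1.0, 100)  # simulated diseased test results

# Standard normal CDF, vectorized over arrays.
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / 2 ** 0.5)))

def smooth_roc(healthy, diseased, n_grid=201):
    """Kernel-smoothed ROC: estimate each group's CDF by averaging Gaussian
    kernel CDFs (Silverman's rule-of-thumb bandwidth), then pair the
    false positive rate with the true positive rate over a threshold grid."""
    def bandwidth(x):
        return 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)
    h0, h1 = bandwidth(healthy), bandwidth(diseased)
    pad = 3 * max(h0, h1)
    grid = np.linspace(min(healthy.min(), diseased.min()) - pad,
                       max(healthy.max(), diseased.max()) + pad, n_grid)
    # 1 - F_hat(threshold) for each group gives FPR and TPR.
    fpr = 1.0 - Phi((grid[:, None] - healthy[None, :]) / h0).mean(axis=1)
    tpr = 1.0 - Phi((grid[:, None] - diseased[None, :]) / h1).mean(axis=1)
    return fpr, tpr

fpr, tpr = smooth_roc(healthy, diseased)
# Trapezoid-rule AUC; fpr decreases along the grid, hence the sign flip.
auc = -np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
```

Unlike the empirical step function, this estimator is strictly monotone in the threshold, so each true positive rate maps to a single false positive rate; a boundary-corrected kernel would additionally adjust the estimate near the ends of the support.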

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 135
1186 Evaluation and Risk Assessment of Heavy Metals Pollution Using Edible Crabs, Based on Food Intended for Human Consumption

Authors: Nayab Kanwal, Noor Us Saher

Abstract:

The management and utilization of food resources is becoming a major issue due to rapid urbanization and the wastage and non-sustainable use of food, especially in developing countries. The use of seafood as an alternative source is therefore strongly promoted worldwide. Marine pollution strongly affects marine organisms, which ultimately decreases their export quality. Monitoring contamination in marine organisms is a good indicator of environmental quality as well as seafood quality, and monitoring the accumulation of chemical elements within various tissues has become a useful tool to survey current or chronic levels of heavy metal exposure within an environment. In this perspective, this study compared previous and current levels (years 2012 and 2014) of heavy metals (Cd, Pb, Cr, Cu and Zn) in crabs marketed in Karachi and estimated the toxicological risk associated with their intake. The accumulation of both essential (Cu and Zn) and toxic (Pb, Cd and Cr) metals in marine organisms, whether of natural or anthropogenic origin, is a genuine food safety issue. Significant (p < 0.05) variations in metal concentrations were found in all crab species between the two years, with most metals showing higher accumulation in 2012. For the toxicological risk assessment, the estimated weekly intake (EWI), target hazard quotient (THQ) and cancer risk (CR) were evaluated; the EWI and non-cancer risk (THQ < 1) indicated no serious threat associated with the consumption of shellfish species from the Karachi coast. The cancer risk assessment, however, indicated the highest risk from Cd and Pb pollution if these species are consumed in excess. We summarize key environmental health research on health effects associated with exposure to contaminated seafood.
It can be concluded that these edible species along the Pakistan coast may be sensitive and vulnerable to the adverse effects of environmental contaminants; more attention should therefore be paid to Pb and Cd bioaccumulation and to the toxicological risks they pose to seafood and its consumers.
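The three risk indices named above follow standard definitions: EWI scales the daily metal intake to a week, THQ divides the estimated daily intake by an oral reference dose, and CR multiplies it by a cancer slope factor. The sketch below shows those formulas with entirely hypothetical inputs; the concentrations, body weight, consumption rate, reference doses and slope factors are placeholders, not values from this study, and authoritative toxicological tables should be consulted for real assessments.

```python
BODY_WEIGHT = 60.0    # kg, assumed adult consumer
DAILY_INTAKE = 0.03   # kg of crab meat per day, assumed

# Metal concentration in edible tissue (mg/kg wet weight), hypothetical.
conc = {"Cd": 0.8, "Pb": 1.2, "Cr": 0.5, "Cu": 15.0, "Zn": 40.0}
# Oral reference doses (mg/kg body weight/day), illustrative only.
rfd = {"Cd": 1e-3, "Pb": 3.5e-3, "Cr": 3e-3, "Cu": 4e-2, "Zn": 3e-1}
# Oral cancer slope factors (per mg/kg body weight/day), illustrative only.
csf = {"Cd": 6.1, "Pb": 8.5e-3}

def edi(metal):
    """Estimated daily intake, mg per kg body weight per day."""
    return conc[metal] * DAILY_INTAKE / BODY_WEIGHT

ewi = {m: 7 * edi(m) for m in conc}        # estimated weekly intake (EWI)
thq = {m: edi(m) / rfd[m] for m in conc}   # target hazard quotient (THQ)
cr = {m: edi(m) * csf[m] for m in csf}     # lifetime cancer risk (CR)

# Interpretation: THQ < 1 indicates no appreciable non-carcinogenic risk;
# a CR between 1e-6 and 1e-4 is the commonly cited acceptable range.
```

With these placeholder inputs every THQ stays below 1, mirroring the study's "no serious non-cancer threat" finding, while the Cd and Pb slope factors drive the computed cancer risk.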

Keywords: cancer risk, edible crabs, heavy metals pollution, risk assessment

Procedia PDF Downloads 356
1185 Robson System Analysis in Kyiv Perinatal Centre

Authors: Victoria Bila, Iryna Ventskivska, Oleksandra Zahorodnia

Abstract:

The goal of the study: to examine the distribution of patients of the Kyiv Perinatal Center according to the Robson system and compare it with world data. Materials and methods: the distribution of patients of the Kyiv Perinatal Center according to the Robson system was compared for two periods, the first quarter of 2019 and the first quarter of 2020. For each group, three indicators were analyzed: the share of the group in the overall structure of patients of the Perinatal Center for the reporting period, the frequency of abdominal delivery in the group, and the contribution of the group to the total number of abdominal deliveries. The data obtained were compared with those of the WHO guidelines for the implementation of the Robson system (2017). Results and discussion: the distribution of the Perinatal Center's patients into Robson groups does not differ much from that recommended by the author. Among all women, patients of group 1 dominate, and this indicator does not change over time. A slight increase in the share of group 2 (6.7% in 2019 and 9.3% in 2020) was due to an increase in the number of labor inductions. At the same time, the number of patients in groups 1 and 2 at the Perinatal Center is greater than in the world population, which is explained by the hospitalization of primiparous women with previous reproductive losses. The Perinatal Center also differs from the world population in the proportion of women in group 5: 5.4% versus 7.6% worldwide. The frequency of caesarean section at the Perinatal Center is within the limits typical of most countries (20.5-20.8%). The dominant groups in the structure of caesarean sections are group 5 (21-23.3%) and group 2 (21.9-22.9%), which constitute the reserve for reducing the number of abdominal deliveries. In group 2, certain results have already been achieved: the frequency of caesarean section was 67.8% in 2019 and 51.6% in the first quarter of 2020.
This improvement followed a change in the leading method of labor induction. Thus, the Robson system is a convenient and accessible tool for assessing the structure of caesarean sections. The analysis showed that the structure of caesarean sections at the Perinatal Center is generally close to world data, and the identified deviations are explained by the specialization of the Center.
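The three per-group indicators used in the analysis are simple ratios over delivery counts. The sketch below computes them for hypothetical Robson group counts (the numbers are illustrative, not the Perinatal Center's actual data, and only five of the ten Robson groups are shown).

```python
# Hypothetical counts per Robson group: (total deliveries, caesarean sections).
groups = {
    1: (400, 40),
    2: (90, 50),
    3: (250, 10),
    4: (60, 12),
    5: (55, 45),
}

total_deliveries = sum(d for d, _ in groups.values())
total_cs = sum(c for _, c in groups.values())

def robson_indicators(g):
    """The three indicators reported for each Robson group."""
    deliveries, cs = groups[g]
    return {
        "group_share": deliveries / total_deliveries,  # share of all women
        "cs_rate_in_group": cs / deliveries,           # abdominal delivery rate
        "contribution_to_cs": cs / total_cs,           # share of all caesareans
    }

overall_cs_rate = total_cs / total_deliveries
```

Because the contributions are shares of the same total, they sum to one across groups, which is what makes groups such as 2 and 5 identifiable as the main "reserve" for reducing the overall caesarean rate.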

Keywords: caesarean section, Robson system, Kyiv Perinatal Center, labor induction

Procedia PDF Downloads 110
1184 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs reviewing captured data on screens in the review facility. Defects in rail infrastructure can have catastrophic consequences, so reviewer accuracy and efficiency within the available time frame are essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully; therefore, they need to know which workload assessment methodologies will provide reliable and valid data for resourcing on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections of different difficulty levels over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data on review speed (km/h), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation.
The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. Overall, the subjective NASA TLX workload assessments accurately reflected objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision, that is, accurately detected defects rather than false positives. Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of existing defects that were detected. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated positively with subjective performance assessments; reviewers believed their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections, improving defect detection rates in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures, so that recommendations for work system optimization are evidence-based and reliable.
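The performance measures reported above reduce to counts of true positives, false negatives, and false positives per session, and the speed-accuracy trade-off is a correlation between review speed and sensitivity. The following is a minimal sketch of those computations with hypothetical per-session counts for one reviewer; the numbers are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical per-session counts: true positives (defects found),
# false negatives (missed defects), false positives (flagged non-defects),
# and review speed in km/h. Not the study's actual data.
sessions = [
    {"tp": 18, "fn": 2, "fp": 3, "speed": 4.0},
    {"tp": 15, "fn": 5, "fp": 2, "speed": 5.5},
    {"tp": 12, "fn": 8, "fp": 1, "speed": 7.0},
    {"tp": 10, "fn": 10, "fp": 1, "speed": 8.5},
]

def sensitivity(s):
    """Share of existing defects that were detected."""
    return s["tp"] / (s["tp"] + s["fn"])

def precision(s):
    """Share of flagged items that were real defects."""
    return s["tp"] / (s["tp"] + s["fp"])

speed = np.array([s["speed"] for s in sessions])
sens = np.array([sensitivity(s) for s in sessions])

# Pearson correlation between review speed and sensitivity; a negative
# value reproduces the reported speed-accuracy (efficiency-thoroughness)
# trade-off. The study's own analysis may have used a different statistic.
r = np.corrcoef(speed, sens)[0, 1]
```

In the illustrative data the faster sessions miss more defects, so the correlation comes out strongly negative, matching the pattern the abstract describes.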

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 98