Search results for: computer video game
396 A Case Study of An Artist Diagnosed with Schizophrenia-Using the Graphic Rorschach (Digital version) “GRD”
Authors: Maiko Kiyohara, Toshiki Ito
Abstract:
In this study, we applied a psychotherapy process and the Graphic Rorschach (Digital version) (GRD) with a patient with a dissociative disorder. The dissociative disorder in this case is characterized by multiple alternating personalities (also called alternate or other identities). "Dissociation" is a state in which consciousness, memory, thinking, emotion, perception, behavior, body image, and so on are divided and experienced separately. Dissociative symptoms such as memory lapses appear, and the repeated blanks in daily events cause serious problems in life. Although the pathological mechanism of dissociation has not yet been fully elucidated, it is thought to be caused by childhood abuse or shocking trauma. In Japan, no reliable data have been reported on the number of patients with, or the prevalence of, dissociative disorders; no drug targets dissociative symptoms, and no clear treatment has been established. The GRD is the author's 2017 revision of the Graphic Rorschach, a special technique in which subjects draw their verbal responses while the Rorschach test is administered. By introducing a tablet computer during the drawing response, the GRD reduces the burden on both the subject and the examiner, reduces the complexity of organizing the data, and improves the accuracy of interpretation; our research pursues these aims. The patient in this case is a woman in her 50s who has had multiple personalities since childhood. At present, about 10 personalities have been identified, of which the main personality is only just grasped. The patient is raising her junior high school son as a single parent, but personality changes often occur at home, making the home environment unstable and economically strained and severely hindering daily life. In psychotherapy, while personalities different from the main personality have appeared, I have also conducted psychotherapy with her son.
In this case, the psychotherapy process and the GRD were used to understand the patient's personality characteristics, and the possible therapeutic significance for personality integration is reported.
Keywords: GRD, dissociative disorder, a case study of psychotherapy process, dissociation
Procedia PDF Downloads 117
395 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder
Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo
Abstract:
Uncertainty analysis in industrial computed tomography is commonly tied to metrologically traceable tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements that would profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to the acquisition parameters or part characteristics. The present study investigates the impact of acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, causing thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption; the error magnitude depends on the thermal properties of the sample and the acquisition parameters, manifesting as apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We assess that the current results are of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration in precision measurements with industrial tomography.
Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion
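The dominant error mechanism described above, thermal expansion of the heated workpiece, can be approximated with the linear expansion law ΔD = α·D·ΔT. The sketch below is illustrative only: the expansion coefficient is a typical handbook value for austenitic stainless steel, and the diameter and temperature rise are hypothetical, not figures from this study.

```python
ALPHA_STEEL = 16e-6  # 1/K, typical linear expansion coefficient of austenitic stainless steel (assumed)

def thermal_diameter_error(diameter_mm, delta_t_kelvin, alpha=ALPHA_STEEL):
    """Approximate change in cylinder diameter caused by uniform heating:
    delta_D = alpha * D * delta_T (linear thermal expansion)."""
    return alpha * diameter_mm * delta_t_kelvin

# A hypothetical 20 mm cylinder warmed by 5 K over a long scan grows by ~1.6 micrometres,
# already comparable to the stated precision of many industrial CT systems.
error_mm = thermal_diameter_error(20.0, 5.0)
```

Even this crude estimate shows why the error scales with acquisition time: more projections mean more absorbed energy, a larger ΔT, and a proportionally larger apparent dimension.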
Procedia PDF Downloads 121
394 Comparative Parametric Analysis on the Dynamic Response of Fibre Composite Beams with Debonding
Authors: Indunil Jayatilake, Warna Karunasena
Abstract:
Fiber Reinforced Polymer (FRP) composites enjoy an array of applications ranging from aerospace, marine, and military to the automobile, recreational, and civil industries due to their outstanding properties. A structural glass fiber reinforced polymer (GFRP) composite sandwich panel made from E-glass fiber skin and a modified phenolic core has been manufactured in Australia for civil engineering applications. One of the major mechanisms of damage in FRP composites is skin-core debonding. The presence of debonding is of great concern not only because it severely affects the strength but also because it modifies the dynamic characteristics of the structure, including natural frequency and vibration modes. This paper deals with the investigation of the dynamic characteristics of a GFRP beam with single and multiple debonding by finite element based numerical simulations and analyses using the STRAND7 finite element (FE) software package. Three-dimensional computer models have been developed and numerical simulations were done to assess the dynamic behavior. The FE model developed has been validated against published experimental, analytical and numerical results for fully bonded as well as debonded beams. A comparative analysis is carried out based on a comprehensive parametric investigation. It is observed that the reduction in natural frequency is more affected by single debonding than by equally sized multiple debonding regions located symmetrically to the single debonding position. This reveals that a large single debonding area leads to more damage, in terms of natural frequency reduction, than isolated small debonding zones of equivalent area appearing in the GFRP beam. Furthermore, the extents of the natural frequency shifts appear mode-dependent and do not follow a monotonic trend of increase with mode number.
Keywords: debonding, dynamic response, finite element modelling, novel FRP beams
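The frequency reduction discussed above can be illustrated with the closed-form natural frequencies of a simply supported Euler-Bernoulli beam, treating debonding crudely as a loss of bending stiffness EI. This is a hedged sketch with invented section properties, not the STRAND7 model of the study.

```python
import math

def beam_natural_frequency(n, length_m, EI, rho_A):
    """n-th natural frequency (Hz) of a simply supported Euler-Bernoulli beam:
    f_n = (n*pi/L)^2 * sqrt(EI / (rho*A)) / (2*pi)."""
    return (n * math.pi / length_m) ** 2 * math.sqrt(EI / rho_A) / (2.0 * math.pi)

# Hypothetical sandwich-beam properties (illustrative values, not the GFRP beam's data)
L, EI, rho_A = 1.0, 2.0e3, 3.0       # m, N*m^2, kg/m
f1_bonded = beam_natural_frequency(1, L, EI, rho_A)
f1_debonded = beam_natural_frequency(1, L, 0.8 * EI, rho_A)  # debonding ~ 20% stiffness loss
```

A uniform EI knockdown only captures the first-order trend; as the abstract notes, the actual shift is mode-dependent, which is why a full FE model with a local debond is needed.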
Procedia PDF Downloads 117
393 Monitoring of Water Quality Using Wireless Sensor Network: Case Study of Benue State of Nigeria
Authors: Desmond Okorie, Emmanuel Prince
Abstract:
Availability of potable water has been a global challenge, especially for developing continents/nations such as Africa/Nigeria. The World Health Organization (WHO) has produced the Guidelines for Drinking-water Quality (GDWQ), which aim at ensuring water safety from source to consumer. Potable water parameter tests include physical (colour, odour, temperature, turbidity), chemical (pH, dissolved solids) and biological (algae, phytoplankton) measures. This paper discusses the use of wireless sensor networks to monitor water quality using efficient and effective sensors that have the ability to sense, process and transmit sensed data. The integration of a wireless sensor network with a portable sensing device offers distributed sensing capability, on-site data measurement and remote sensing abilities. The water quality tests currently performed by government water quality institutions in Benue State, Nigeria are carried out by manually collecting water samples from problematic locations and taking them to the institution's laboratory for examination. To automate this entire process, a system based on a wireless sensor network was designed. The system consists of a sensor node containing a pH sensor, a temperature sensor, a microcontroller and a ZigBee radio, and a base station composed of a ZigBee radio and a PC. Thanks to advances in wireless sensor network technology, unexpected contamination events in water environments can be observed continuously. A local area network (LAN), wireless local area network (WLAN) or internet web service is also commonly used as a gateway unit for data communication to a local base computer via the standard Global System for Mobile Communications (GSM). The development demonstrates a working water quality monitoring system and the prospect of a more robust and reliable system in the future.
Keywords: local area network, pH measurement, wireless sensor network, ZigBee
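The node-to-base-station flow described above can be sketched in a few lines. The following is a hypothetical illustration of the data-packaging step on the sensor node; the sensor readings, node identifier, and alarm threshold (6.5–8.5 is a commonly cited acceptable pH window, not a value from this paper) are stand-ins, and a real node would replace the stubs with ADC and ZigBee driver calls.

```python
import json
import time

def read_sensors():
    """Stand-in for sampling the pH and temperature sensors via the microcontroller's ADC."""
    return {"ph": 7.2, "temp_c": 26.5}  # dummy values; a real node reads hardware here

def build_packet(node_id, readings, ph_range=(6.5, 8.5)):
    """Package one reading for transmission over the ZigBee radio, flagging out-of-range pH."""
    low, high = ph_range
    return {
        "node": node_id,
        "timestamp": int(time.time()),
        "ph": readings["ph"],
        "temp_c": readings["temp_c"],
        "alarm": not (low <= readings["ph"] <= high),
    }

packet = build_packet("benue-node-01", read_sensors())
payload = json.dumps(packet)  # serialized payload handed to the radio driver
```

Flagging the alarm on the node keeps radio traffic low: the base station can poll at a slow cadence and still be notified of a contamination event immediately.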
Procedia PDF Downloads 172
392 Desulfurization of Crude Oil Using Bacteria
Authors: Namratha Pai, K. Vasantharaj, K. Haribabu
Abstract:
Our team is developing an innovative, cost-effective biological technique to desulfurize crude oil. Sulphur is present in crude oil samples at 0.05%-13.95%, and its elimination by current industrial methods is expensive. Materials required: Alicyclobacillus acidoterrestris, potato dextrose agar, oxygen, pyrogallol and an inert gas (nitrogen). Method adopted and proposed: 1) the growth and energy needs of the bacteria are studied; 2) compatibility with crude oil is checked; 3) the reaction rate of the bacteria is studied and optimized; 4) the reaction is developed by computer simulation; 5) the simulated work is tested by building the reactor. The method being developed uses the bacterium Alicyclobacillus acidoterrestris, an acidothermophilic, heterotrophic, soil-dwelling, aerobic sulfur bacterium. The bacteria are fed to the crude oil in a unique manner: they are coated onto potato dextrose agar beads, cultured for 24 hours (the growth time coincides with the time at which they begin reacting) and fed into the reactor. The beads are replenished with O2 by passing them through a jacket around the reactor which has an O2 supply; O2 cannot be supplied directly because crude oil is inflammable, hence this arrangement. The beads are made to move around based on the concept of a fluidized bed reactor. By controlling the velocity of the inert gas pumped in, the beads are made to settle down when exhausted of O2. They are recycled through the jacket, where O2 is re-fed, and the beads that were inside the ring substitute for the exhausted ones. The crude oil is maintained between 1 atm and 270 MPa pressure at 45°C and treated with tartaric acid (whose pH favours bacterial growth) for optimum output. Being of the oxidising type, the bacteria react with the sulphur in the crude oil and liberate SO4^2- and no gas. The SO4^2- is absorbed into H2O. NaOH is fed in once the reaction is complete, and the beads are separated. The crude oil is thus separated from SO4^2-, and thereby from sulphur, tartaric acid and the other acids, which are removed.
Bio-corrosion is taken care of by internal wall painting (phenol-epoxy paints). Earlier methods included the use of Pseudomonas and Rhodococcus species; they were found to be inefficient, time- and energy-consuming, and to reduce the fuel value because they fed on the carbon skeleton.
Keywords: Alicyclobacillus acidoterrestris, potato dextrose agar, fluidized bed reactor principle, reaction time for bacteria, compatibility with crude oil
Procedia PDF Downloads 319
391 Examining the Discursive Hegemony of British Energy Transition Narratives
Authors: Antonia Syn
Abstract:
Politicians’ outlooks on the nature of energy futures and an ‘Energy Transition’ have evolved considerably alongside a steady movement towards renewable energies, buttressed by lower technology costs, rising environmental concerns, and favourable national policy decisions. This paper seeks to examine the degree to which an energy transition has become an incontrovertible ‘status quo’ in parliament, and whether politicians share similar understandings of energy futures or narrate different stories under the same label. Parliamentarians construct different understandings of the same reality, in the form of co-existing and competing discourses, shaping and restricting how policy problems and solutions are understood and tackled. Approaching energy policymaking from a parliamentary discourse perspective draws directly from actors’ concrete statements, offering an alternative to policy literature debates revolving around inductive policy theories. This paper uses computer-assisted discourse analysis to describe fundamental discursive changes in British parliamentary debates around energy futures. By applying correspondence cluster analyses to Hansard transcripts from 1986 to 2010, we empirically measure the policy positions of Labour and Conservative politicians’ parliamentary speeches during legislatively salient moments preceding significant energy transition-related policy decisions. Results show the concept of a technology-based, market-driven transition towards fossil-free and nuclear-free renewables integration converged across Labour and the Conservatives within three decades. Specific storylines underwent significant change, particularly in relation to international outlooks, environmental framings, treatments of risk, and increases in rhetoric. 
This study contributes to a better understanding of the role politics plays in the energy transition, highlighting how politicians’ values and beliefs inevitably determine and delimit creative policymaking.
Keywords: quantitative discourse analysis, energy transition, renewable energy, British parliament, public policy
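The convergence finding can be illustrated with a minimal bag-of-words similarity measure. The toy "speeches" below are invented stand-ins, not Hansard text, and plain cosine similarity is far cruder than the correspondence cluster analysis the study applies, but it shows the kind of signal being measured: party vocabularies on energy that start far apart and end up close together.

```python
from collections import Counter
import math

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented stand-ins for party energy vocabularies in two eras (not Hansard data)
labour_1986 = "coal jobs miners energy security state ownership"
tory_1986 = "market privatisation nuclear energy competition choice"
labour_2008 = "renewable energy climate carbon targets market investment"
tory_2008 = "renewable energy climate carbon market technology investment"

early = cosine_similarity(labour_1986, tory_1986)  # low overlap
late = cosine_similarity(labour_2008, tory_2008)   # vocabularies converged
```

A real pipeline would lemmatize, weight terms (e.g. tf-idf), and cluster debates rather than compare two strings, but the convergence measurement rests on the same vector-space idea.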
Procedia PDF Downloads 153
390 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection
Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra
Abstract:
In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging
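The class-weighting step used to counter imbalance can be sketched independently of the CNNs. A common scheme (the "balanced" heuristic, as in scikit-learn) weights each class by n_samples / (n_classes × count_c), so rare conditions contribute as much total loss as common ones; the label counts below are invented for illustration, not the study's dataset.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: w_c = n_samples / (n_classes * count_c)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Toy label distribution: melanoma over-represented relative to rarer conditions
labels = ["melanoma"] * 60 + ["nevus"] * 30 + ["dermatofibroma"] * 10
weights = balanced_class_weights(labels)  # rare classes receive larger weights
```

The resulting dict has the shape expected by, e.g., the `class_weight` argument of a Keras model's `fit` call, which is one typical way such weights enter the training phase.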
Procedia PDF Downloads 87
389 The Layout Analysis of Handwriting Characters and the Fusion of Multi-Style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant inheritors of culture, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis and image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose, to adjust the layout structure of the foreground content; finally, we realized our goal by fusing the rearranged foreground texts with the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen and Seal script were selected as our training sets. The performance of different fine-tuned models was then gradually improved by adjusting the parameters and structure of the DCGAN model. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
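The FID criterion used above compares the Gaussian statistics of real and generated feature distributions. As an illustration of the underlying metric, here is the univariate Fréchet distance, (μ₁−μ₂)² + (σ₁−σ₂)²; real FID applies the multivariate analogue to Inception-network feature statistics, and the toy "activations" below are invented for this sketch.

```python
import math

def frechet_distance_1d(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two univariate Gaussians; FID uses the
    multivariate analogue on Inception feature statistics."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

def stats(xs):
    """Mean and (population) standard deviation of a sample."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, math.sqrt(var)

real = [0.2, 0.4, 0.5, 0.6, 0.8]           # toy feature activations of real textures
good_fake = [0.25, 0.45, 0.5, 0.55, 0.75]  # distribution close to the real one
bad_fake = [1.5, 1.7, 1.9, 2.1, 2.3]       # distribution far from the real one

fid_good = frechet_distance_1d(*stats(real), *stats(good_fake))
fid_bad = frechet_distance_1d(*stats(real), *stats(bad_fake))
```

Lower is better: a generator whose outputs match the real distribution drives the distance toward zero, which is why model selection in the study picks the checkpoint with the lowest FID.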
Procedia PDF Downloads 157
388 NanoSat MO Framework: Simulating a Constellation of Satellites with Docker Containers
Authors: César Coelho, Nikolai Wiegand
Abstract:
The advancement of nanosatellite technology has opened new avenues for cost-effective and faster space missions. The NanoSat MO Framework (NMF) from the European Space Agency (ESA) provides a modular and simpler approach to the development of flight software and operations of small satellites. This paper presents a methodology using the NMF together with Docker for simulating constellations of satellites. By leveraging Docker containers, the software environment of individual satellites can be easily replicated within a simulated constellation. This containerized approach allows for rapid deployment, isolation, and management of satellite instances, facilitating comprehensive testing and development in a controlled setting. By integrating the NMF lightweight simulator in the container, a comprehensive simulation environment was achieved. A significant advantage of using Docker containers is their inherent scalability, enabling the simulation of hundreds or even thousands of satellites with minimal overhead. Docker's lightweight nature ensures efficient resource utilization, allowing for deployment on a single host or across a cluster of hosts. This capability is crucial for large-scale simulations, such as in the case of mega-constellations, where multiple traditional virtual machines would be impractical due to their higher resource demands. This ability for easy horizontal scaling based on the number of simulated satellites provides tremendous flexibility to different mission scenarios. Our results demonstrate that leveraging Docker containers with the NanoSat MO Framework provides a highly efficient and scalable solution for simulating satellite constellations, offering not only significant benefits in terms of resource utilization and operational flexibility but also enabling testing and validation of ground software for constellations. 
The findings underscore the importance of taking advantage of already existing technologies in computer science to create new solutions for future satellite constellations in space.
Keywords: containerization, docker containers, NanoSat MO framework, satellite constellation simulation, scalability, small satellites
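The replication-and-scaling pattern described above can be captured in a Compose file. The sketch below is a hypothetical configuration: the image names, environment variable, and memory limit are assumptions for illustration, not artifacts shipped with the NanoSat MO Framework.

```yaml
# docker-compose.yml (hypothetical sketch)
services:
  satellite:
    image: nmf-satellite-sim:latest      # assumed image bundling the NMF lightweight simulator
    environment:
      - GROUND_STATION_HOST=ground       # assumed variable pointing each node at the ground segment
    deploy:
      resources:
        limits:
          memory: 256M                   # keep per-satellite overhead small
  ground:
    image: ground-segment:latest         # hypothetical ground software under test
```

With such a file, a constellation of, say, 100 satellites is a single command, `docker compose up -d --scale satellite=100`, and growing or shrinking the constellation is just a change to the scale factor.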
Procedia PDF Downloads 50
387 Summer STEM Institute in Environmental Science and Data Science for Middle and High School Students at Pace University
Authors: Lauren B. Birney
Abstract:
The STEM Collaboratory NYC® Summer Fellows Institute takes place on Pace University’s New York City campus during July and provides the following key features for all participants: (i) individual meetings with Pace faculty to discuss and refine future educational goals; (ii) mentorship, guidance, and new friendships with program leaders; and (iii) guest lectures from professionals in STEM disciplines and businesses. The Summer STEM Institute allows middle school and high school students to work in teams to conceptualize, develop, and build native mobile applications that teach and reinforce skills in the sciences and mathematics. These workshops enhance students’ STEM problem-solving techniques and teach advanced methods of computer science and engineering. Topics include: big data and analytics at the Big Data Lab at Seidenberg; data science focused on social and environmental advancement and betterment; natural disasters and their societal influences; algal blooms and environmental impacts; Green Cities NYC; STEM jobs and growth opportunities for the future; renewable energy and sustainable infrastructure; and climate and the economy. In order to better align the existing Summer STEM Institute with the CCERS model and expand the overall network, Pace is actively recruiting new content area specialists from STEM industries and private sector enterprises to participate in an enhanced summer institute in order to 1) nurture student progress and connect summer learning to school year curriculum, 2) increase peer-to-peer collaboration amongst STEM professionals and private sector technologists, and 3) develop long term funding and sponsorship opportunities for corporate sector partners to support CCERS schools and programs directly.
Keywords: environmental restoration science, citizen science, data science, STEM
Procedia PDF Downloads 85
386 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. This abstract highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond. It showcases how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, the abstract emphasizes the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals. It recognizes the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. The abstract highlights the significance of ongoing research in mathematical sciences and its impact on data analytics. It emphasizes the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. 
In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.
Keywords: mathematical sciences, data analytics, advances, unveiling
Procedia PDF Downloads 93
385 Development of a Triangular Evaluation Protocol in a Multidisciplinary Design Process of an Ergometric Step
Authors: M. B. Ricardo De Oliveira, A. Borghi-Silva, E. Paravizo, F. Lizarelli, L. Di Thomazzo, D. Braatz
Abstract:
Prototypes are a critical feature in the product development process, as they help the project team visualize early concept flaws, communicate ideas and introduce initial product testing. Involving stakeholders, such as consumers and users, in prototype tests allows the gathering of valuable feedback, contributing to a better product and making the design process more participatory. Even though recent studies have shown that user evaluation of prototypes is valuable, few articles provide a method or protocol for how designers should conduct it. This multidisciplinary study (involving the areas of physiotherapy, engineering and computer science) aims to develop an evaluation protocol, using an ergometric step prototype as the product prototype to be assessed. The protocol consisted of performing two tests (the 2 Minute Step Test and the Portability Test) to allow users (patients) and consumers (physiotherapists) to have an experience with the prototype. Furthermore, the protocol contained four Likert-scale questionnaires (one for users and three for consumers) that asked participants how they perceived the design characteristics of the product (performance, safety, materials, maintenance, portability, usability and ergonomics) in their use of the prototype. Additionally, the protocol indicated the need to conduct interviews with the product designers, in order to link their feedback to that from the consumers and users. Both tests and interviews were recorded for further analysis. The participation criteria for the study were gender and age for patients, gender and experience with the 2 Minute Step Test for physiotherapists, and involvement level in the product development project for designers. The questionnaires' reliability was validated using Cronbach's alpha, and the quantitative data of the questionnaires were analyzed using non-parametric hypothesis tests with a significance level of 0.05 (p < 0.05) and descriptive statistics.
As a result, this study provides a concise evaluation protocol which can assist designers in their development process, collecting quantitative feedback from consumers and users, and qualitative feedback from designers.
Keywords: product design, product evaluation, prototypes, step
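The reliability check mentioned above can be reproduced in a few lines. Cronbach's alpha is α = k/(k−1) · (1 − Σσᵢ²/σₜ²), where σᵢ² are the item variances and σₜ² is the variance of the respondents' total scores; the Likert responses below are invented for illustration, not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists, one per questionnaire item,
    with the same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Invented Likert responses: 3 items, 5 respondents
scores = [[4, 5, 3, 4, 4],
          [4, 4, 3, 5, 4],
          [5, 4, 2, 4, 4]]
alpha = cronbach_alpha(scores)  # ~0.8, conventionally read as acceptable consistency
```

Values above roughly 0.7 are commonly treated as acceptable, which is the kind of threshold a protocol like this one would check before trusting the questionnaire scales.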
Procedia PDF Downloads 118
384 Interactive Glare Visualization Model for an Architectural Space
Authors: Florina Dutt, Subhajit Das, Matthew Swartz
Abstract:
Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many sub-components such as glare, color, tone, luminance, control, energy efficiency and flexibility. While the other components have been researched and discussed many times, this paper presents research done to understand the glare component from an artificial lighting source in an indoor space. Consequently, the paper discusses a parametric model that conveys the real-time glare level in an interior space to the designer/architect. Our end users are architects, for whom it is of utmost importance to know what impression the proposed lighting arrangement and furniture layout will have on indoor comfort quality. This especially involves those furniture elements (or surfaces) which strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when they can still afford to make changes to the proposed design and consider different routes of solution for the client. Unfortunately, most of the lighting analysis tools currently available perform rigorous computation and analysis on the back end, making it challenging for the designer to analyze and understand the glare from interior lighting quickly. Moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize this data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process fluid and computationally fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user.
We test our proposed parametric model on a case study, a computer lab space in our college facility.
Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis
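A common quantitative anchor for the discomfort glare discussed above is the CIE Unified Glare Rating, UGR = 8·log₁₀((0.25/Lb)·Σ L²ω/p²). The sketch below implements that textbook formula under simplifying assumptions: the luminaire values are hypothetical, and a production tool would also have to derive each solid angle ω and Guth position index p from the actual room geometry rather than take them as inputs.

```python
import math

def unified_glare_rating(background_luminance, sources):
    """Simplified CIE-style UGR: 8*log10((0.25/Lb) * sum(L^2 * omega / p^2)).
    `sources` holds (luminance cd/m^2, solid angle sr, Guth position index)
    per visible luminaire; Lb is the background luminance in cd/m^2."""
    s = sum(L ** 2 * omega / p ** 2 for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * s)

# Hypothetical room: two luminaires seen against a 40 cd/m^2 background
dim = unified_glare_rating(40.0, [(3000.0, 0.01, 1.5), (2500.0, 0.008, 2.0)])
bright = unified_glare_rating(40.0, [(9000.0, 0.01, 1.5), (7500.0, 0.008, 2.0)])  # 3x luminance
```

Because the metric reduces to a scalar per viewpoint, it lends itself to the color-coded, real-time display the paper aims for: recompute UGR per camera position as the designer moves lights or reflective furniture.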
Procedia PDF Downloads 350
383 Microstructure Dependent Fatigue Crack Growth in Aluminum Alloy
Authors: M. S. Nandana, K. Udaya Bhat, C. M. Manjunatha
Abstract:
In this study, aluminum alloy 7010 was subjected to three different ageing treatments, i.e., peak ageing (T6), over-ageing (T7451), and retrogression and re-ageing (RRA), to study the influence of precipitate microstructure on the fatigue crack growth rate behavior. The microstructural modification was studied using a transmission electron microscope (TEM) to examine the change in the size and morphology of precipitates in the matrix and on the grain boundaries. Standard compact tension (CT) specimens were fabricated and tested under constant amplitude fatigue crack growth tests to evaluate the influence of heat treatment on the fatigue crack growth rate properties. The tests were performed in a computer-controlled servo-hydraulic test machine applying a load ratio R = 0.1 at a loading frequency of 10 Hz as per ASTM E647. The fatigue crack growth was measured by the compliance technique using a CMOD gauge attached to the CT specimen. The average size of the matrix precipitates was found to be 16-20 nm in the T7451, 5-6 nm in the RRA and 2-3 nm in the T6 conditions, respectively. The grain boundary precipitate, which was continuous in T6, was disintegrated in the RRA and T7451 conditions. The precipitate-free zone (PFZ) width was lower in the RRA than in the T7451 condition. The crack growth rate was highest in the T7451 and lowest in the RRA-treated alloy. The RRA-treated alloy also exhibits an increase in the threshold stress intensity factor range (∆Kₜₕ). The measured ∆Kₜₕ was 11.1, 10.3 and 5.7 MPa·m¹/² in the RRA, T6 and T7451 alloys, respectively. The fatigue crack growth rate in the RRA-treated alloy was nearly 2-3 times lower than that in T6 and was one order of magnitude lower than that observed in the T7451 condition. The crack-surface roughness of the RRA-treated alloy was more pronounced than in the other conditions. The reduction in fatigue crack growth rate in the RRA alloy was mainly due to the increase in roughness and partially due to the increase in spacing between the matrix precipitates.
The reduction in crack growth rate and the increase in threshold stress intensity range are expected to benefit the damage tolerance capability of aircraft structural components under service loads.
Keywords: damage tolerance, fatigue, heat treatment, PFZ, RRA
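The crack-growth comparison above is conventionally modeled with Paris' law, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa). The sketch below numerically integrates it to estimate fatigue life; the constants are generic placeholder values of the right order for a 7xxx-series alloy, not the measured data of this study.

```python
import math

def paris_life_cycles(C, m, delta_sigma, a0, af, Y=1.0, steps=10000):
    """Numerically integrate Paris' law da/dN = C*(dK)^m, with
    dK = Y * delta_sigma * sqrt(pi * a), from crack length a0 to af (metres).
    Returns the number of cycles to grow the crack from a0 to af."""
    cycles, a = 0.0, a0
    da = (af - a0) / steps
    for _ in range(steps):
        dk = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range, MPa*sqrt(m)
        cycles += da / (C * dk ** m)                   # N = integral da / (C*dK^m)
        a += da
    return cycles

# Placeholder constants (assumed): da/dN in m/cycle with dK in MPa*sqrt(m)
C, m = 1e-11, 3.0
slow = paris_life_cycles(C, m, delta_sigma=100.0, a0=1e-3, af=10e-3)
fast = paris_life_cycles(2 * C, m, delta_sigma=100.0, a0=1e-3, af=10e-3)  # 2x growth rate
```

The halving of life when C doubles mirrors the abstract's finding in reverse: an RRA-type treatment that cuts the growth rate by a factor of 2-3 extends the predicted crack-propagation life by the same factor.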
Procedia PDF Downloads 154
382 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization
Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir
Abstract:
Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications, such as the manufacturing of plastics, precise positioning of equipment, and operating computer-controlled systems, where controlling the feed speed, maintaining position, and ensuring a consistent desired output are critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of Proportional (P) and Integral (I) controllers on the steady-state and transient response of the DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady and transient motor response. The current investigation is conducted experimentally on a CE 110 servo trainer using the CE 120 analog PI controller, and theoretically using Simulink in MATLAB. Both the experimental and theoretical work involve varying the integral controller gain to obtain the response to a steady-state input; varying, individually, the proportional and integral controller gains to obtain the response to a step input function at a certain frequency; and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and output response and makes the system more stable. In addition, it also speeds up the response of the system. On the other hand, the integral controller eliminates the error but tends to make the system unstable, with induced oscillations and a slow response in eliminating the error. 
The overall aim of the current work is to achieve a stable response of the servo motor, in terms of its angular velocity, to steady-state and transient input signals by utilizing the strengths of both P and I controllers.
Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink
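The qualitative P vs. I behavior described above can be reproduced with a toy first-order motor model; the gains and plant constants below are assumptions chosen for illustration, not the CE 110/CE 120 identified parameters.

```python
def simulate_pi_speed(kp, ki, setpoint=1.0, k_motor=2.0, tau=0.5,
                      dt=0.001, t_end=5.0):
    """Forward-Euler simulation of a PI-controlled first-order DC motor
    speed model: tau * dw/dt = -w + k_motor * u.

    Plant and gain values are illustrative, not identified from the
    CE 110 trainer. Returns the speed history."""
    w, integ = 0.0, 0.0
    history = []
    for _ in range(int(t_end / dt)):
        err = setpoint - w
        integ += err * dt
        u = kp * err + ki * integ            # PI control law
        w += dt * (-w + k_motor * u) / tau   # plant update (Euler step)
        history.append(w)
    return history
```

With kp = 2 and ki = 0 the closed loop settles at w = K·Kp/(1 + K·Kp) = 0.8, a 20% steady-state error; adding ki = 2 drives the error to zero, as the abstract reports.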
Procedia PDF Downloads 110
381 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation
Authors: Zeynep Yazicioglu
Abstract:
Project development is a complex process in which many stakeholders work together. Employers and main contractors are the base stakeholders, whereas designers, engineers, sub-contractors, suppliers, supervisors, and consultants are other stakeholders. The combination of a complex building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in construction processes indicate that it is necessary to accelerate production and increase productivity. The development of computer software called Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that should be considered by every stakeholder for the opportunities it offers, such as minimizing construction errors, reducing construction time, and forecasting and determining the final construction cost. Its adoption is a process spreading over years, gradually enabling all stakeholders associated with the project and construction to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study summarizing the author’s practical experience with design offices and construction firms working with BIM. Three challenges of the transition period to BIM are examined in this paper: 1. the compatibility of supplier companies with BIM, 2. the continuing need for two-dimensional drawings, 3. contractual issues related to BIM. The paper reviews the literature on BIM usage and the challenges of the transition stage to BIM. Even on an international scale, suppliers that can work in harmony with BIM are not very common, which means that the transition to BIM is still ongoing. 
In parallel, employers, local approval authorities, and material suppliers still need 2-D drawings. In the BIM environment, different stakeholders can work on the same project simultaneously, giving rise to design ownership issues. Practical applications and problems encountered are also discussed, providing a number of suggestions for the future.
Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project
Procedia PDF Downloads 102
380 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory
Authors: Kiana Zeighami, Morteza Ozlati Moghadam
Abstract:
Designing a high-accuracy, high-precision motion controller is one of the important issues in today’s industry. Effective solutions are available in the industry, but the real-time performance, smoothness, and accuracy of the movement can be further improved. This paper discusses a complete solution for carrying out the movement of three stepper motors in three dimensions. The objective is to provide a method for designing a fully integrated System-on-Chip (SoC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in motion control industries. Along with the full profile trajectory, a triangular drive is implemented to eliminate error over small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute positioning, relative positioning, reset, and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping
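As a software sketch of the linear interpolation that the profile generator implements in hardware, an integer DDA that steps at most one pulse per axis per tick might look like the following (a hypothetical helper for illustration, not the FPGA design itself):

```python
def linear_interpolate_3d(start, end):
    """Integer DDA over three axes: at each tick, every axis moves by at
    most one step toward its target, mimicking the pulse train a profile
    generator would send to three stepper drivers.

    A software sketch of the algorithm, not the FPGA implementation."""
    deltas = [e - s for s, e in zip(start, end)]
    n = max(abs(d) for d in deltas)  # ticks = steps on the dominant axis
    if n == 0:
        return [tuple(start)]
    points = []
    for i in range(n + 1):
        # each axis advances proportionally, rounded to whole steps
        points.append(tuple(s + round(d * i / n)
                            for s, d in zip(start, deltas)))
    return points
```

Because every tick moves each axis by at most one step, the generated point list maps directly onto per-axis step/direction pulses.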
Procedia PDF Downloads 208
379 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization
Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu
Abstract:
Flaring emissions during abnormal operating conditions, such as plant start-ups, shut-downs, and upsets, in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant, they must be either recycled to the upstream process for online reuse or stored somewhere temporarily for future reprocessing once the plant returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations through plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transitioning the obtained steady-state plant model to the dynamic modeling environment, then refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models for modeling hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants, such as plant start-up and shut-down. 
Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and shut-down operation (over 70% emission savings) of an ethylene plant have been employed to demonstrate the efficacy of the proposed study.
Keywords: flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up
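The recycle-or-store-then-flare logic underlying such strategies can be illustrated with a toy material balance; all rates and capacities below are invented numbers, not data from the ethylene plant case studies.

```python
def flared_mass(offspec_rate_kg_h, hours, recycle_frac, storage_cap_kg):
    """Toy material balance for an abnormal-operation event: off-spec
    product is first recycled to the upstream process, the remainder is
    stored up to capacity, and only what is left goes to the flare.

    Illustrative numbers only, not plant data."""
    total = offspec_rate_kg_h * hours          # total off-spec produced
    recycled = total * recycle_frac            # sent back for online reuse
    remainder = total - recycled
    stored = min(remainder, storage_cap_kg)    # held for later reprocessing
    return remainder - stored                  # mass actually flared
```

Even modest recycle fractions plus temporary storage cut the flared mass sharply, which is the effect the dynamic-simulation strategies aim to realize at plant scale.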
Procedia PDF Downloads 320
378 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception
Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut
Abstract:
What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate as a result of the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations to the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly-presented (1000 ms or 500 ms) computer-generated visual dynamic events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or were seen in the reverse direction: parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. 
To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high-to-low entropy condition compared to the low-to-high entropy condition (p = .03). These preliminary data suggest the presence of a neural mechanism that detects entropy, which is used by other processes to construct our perception of the direction of time, or time's arrow.
Keywords: time perception, entropy, temporal illusions, duration perception
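The duration-reproduction contrast reported above (p = .03) is the kind of effect a paired t statistic captures. The sketch below hand-computes one on made-up reproduction times, not the study's data.

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired t statistic for per-participant duration reproductions in
    two conditions: t = mean(d) / (sd(d) / sqrt(n)).

    The inputs are illustrative, not the study's raw data."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
```

A value of |t| well above ~2 for a handful of participants corresponds to a conventionally significant difference, matching the reported direction (longer reproductions for high-to-low entropy).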
Procedia PDF Downloads 172
377 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by ±2% to ±33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
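The three literature methodologies share the same energy balance: conduction through the wall, U·(T_in − T_out), equals the exterior convective plus radiative loss. A minimal sketch of one such form follows, with the convection coefficient and emissivity as assumed values (temperatures in kelvin); the study's methodologies differ in how these terms are estimated.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def wall_u_value(t_surface, t_air_out, t_reflected, t_in, t_out,
                 emissivity=0.90, h_conv=15.0):
    """Estimate an overall wall U-value from exterior thermography by
    equating conduction through the wall to exterior convective plus
    radiative losses:

        U = (h_conv*(Ts - Tair) + eps*sigma*(Ts^4 - Tref^4)) / (Tin - Tout)

    h_conv and emissivity here are assumed values; all temperatures are
    in kelvin."""
    q_conv = h_conv * (t_surface - t_air_out)                      # W/m^2
    q_rad = emissivity * SIGMA * (t_surface ** 4 - t_reflected ** 4)
    return (q_conv + q_rad) / (t_in - t_out)
```

For a winter night with a 24 K indoor-outdoor difference and a wall surface 2 K warmer than the outdoor air, this yields a U-value around 2 W/(m²·K), a plausible magnitude for an older wall assembly.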
Procedia PDF Downloads 313
376 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction
Authors: C. S. Subhashini, H. L. Premaratne
Abstract:
Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. Solutions for preparedness and mitigation need to be explored to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying this modern technology to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to estimate the likelihood of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. These models, with the landslide-related factors as inputs, are trained to predict three classes, namely ‘landslide occurs’, ‘landslide does not occur’, and ‘landslide likely to occur’. Once trained, the models are able to predict the most likely class for the prevailing data. 
Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network could be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
Keywords: landslides, influencing factors, neural network model, hidden markov model
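On the HMM side of such a comparison, the likelihood of an observation sequence given the model is computed with the forward algorithm; class prediction then amounts to picking the model (or state path) with the highest likelihood. The toy states and probabilities below are placeholders, not the trained landslide model.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence
    under a hidden Markov model, summing over all hidden state paths.

    Toy parameters for illustration, not the trained landslide HMM."""
    # initialize with start probability times first emission
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # propagate: sum over previous states, then emit observation o
        alpha = {s: emit_p[s][o] *
                    sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())
```

Since the parameters are proper probability distributions, the likelihoods of all possible observation sequences of a given length sum to one, a quick sanity check on any implementation.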
Procedia PDF Downloads 384
375 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand gesture recognition system for interpretation between Lesotho’s Sesotho and English languages. The system will help to bridge the communication problems encountered by these communities. The system has various processing modules, consisting of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny pruning Haar and Haar cascade detection algorithms. Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist the detection process. Recognition is the process of gesture classification: template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were also considered: the higher the light intensity, the faster the detection rate. 
Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
Keywords: canny pruning, hand recognition, machine learning, skin tracking
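The convex-hull and centroid steps mentioned in the skin-detection stage can be sketched in pure Python (the system itself uses EmguCV); Andrew's monotone chain is one standard hull algorithm over the skin-pixel coordinates.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull, of the kind used after skin
    segmentation to outline a detected hand region.

    Pure-Python sketch for illustration; the paper's system uses EmguCV."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull

def centroid(points):
    """Mean position of a point set, e.g. the tracked hand center."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)
```

The hull excludes interior skin pixels, and the centroid gives a stable point to track the hand between frames.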
Procedia PDF Downloads 185
374 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. 
By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
Procedia PDF Downloads 75
373 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste
Authors: Timilehin Martins Oyinloye, Won Byong Yoon
Abstract:
Simulation technology is being adopted in many industries, with research focusing on the development of new ways in which technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is fast developing in the food industry. However, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials becomes the foundation for extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometry. In many situations, a trial-and-error method is used to determine the optimum printing conditions, which wastes time and resources. In this report, three moisture levels of surimi paste were investigated to find the optimum 3DP material and printing conditions by probing the paste's rheology, its flow characteristics in the nozzle, and the post-deposition process using a finite element method (FEM) model. Rheological tests revealed that surimi paste with 82% moisture is suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15 × 10⁷ Pa to 7.80 × 10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using an additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample were dependent on the nozzle diameter. A small nozzle diameter (0.6 mm) resulted in a greater total deformation (0.023), particularly at the top part of the model, which eventually caused the sample to collapse. 
As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter was a key parameter affecting the geometric accuracy of 3DP of surimi paste.
Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste
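The strong nozzle-diameter effect on the flow field follows from power-law pipe flow: at a fixed volumetric flow rate, the wall shear rate scales as R⁻³, so halving the diameter raises it eightfold. A small sketch with illustrative inputs (not the measured surimi rheology):

```python
import math

def wall_shear_rate(q, radius, n):
    """Wall shear rate of a power-law fluid in a circular nozzle, via the
    Rabinowitsch-Mooney correction:

        gamma_w = ((3n + 1) / (4n)) * 4Q / (pi * R^3)

    Q and R in consistent units (e.g. mm^3/s and mm); n is the flow
    behavior index. Inputs below are illustrative, not measured values."""
    return ((3 * n + 1) / (4 * n)) * 4 * q / (math.pi * radius ** 3)
```

For a shear-thinning paste (n < 1), the correction factor exceeds 1, and the cubic dependence on radius is what drives the pressure and die-swell changes the FEM model reports when the nozzle is narrowed.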
Procedia PDF Downloads 68
372 Applying Computer Simulation Methods to a Molecular Understanding of Flaviviruses Proteins towards Differential Serological Diagnostics and Therapeutic Intervention
Authors: Sergio Alejandro Cuevas, Catherine Etchebest, Fernando Luis Barroso Da Silva
Abstract:
The flavivirus genus includes several organisms responsible for various diseases in humans. Especially in Brazil, the Zika (ZIKV), Dengue (DENV), and Yellow Fever (YFV) viruses have raised great health concerns due to the high number of cases affecting the area during recent years. Diagnosis is still a difficult issue since the clinical symptoms are highly similar. Understanding their common and distinctive structural/dynamical features and biomolecular interactions might suggest alternative strategies towards differential serological diagnostics and therapeutic intervention. Due to its immunogenicity, the primary focus of this study was the ZIKV, DENV, and YFV non-structural protein 1 (NS1). By means of computational studies, we calculated the main physicochemical properties of this protein from different strains that are directly responsible for the biomolecular interactions and can therefore be related to the differential infectivity of the strains. We also mapped the electrostatic differences, at both the sequence and structural levels, for strains from Uganda to Brazil that could suggest possible molecular mechanisms for the increased virulence of ZIKV. It is interesting to note that, despite the small changes in the protein sequence due to the high sequence identity among the studied strains, the electrostatic properties are strongly impacted by pH, which also impacts their biomolecular interactions with partners and, consequently, the molecular viral biology. African and Asian strains are distinguishable. By exploring the interfaces used by NS1 to self-associate in different oligomeric states and to interact with membranes and the antibody, we could map the strategy used by ZIKV during its evolutionary process. This indicates possible molecular mechanisms that can explain the different immunological responses. 
Through comparison with the known antibody structure available for the West Nile virus, we demonstrated that the antibody would have difficulty neutralizing the NS1 from the Brazilian strain. The present study also opens up perspectives for the computational design of high-specificity antibodies.
Keywords: zika, biomolecular interactions, electrostatic interactions, molecular mechanisms
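The pH dependence of a protein's electrostatics can be roughed out with a null Henderson-Hasselbalch model of net charge, summing fractional charges of titratable residues. The residue counts and pKa values below are arbitrary placeholders, not the NS1 composition; constant-pH methods of the kind used in such studies refine this picture by letting the environment shift each pKa.

```python
def net_charge(ph,
               acidic=(("Asp", 3.7, 5), ("Glu", 4.2, 6)),
               basic=(("Lys", 10.5, 4), ("Arg", 12.5, 3), ("His", 6.0, 2))):
    """Null (Henderson-Hasselbalch) estimate of a protein's net charge
    at a given pH, from (name, pKa, count) tuples of titratable residues.

    Counts and pKa values are placeholders, not the NS1 composition."""
    q = 0.0
    for _, pka, count in acidic:
        q -= count / (1 + 10 ** (pka - ph))   # deprotonated fraction, charge -1
    for _, pka, count in basic:
        q += count / (1 + 10 ** (ph - pka))   # protonated fraction, charge +1
    return q
```

Plotting such a curve for two strains' sequences is the quickest way to see where their electrostatic behavior diverges with pH, before running full simulations.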
Procedia PDF Downloads 132
371 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques
Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan
Abstract:
Proper well planning is vital to the success of any drilling program: it prevents and overcomes drilling problems and minimizes operating cost. Since the hydraulic system plays an active role during drilling operations, good hydraulics design will accelerate the drilling effort and lower the overall well cost. Likewise, an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables, through applied drilling and hydraulic equations and mathematical modeling, to achieve maximum drilling efficiency at minimum drilling cost. In this paper, practical software is adopted to define drilling optimization models, including four different optimization keys, namely Opti-flow, Opti-clean, Opti-slip, and Opti-nozzle, that can help achieve high drilling efficiency at lower cost. The data used in this research are from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom hole assembly, and mud rheology. Upon data analysis, the results from all wells show that the proposed program provides higher accuracy than the company's current approach in terms of hole cleaning efficiency and cost breakdown, taking the actual data as a reference base for all wells. Finally, it is recommended that the established optimization software be used in drilling design to determine correct drilling parameters that provide high drilling efficiency and good borehole cleaning, and to compute the other hydraulic parameters that help minimize hole problems and control drilling operation costs.
Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle
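Hole-cleaning checks of the Opti-clean kind rest on simple hydraulics such as the standard field formula for average annular velocity, v (ft/min) = 24.5·Q/(Dh² − Dp²), with Q in gal/min and diameters in inches. The sketch below pairs it with a cuttings transport ratio; the 0.5 acceptability cutoff is a common rule of thumb, not a value from the paper.

```python
def annular_velocity(flow_gpm, hole_d_in, pipe_d_in):
    """Average annular velocity in ft/min from the field formula
    v = 24.5 * Q / (Dh^2 - Dp^2), with Q in gal/min and the hole and
    pipe diameters in inches."""
    return 24.5 * flow_gpm / (hole_d_in ** 2 - pipe_d_in ** 2)

def transport_ratio(annular_v, slip_v):
    """Cuttings transport ratio, 1 - v_slip / v_annular. Values above
    roughly 0.5 are usually taken as adequate hole cleaning (a common
    rule of thumb, not a threshold from the paper)."""
    return 1 - slip_v / annular_v
```

For example, 600 gal/min in an 8.5-in hole around 5-in pipe gives about 311 ft/min of annular velocity, comfortably above a 100 ft/min slip velocity.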
Procedia PDF Downloads 319
370 Impact of Information and Communication Technology on Academic Performance of Senior Secondary Schools Students in Gwagwalada Area Council of Federal Capital Territory, Abuja
Authors: Suleiman Garba, Haruna Ishaku
Abstract:
Information and communication technology (ICT) includes any communication device, encompassing radio, television, cellular phones, computers, satellite systems, and so on, as well as the various services and applications associated with them. The significance of ICT in education cannot be over-emphasized. Teaching and learning processes have been integrated with the application of ICTs for effectiveness and for the enhancement of academic performance among students. Today, as the educational sector faces a series of changes and reforms, the problem of information technology illiteracy remains a serious one among school teachers in the country, cutting across primary schools, secondary schools, and tertiary institutions. This study investigated the impact of ICT on the academic performance of senior secondary school students in Gwagwalada Area Council of the Federal Capital Territory (FCT), Abuja. A sample of 120 SSS III students was involved in the study, selected using a simple random sampling technique. A questionnaire was developed and validated through expert judgement, with a reliability coefficient of 0.81, and was used to gather relevant data from the respondents. Findings revealed that ICT had a positive impact on the academic performance of senior secondary school students. The findings also identified causes of poor academic performance among the students, including a lack of qualified teachers, peer group influence, and bullying. The null hypotheses were tested using a t-test at the 0.05 level of significance. It was discovered that there was a significant difference between male and female secondary school students in the impact of ICT on their academic performance in Gwagwalada Area Council of FCT, Abuja. 
Based on these findings, some recommendations were made, including that adequate funds should be provided for the procurement of ICT resources and relevant textbooks to enhance students' active participation in learning, and that students should be provided with internet access at an inexpensive rate so as to create a platform for accessing useful information in the pursuit of academic excellence.
Keywords: academic performance, impact, information communication technology, schools, students
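The male-female comparison reported above is the kind of contrast a two-sample t statistic quantifies. The sketch below hand-computes Welch's version on invented scores, not the survey data.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal variances):

        t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)

    The kind of test used to compare two groups' scores; inputs here
    are illustrative, not the study's data."""
    va, vb = variance(sample_a), variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)
```

Comparing |t| against the critical value for the chosen significance level (0.05 in the study) then decides whether the null hypothesis of no group difference is rejected.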
Procedia PDF Downloads 219
369 Non-Governmental Organisations and Human Development in Bauchi State, Nigeria
Authors: Sadeeq Launi
Abstract:
NGOs the world over have been recognized as institutions that complement government activities in providing services to the people, particularly in respect of human development. This study examined the role played by NGOs in human development in Bauchi State, Nigeria, between 2004 and 2013. The emphasis was on the reproductive health and access-to-education roles of the selected NGOs; all the research questions, objectives, and hypotheses were stated in line with these variables. The theoretical framework that guided the study was the participatory development approach. Being a survey research, data were generated from both primary and secondary sources, with questionnaires and interviews as the instruments for generating the primary data. The population of the study was made up of the staff of the selected NGOs, beneficiaries, health staff, and school teachers in Bauchi State, with samples of 90, 107, and 148 units drawn from these categories, respectively, using stratified random and simple random sampling techniques. Data were analyzed quantitatively and qualitatively, and hypotheses were tested using the Pearson chi-square test in the SPSS statistical package. The study revealed that, despite the challenges facing NGO operations in the study area, NGOs rendered services in the areas of health and education. This research recommends, among other things, that both government and the public cooperate more with NGOs to enable them to provide more efficient and effective services. Governments at all levels should be more dedicated to increasing the accessibility and affordability of basic education and reproductive health care facilities and services in Bauchi State by committing more resources to the health and education sectors; this would support and facilitate the complementary role of NGOs in providing teaching facilities, drugs, and other reproductive health services in the state. 
More enlightenment campaigns should be carried out by governments to sensitize the public, particularly women, on the need to embrace immunization programmes for their children and the antenatal care services provided by both government and NGOs.
Keywords: access to education, human development, NGOs, reproductive health
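The abstract reports hypothesis testing with the Pearson chi-square test in SPSS. As a minimal illustration of that computation (the contingency table below is invented for demonstration, not study data), the statistic can be sketched in plain Python:

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic and degrees of freedom
    for an r x c contingency table of observed counts."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical 2x2 table: e.g. beneficiaries vs non-beneficiaries
# against improved/unimproved outcomes.
stat, df = pearson_chi_square([[10, 20], [20, 10]])
```

The statistic would then be compared against the chi-square distribution with `df` degrees of freedom to obtain a p-value, which is what SPSS reports.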
Procedia PDF Downloads 176
368 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English
Authors: Duong Thuy Nguyen, Giulia Bencini
Abstract:
The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension by native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that, overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups performed differently in the other two reading conditions. For L1 speakers, the whole-sentence and phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English and L2 English-Spanish speakers. 
The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlight similarities and differences between L1 and L2 sentence processing and comprehension.
Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing
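The fully crossed 3 x 2 x 2 design described above (presentation format x syntactic structure x grammaticality) can be enumerated mechanically; a minimal sketch, with condition labels taken from the abstract, is:

```python
from itertools import product

# Factor levels as named in the abstract.
FORMATS = ["word-by-word", "phrase-segment", "whole-sentence"]
STRUCTURES = ["simple", "complex"]
GRAMMATICALITY = ["target-match", "target-mismatch"]

# Crossing the three factors yields 3 * 2 * 2 = 12 experimental conditions.
conditions = [
    {"format": f, "structure": s, "grammaticality": g}
    for f, s, g in product(FORMATS, STRUCTURES, GRAMMATICALITY)
]
```

In a typical implementation each item would be rotated through these conditions across counterbalanced presentation lists, though the abstract does not specify the list structure used.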
Procedia PDF Downloads 152
367 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter
Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai
Abstract:
Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retinal and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy, with multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuation, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It uses a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume, thereby segmenting the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curves are fitted to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter produces an optimal estimate of the current state of the system by updating its previous state with the measurements available, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. 
One limitation of the current algorithm is that the curve representation of a retinal layer boundary does not work well when the boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different boundary representation, such as b-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking
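The abstract's idea of tracking boundary-curve coefficients across slices can be illustrated with a minimal sketch. All names, the quadratic boundary model, the random-walk state model, and the noise parameters `q` and `r` below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_boundary(xs, ys, degree=2):
    """Represent a layer boundary by polynomial coefficients (curve fitting)."""
    return np.polyfit(xs, ys, degree)

def kalman_track(coeff_measurements, q=1e-4, r=1e-2):
    """Track coefficient vectors slice by slice with a per-coefficient
    Kalman filter under a random-walk state model.

    coeff_measurements: (n_slices, n_coeffs) array of noisy per-slice fits.
    Returns filtered estimates of the same shape.
    """
    x = coeff_measurements[0].copy()        # initial state: first slice's fit
    p = np.ones_like(x)                     # initial covariance (diagonal)
    out = [x.copy()]
    for z in coeff_measurements[1:]:
        p = p + q                           # predict: state unchanged, covariance grows
        k = p / (p + r)                     # Kalman gain
        x = x + k * (z - x)                 # update toward the new measurement
        p = (1.0 - k) * p
        out.append(x.copy())
    return np.array(out)
```

Each slice's boundary points would first be reduced to curve coefficients with `fit_boundary`, and `kalman_track` would then smooth those coefficients through the volume, which is the feedback loop the abstract describes.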
Procedia PDF Downloads 482