Search results for: hardware errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1424

344 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. In practice, however, the constant-velocity assumption is often violated, which can lead to a large deviation between the estimated initial pose and the true value, and to errors in the nonlinear optimization results. This paper therefore proposes a motion model based on acceleration that can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decouple the pose transformation matrix and compute the rotation matrix and the translation vector separately, with the rotation matrix represented as a rotation vector. We assume that, over a short period of time, the frame-to-frame changes in angular velocity and in the translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that the proposed method yields a more accurate initial pose estimate and improves the accuracy of the ORB-SLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
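
A minimal sketch of the prediction step described above, assuming the pose is decoupled into a rotation (handled as a rotation vector) and a translation whose frame-to-frame increments are extrapolated linearly; the paper's exact parameterization may differ, and the helper `predict_pose` is hypothetical:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def predict_pose(poses):
    """Predict the next camera pose from the last three poses.

    Each pose is an (R, t) pair: R a scipy Rotation, t a 3-vector.
    The pose is decoupled as in the abstract: rotation is handled
    through rotation vectors and translation separately, and the
    frame-to-frame increments of both are extrapolated linearly
    (constant-acceleration assumption).
    """
    (R0, t0), (R1, t1), (R2, t2) = poses[-3:]

    # Rotation: per-frame rotation vectors, advanced by their last change.
    w_prev = (R0.inv() * R1).as_rotvec()
    w_curr = (R1.inv() * R2).as_rotvec()
    R_pred = R2 * Rotation.from_rotvec(2 * w_curr - w_prev)

    # Translation: per-frame displacement, advanced by its last change.
    v_prev, v_curr = t1 - t0, t2 - t1
    t_pred = t2 + (2 * v_curr - v_prev)

    return R_pred, t_pred
```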

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 68
343 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for accurate simulation of joint performance in many applications, such as pre-operative planning, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured with reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location with functional methods is soft tissue artefact, caused by the relative motion between the markers and the bone. A main objective in human movement analysis is therefore the assessment of soft tissue artefact, since the accuracy of functional methods depends upon it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, or Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the artefact skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can serve as a basis for a correction procedure for hip joint kinematics.

Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement

Procedia PDF Downloads 256
342 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations weigh the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems. For these organisations, the only way of increasing throughput is to improve the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. These promising results show that enhanced disk-based systems can help improve hybrid data management within the broader context of in-memory systems.

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 390
341 Third Eye: A Hybrid Portrayal of Visuospatial Attention through Eye Tracking Research and Modular Arithmetic

Authors: Shareefa Abdullah Al-Maqtari, Ruzaika Omar Basaree, Rafeah Legino

Abstract:

A pictorial representation of hybrid forms in science-art collaboration has become a crucial issue in the course of developing a new painting technique. This is directly related to the reception of an invisible-recognition phenomenology. In the hybrid pictorial representation of invisible-recognition phenomenology, the challenge is how to depict the pictorial features of indescribable objects from their mental source, modality, and transparency. This paper proposes the hybrid painting technique Demonstrate, Resemble, and Synthesize (DRS), combining the hybrid aspect-recognition representation of picture understanding, the demonstrative mode, number theory patterns in the modular arithmetic system, and the coherence theory of visual attention in dynamic scene representation. Multi-method digital gaze data analyses, a pattern-modular table operation design, and a rotation parameter were used for the visualization. In the scientific process, a section-based analysis of eye-tracking video was conducted using Tobii T60 remote eye-tracking hardware and Tobii Studio analysis software to collect and analyze the eye movements of ten participants watching the video clip of Alexander Paulikevitch's performance 'Tajwal'. Results: fixation count in section one was positively and moderately correlated with that in section two, Pearson's r = .10 (p < .05, 2-tailed), as was fixation duration, Pearson's r = .10 (p < .05, 2-tailed). However, a paired-samples t-test indicates that scores were significantly higher for section one (M = 2.2, SD = .6) than for section two (M = 1.93, SD = .6), t(9) = 2.44, p < .05, d = 0.87. In the visual process, the exported gaze-count data N were assembled into hybrid forms of visuospatial attention using the table-mod-analysis operation. The explored hybrid guideline was simple to apply and could serve as an alternative approach to the sustainability of contemporary visual arts.
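
For readers who want to reproduce this style of analysis, a short sketch of the reported statistics using SciPy; the arrays are placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Fixation counts per participant for the two video sections
# (placeholder values; the study had ten participants).
section1 = np.array([2.9, 1.8, 2.4, 2.0, 2.6, 1.7, 2.3, 2.1, 2.8, 1.4])
section2 = np.array([2.2, 1.5, 2.1, 1.9, 2.4, 1.3, 2.0, 1.8, 2.3, 1.8])

r, p_r = stats.pearsonr(section1, section2)    # correlation between sections
t, p_t = stats.ttest_rel(section1, section2)   # paired-samples t-test (2-tailed)
diff = section1 - section2
d = diff.mean() / diff.std(ddof=1)             # Cohen's d for paired samples

print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"t({len(section1) - 1}) = {t:.2f} (p = {p_t:.3f}), d = {d:.2f}")
```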

Keywords: science-art collaboration, hybrid forms, pictorial representation, visuospatial attention, modular arithmetic

Procedia PDF Downloads 339
340 Analysis on the Need of Engineering Drawing and Feasibility Study on 3D Model Based Engineering Implementation

Authors: Parthasarathy J., Ramshankar C. S.

Abstract:

Engineering drawings play an important role in every part of an industry and are influential in every phase of the product development process. Traditionally, drawings are used for communication in industry because they are the clearest way to represent product manufacturing information. Until recently, manufacturing activities were driven by engineering data captured in 2D paper documents or digital representations of those documents. The need for engineering drawings has been unavoidable, yet drawings have a major disadvantage: this document-based approach is prone to errors and requires costly re-entry of data at every stage of the manufacturing life cycle. This motivates eliminating engineering drawings throughout the product development process and implementing 3D Model Based Engineering (3D MBE or 3D MBD). Adopting MBD appears to be the next logical step toward reducing time-to-market and improving product quality. Ideally, by fully applying the MBD concept, the product definition will no longer rely on engineering drawings throughout the product lifecycle. This project examines the need for engineering drawings and their influence in various parts of an industry, the case for implementing 3D Model Based Engineering with its advantages, and the technical barriers that must be overcome in order to implement it. It also addresses the requirements of neutral formats and their realisation in order to implement digital product definition principles in a lightweight format. To prove the concepts of 3D Model Based Engineering, a screw jack body is used as a demonstration part. At ZF Windpower Coimbatore Limited, 3D Model Based Definition has been implemented for the Torque Arm (machining and casting), steel tube, pinion shaft, cover, and energy tube.

Keywords: engineering drawing, model based engineering MBE, MBD, CAD

Procedia PDF Downloads 408
339 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning

Authors: Hong Zhang

Abstract:

The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they can make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on data collected in a previous study; the materials covered the characteristic error types of MT output and the error types that our Chinese students of Spanish could not detect in last year's experiment. This year we performed a pilot study to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students of Spanish. We used screen recording to capture the sessions and noted every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish, divided into two groups: Group A received PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it themselves (human translation), then translated it with Google Translate and post-edited the raw MT output. Comparing the results of the PE test, Group A identified and corrected errors faster than Group B, and did especially better on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. These results indicate that PE training can help Chinese students learn Spanish as a second language. In the future, we plan to focus on the students' difficulties during their Spanish studies and refine the PE training materials for teaching Spanish to Chinese students with machine translation.

Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning

Procedia PDF Downloads 123
338 A Comparative Study of the Proposed Models for the Components of the National Health Information System

Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi

Abstract:

The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for the strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a National Health Information System improves the quality of the health data, information, and knowledge used to support decision-making at all levels and in all areas of the health sector. Since fully identifying the components of this system is necessary for better planning and for managing the factors that influence its performance, this study comparatively explores different views of the system's components. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents covering the components of the national health information system in three parts: input, process, and output. Information was gathered through library resources and internet searches, and the data analysis is presented in comparative tables and qualitative form. Results: The findings show three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn, and Bodart model of 2000, the Health Metrics Network (HMN) model of the World Health Organization of 2008, and Gattini's model of 2009. In the input section (resources and structure), all three models require components for management and leadership, planning and program design, staffing, software and hardware facilities, and equipment. In the process section, all three models point to actions that ensure the quality of the health information system, and in the output section, all except the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. Conclusion: All three models discuss the components of health information in the input section only briefly, and the Lippeveld model overlooks the components in the process and output sections. The Health Metrics Network model therefore appears to give the most comprehensive presentation of the components of the health system in all three sections: input, process, and output.

Keywords: National Health Information System, components of the NHIS, Lippeveld Model

Procedia PDF Downloads 395
337 Quality Control of 99mTc-Labeled Radiopharmaceuticals Using the Chromatography Strips

Authors: Yasuyuki Takahashi, Akemi Yoshida, Hirotaka Shimada

Abstract:

99mTc-2-methoxy-isobutyl-isonitrile (MIBI) and 99mTc-mercaptoacetylglycylglycylglycine (MAG3) are heated to 368-372 K and labeled with 99mTc-pertechnetate. Quality control (QC) of 99mTc-labeled radiopharmaceuticals is usually performed at hospitals using liquid chromatography, which is difficult to carry out in general hospitals. We used chromatography strips to simplify QC and investigated the effects of the test procedures on quality control. The agent examined in this study is 99mTc-MAG3, with chloroform + acetone + tetrahydrofuran as the solvent and an ARC-380CL gamma counter. The varied conditions were heating temperature (293, 313, 333, 353, and 372 K), resting time after labeling (15 min at 293 K and 372 K, and 1 hour at 293 K), and expiration year (2011, 2012, 2013, 2014, and 2015). The measurement time on the gamma counter was one minute. A nuclear medicine clinician judged the quality of the preparation when deciding the usability of the retested agent. Two people conducted the test procedure twice in order to compare reproducibility. The percentage of radiochemical purity (%RCP) was approximately 50% under insufficient heat treatment and improved as the temperature and heating time increased. The %RCP also improved with resting time even at low temperatures. Furthermore, there was no deterioration with time after the expiration date. The objective of these tests was to determine soluble 99mTc impurities, including free 99mTc-pertechnetate and hydrolyzed-reduced 99mTc. We therefore attribute low purity to insufficient heating and to operational errors in the labeling procedure. It is concluded that quality control is a necessary procedure in nuclear medicine to ensure safe scanning, and it is suggested that labeling should follow the agent's specifications.
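
The %RCP arithmetic behind these results is simple; a sketch, assuming counts are read from the product and impurity segments of the strip (the segment assignment is our assumption, not stated in the abstract):

```python
def percent_rcp(bound_counts, impurity_counts):
    """Radiochemical purity from gamma-counter readings of strip segments.

    bound_counts: counts in the segment containing the labeled agent
    impurity_counts: counts in segments with free pertechnetate and
                     hydrolyzed-reduced 99mTc
    """
    total = bound_counts + impurity_counts
    return 100.0 * bound_counts / total

# e.g. 45,000 counts in the product segment, 5,000 in impurity segments
print(f"%RCP = {percent_rcp(45_000, 5_000):.1f}%")  # -> %RCP = 90.0%
```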

Keywords: quality control, tc-99m labeled radio-pharmaceutical, chromatography strip, nuclear medicine

Procedia PDF Downloads 292
336 Improving Efficiency and Effectiveness of FMEA Studies

Authors: Joshua Loiselle

Abstract:

This paper discusses the challenges engineering teams face in conducting Failure Modes and Effects Analysis (FMEA) studies, focusing on the specific topic of improving the efficiency and effectiveness of FMEA studies. Modern economic needs and increased business competition require engineers to constantly develop newer and better solutions within shorter timeframes and tighter margins. In addition, documentation requirements for meeting standards/regulatory compliance and customer needs are becoming increasingly complex and verbose. Managing open actions and continuous improvement activities across all projects, product variations, and processes in addition to daily engineering tasks is cumbersome, time-consuming, and susceptible to errors, omissions, and non-conformances. FMEA studies are proven methods for improving products and processes while subsequently reducing engineering workload and improving machine and resource availability through a pre-emptive, systematic approach of identifying, analyzing, and improving high-risk components. If implemented correctly, FMEA studies significantly reduce costs and improve productivity. However, the value of an effective FMEA is often shrouded by a lack of clarity and structure, misconceptions, and previous experiences and, as such, FMEA studies are frequently grouped with the other required information and documented retrospectively in preparation for customer requirements or audits. Performing studies in this way only adds cost to a project and perpetuates the misconception that FMEA studies are not value-added activities. This paper discusses the benefits of effective FMEA studies, the challenges related to conducting FMEA studies, best practices for efficiently overcoming challenges via structure and automation, and the benefits of implementing those practices.
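
The abstract does not specify a scoring scheme, but the conventional FMEA risk arithmetic, a Risk Priority Number computed from severity, occurrence, and detection on 1-10 scales, is one common way to rank high-risk components; a sketch with made-up failure modes:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10, impact of the failure
    occurrence: int  # 1-10, likelihood of the cause
    detection: int   # 1-10, 10 = hardest to detect before it escapes

    @property
    def rpn(self) -> int:
        # Conventional Risk Priority Number
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("seal leak", 7, 4, 6),
    FailureMode("connector corrosion", 5, 3, 8),
    FailureMode("firmware hang", 9, 2, 5),
]

# Rank failure modes so improvement actions target the highest risks first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name:20s} RPN = {m.rpn}")
```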

Keywords: FMEA, quality, APQP, PPAP

Procedia PDF Downloads 281
335 The Relationship between Renewable Energy, Real Income, Tourism and Air Pollution

Authors: Eyup Dogan

Abstract:

One criticism of the energy-growth-environment literature, to the best of our knowledge, is that only a few studies analyze the influence of tourism on CO₂ emissions, even though the tourism sector is closely related to the environment. Another criticism concerns the choice of methodology: panel estimation techniques that fail to account for heterogeneity and cross-sectional dependence across countries can produce forecasting errors. To fill these gaps in the literature, this study analyzes the impacts of real GDP, renewable energy, and tourism on carbon dioxide (CO₂) emission levels for the top 10 most-visited countries around the world. The study focuses on these countries because they have received about half of worldwide tourist arrivals in recent years and rank among the top countries in the Renewable Energy Country Attractiveness Index (RECAI). Pesaran's CD test and the average growth rates of the variables for each country reveal the presence of cross-sectional dependence and heterogeneity. Hence, this study uses second-generation econometric techniques (the cross-sectionally augmented Dickey-Fuller (CADF) and cross-sectionally augmented IPS (CIPS) unit root tests, the LM bootstrap cointegration test, and the DOLS and FMOLS estimators), which are robust to these issues and make the reported results more reliable. It is found that renewable energy mitigates pollution, whereas real GDP and tourism contribute to carbon emissions. Regulatory policies are therefore necessary to raise awareness of sustainable tourism. In addition, the use of renewable energy and the adoption of clean technologies in the tourism sector, as well as in producing goods and services, play significant roles in reducing emission levels.
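
As an illustration of the first diagnostic step mentioned above, a sketch of Pesaran's CD statistic for detecting cross-sectional dependence; synthetic data stand in for the study's panel:

```python
import numpy as np
from scipy import stats

def pesaran_cd(x):
    """Pesaran CD test for cross-sectional dependence.

    x: (T, N) array of T periods for N countries (a variable or residuals).
    Under the null of cross-sectional independence, CD is asymptotically
    standard normal. Returns the statistic and its two-sided p-value.
    """
    T, N = x.shape
    rho = np.corrcoef(x, rowvar=False)       # N x N pairwise correlations
    upper = rho[np.triu_indices(N, k=1)]     # rho_ij for i < j
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * upper.sum()
    return cd, 2 * stats.norm.sf(abs(cd))

# e.g. a stand-in for emissions of the 10 most-visited countries over 25 years
x = np.random.default_rng(0).standard_normal((25, 10))
print(pesaran_cd(x))
```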

Keywords: air pollution, tourism, renewable energy, income, panel data

Procedia PDF Downloads 164
334 Performance Evaluation of the CareSTART S1 Analyzer for Quantitative Point-Of-Care Measurement of Glucose-6-Phosphate Dehydrogenase Activity

Authors: Haiyoung Jung, Mi Joung Leem, Sun Hwa Lee

Abstract:

Background & Objective: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a genetic abnormality that results in an inadequate amount of G6PD, leading to increased susceptibility of red blood cells to reactive oxygen species and hemolysis. The present study evaluated the careSTART S1 analyzer for measuring the ratio of G6PD activity to hemoglobin (Hb). Methods: Precision for G6PD activity and hemoglobin measurement was evaluated using control materials at two levels, with five repeated runs per day for five days. The analytic performance of the careSTART S1 analyzer was compared with spectrophotometry in 40 patient samples. Reference ranges suggested by the manufacturer were validated in 20 healthy males and 20 healthy females. Results: The careSTART S1 analyzer demonstrated a precision of 6.0% for the low-level control (14-45 U/dL) and 2.7% for the high-level control (60-90 U/dL) in G6PD activity, and 1.4% for hemoglobin (7.9-16.3 u/g Hb). A comparison of the G6PD-to-Hb ratio between the careSTART S1 analyzer and spectrophotometry showed an average difference of 29.1%, with a positive bias for the careSTART S1 analyzer. All samples from the healthy population validated the suggested reference ranges for males (≥2.19 U/g Hb) and females (≥5.83 U/g Hb). Conclusion: The careSTART S1 analyzer demonstrated good analytical performance and can replace the current spectrophotometric measurement of G6PD enzyme activity. From a laboratory-management perspective, it is a reasonable point-of-care option, with minimal handling of samples and reagents and automatic calculation of the ratio of measured G6PD activity to Hb concentration, minimizing the clerical errors involved in manual calculation.
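
A sketch of the precision and bias arithmetic used in evaluations of this kind, assuming CV% over repeated control runs and an average percent difference against the reference method; the readings below are placeholders, not the study's data:

```python
import numpy as np

def cv_percent(runs):
    """Precision as coefficient of variation (CV%) of repeated control runs."""
    runs = np.asarray(runs, dtype=float)
    return 100.0 * runs.std(ddof=1) / runs.mean()

def mean_percent_difference(device, reference):
    """Average % difference of the POCT device against the reference method."""
    device, reference = np.asarray(device), np.asarray(reference)
    return 100.0 * np.mean((device - reference) / reference)

# Placeholder readings: 5 runs/day x 5 days of the low-level G6PD control (U/dL)
low_control = [30.1, 28.4, 31.5, 29.0, 27.8, 30.9, 29.6, 28.2, 31.0, 30.3,
               29.4, 30.6, 28.8, 29.9, 30.2, 28.5, 31.2, 29.7, 30.0, 28.9,
               30.8, 29.2, 30.4, 29.5, 30.7]
print(f"CV = {cv_percent(low_control):.1f}%")
```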

Keywords: POCT, G6PD, performance evaluation, careSTART

Procedia PDF Downloads 47
333 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age

Authors: S. N. Susenburger, D. Boecker

Abstract:

In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting to orchestrating ecosystems, platform business models, and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they face one of the largest organizational and cultural changes in history. This paper elaborates on how a manufacturing giant transformed its Corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and approaches of the Innovation Inside-Out/Outside-In strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and the transformation into an IoT company, and it outlines how the persona of the 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant, innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations, and surveys led to conclusions about the Innovation Inside-Out/Outside-In strategy tandem. The findings indicate that the phases and maturity level at which the strategy is activated are crucial: the cultural aspects of the business and the regional ecosystem need to be considered, as well as the cultural readiness of management and active contributors. The 'not invented here' syndrome is a barrier in large corporations that must be addressed and managed to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. The paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.

Keywords: innovation management, innovation culture, innovation methodologies, digital transformation

Procedia PDF Downloads 111
332 The Architectural Conservation and Restoration Problems of Istanbul’s “Yalı” Waterfront Mansions

Authors: Zeynep Tanrıverdi

Abstract:

The Bosphorus is an international waterway in the Turkish city of Istanbul connecting the Sea of Marmara and the Black Sea. The Bosphorus, which has formed an important part of the silhouette of Istanbul throughout history, has also influenced the design of the coastal structures built around it. The waterfront mansions located on both sides of the Bosphorus by the sea, generally of two or three storeys, are called "yalı". The yalı buildings, with the architectural characteristics of the traditional Turkish house, are the most grandiose examples of Ottoman residential architecture. However, the classical Ottoman yalı architecture of the 18th century survives only in engravings; today only the more modest and smaller yalı examples from the 19th century remain, the rest having disappeared over time. The study aims to reveal the architectural conservation and restoration problems of the waterfront mansions and to propose solutions. Firstly, the development of waterfront mansion architecture on the Bosphorus is evaluated in its historical process. Secondly, the waterfront mansions and their architectural features are explained. Thirdly, the conservation and restoration problems that have caused the disappearance of waterfront mansions are discussed. These problems include disruptions in legal regulations and practices concerning the Bosphorus, dramatic changes in Turkey's socio-cultural life from the Ottoman Empire to the present, inadequate economic resources, negative environmental effects, and errors in restoration work. Finally, solutions are proposed for the problems that threaten the protection of the waterfront mansions. The study reviews the literature on waterfront mansions using historical reports, photographs, maps, and drawings in archival documents. It is hoped that this study will contribute to the conservation of the "Yalı" waterfront mansions, which occupy a particular place in the cultural heritage of Turkey, and to their transmission to the next generation with their authentic values.

Keywords: bosphorus architecture, conservation, heritage, Istanbul, waterfront mansions (yalı)

Procedia PDF Downloads 47
331 Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India

Authors: Ranitha Guna Selvi D, Ramakrishnan R, Mohideen Abdul Kader

Abstract:

Purpose: To study the role of vision centres in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centres in rural South India from January 2021 to December 2021 was done. Medical records of 1,08,581 patients, both new and reviewed (79,562 newly registered patients and 29,019 review patients from the 15 vision centres), were included for data analysis. All patients registered at a vision centre underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, and fundus examination. Results: A total of 1,08,581 patients were included in the study, of whom 79,562 were newly registered and 29,019 were review patients. There were 52,201 (48.1%) males and 56,308 (51.9%) females. The mean age of all examined patients was 41.03 ± 20.9 years (standard deviation), ranging from 1 to 113 years. Presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients, 22,770 had uncorrected refractive error in the right eye and 22,721 in the left eye. A glass prescription was given to 17,178 (15.8%) patients, and 8,109 (7.5%) patients were referred to the base hospital for a specialty clinic opinion or for cataract surgery. Conclusion: A vision centre using teleconsultation as a comprehensive eye-screening unit is a very effective tool in reducing avoidable visual impairment caused by uncorrected refractive error. The vision centre model is efficient because it facilitates early detection and management of uncorrected refractive errors.

Keywords: refractive error, uncorrected refractive error, vision center, vision technician, teleconsultation

Procedia PDF Downloads 114
330 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load forecasting plays a key role in making today's and tomorrow's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance and to execute Demand Response strategies more effectively, enabling higher sustainability, better quality of service, and affordable electricity tariffs. Load forecasting is comparatively easy yet effective at larger geographic scales; in smart micro-grids, however, the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a smart micro-grid. Three short-term load forecasting techniques, i.e., linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of Gain is introduced, which serves as an indicator of short-term prediction capability over a consistent time span. Two models, for 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
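
A minimal sketch of such a comparison using scikit-learn on a synthetic hourly load series; kernel ridge regression with an RBF kernel stands in for the radial basis function network, and the paper's Gain indicator is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

# Synthetic hourly load: daily cycle plus noise, standing in for metered data.
rng = np.random.default_rng(1)
load = 10 + np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.3 * rng.standard_normal(2000)

# 1-hour-ahead forecasting: predict load[t] from the previous 24 hourly values.
H = 24
X = np.stack([load[i : i + H] for i in range(len(load) - H)])
y = load[H:]
split = int(0.8 * len(X))  # chronological train/test split

models = {
    "linear regression": LinearRegression(),
    "ANN (MLP)": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "RBF (kernel ridge)": KernelRidge(kernel="rbf", alpha=1.0),
}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name:20s} MAE = {mae:.3f}")
```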

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 441
329 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real-time non-invasive Brain Computer Interfaces play a significant role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed, with particular focus on the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed to design an EEG-based real-time Brain Computer Interface system for analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer, and a set of external stimulators. Primary signal analysis and processing of the EEG acquired in real time shall be performed using the MATLAB-based brain mapping toolboxes EEGLAB/BCILAB. This shall be followed by the development of a self-defined MATLAB algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as Artificial Neural Networks. The final system would result in an inexpensive, portable, and more intuitive real-time Brain Computer Interface for controlling prosthetic devices by translating different brain states into operative control signals.
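
As one possible realization of the spectral-feature step, a sketch of band-power extraction from a single EEG channel via Welch's method; the 128 Hz rate matches the Emotiv headset, and the band edges are conventional choices, not taken from the paper:

```python
import numpy as np
from scipy.signal import welch

def band_powers(eeg, fs=128):
    """Spectral features from one EEG channel via Welch's PSD estimate.

    Returns mean power in the classic frequency bands; these values can
    feed an artificial neural network as part of a hybrid feature set.
    """
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# 10 s of synthetic signal standing in for one headset channel
t = np.arange(0, 10, 1 / 128)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(band_powers(eeg))
```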

Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications

Procedia PDF Downloads 299
328 A Highly Reliable Space-Borne File System with Applications of Device Partition and Intra-Channel Pipeline in NAND Flash

Authors: Xin Li, Ji-Yang Yu, Yue-Hua Niu, Lu-Yuan Wang

Abstract:

As an essential link in the space data acquisition chain, space-borne storage systems based on NAND flash have gradually been implemented in spacecraft. Faced with massive, parallel, and varied data on board, efficient data management becomes an important issue in storage research. To meet the requirements of high performance and reliability in NAND flash storage systems, a combined hardware and file system design can drastically increase system dependability, even for missions with a very long duration. More sophisticated flash storage concepts with advanced operating systems have been researched to improve the reliability of NAND flash storage systems on satellites. In this paper, an architecture for an on-board file system with multi-channel data acquisition and storage is proposed, which achieves large capacity and high performance by combining intra-channel pipelining and device partitioning in NAND flash. Multi-channel data at different rates are stored as independent files in a parallel storage system with device partitioning, which ensures efficient and reliable throughput for file operations. For massive, high-speed data storage, an efficiency assessment model is established to derive the bandwidth formula of the intra-channel pipeline. Information tables designed in Magnetoresistive RAM (MRAM) hold the bad-block management for the NAND flash and the arrangement of file system addresses, ensuring high reliability of data storage. During the full-load test, the throughput of a 3D PLUS Module 160 Gb NAND flash reached 120 Mbps for storage and 120 Mbps for playback, which satisfies the requirement of multi-channel data acquisition on a satellite. Compared with previous literature, the experimental results verify the advantages of the proposed system.
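
The pipeline bandwidth formula itself is not given in the abstract; the sketch below shows only the generic arithmetic of intra-channel pipelining, where overlapping the register-load and program stages bounds throughput by the slower stage (timing parameters are illustrative, not the module's specification):

```python
# Page-level write bandwidth of one NAND channel, with and without
# intra-channel pipelining (illustrative parameters, not the module's specs).
PAGE_BYTES = 4096     # page size
T_LOAD = 50e-6        # time to shift a page into the chip's register (s)
T_PROG = 200e-6       # time to program the page into the array (s)

# Serial operation: load and program happen back to back.
bw_serial = PAGE_BYTES / (T_LOAD + T_PROG)

# Pipelined operation: while one plane programs, the next page is loaded,
# so the channel is limited only by the slower of the two stages.
bw_pipelined = PAGE_BYTES / max(T_LOAD, T_PROG)

print(f"serial:    {bw_serial * 8 / 1e6:.1f} Mbps")
print(f"pipelined: {bw_pipelined * 8 / 1e6:.1f} Mbps")
```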

Keywords: device partition architecture, intra-channel pipelining, nand flash, parallel storage

Procedia PDF Downloads 267
327 Explaining the Role of Iran Health System in Polypharmacy among the Elderly

Authors: Mohsen Shati, Seyede Salehe Mortazavi, Seyed Kazem Malakouti, Hamidreza Khanke, Fazlollah Ahmadi

Abstract:

Taking unnecessary or excessive medication, or using drugs with no indication (polypharmacy), by people of all ages, and especially the elderly, is associated with increased adverse drug reactions (ADR), medical errors, hospitalization, and escalating costs. It may be facilitated or impeded by the healthcare system. In this study, we describe the role of the health system in the practice of polypharmacy among Iranian elderly. In this inductive qualitative content analysis, following the Graneheim and Lundman method, purposeful sample selection was continued until saturation. Participants were selected from doctors, pharmacists, policy-makers, and the elderly; a total of 25 persons (9 men and 16 women) participated in the study. Data analysis, after merging codes with similar characteristics, revealed 14 subcategories and six main categories: the referral system, physicians' accessibility, health data management, the drug market, law enforcement, and social protection. Some conditions of the healthcare system have given rise to polypharmacy in the elderly. In the absence of a comprehensive specialty and subspecialty referral system, patients may visit any physician's office and may well be confused by numerous doctors' prescriptions. The absence of electronic patient records, failure to comply with laws, and the lack of robust enforcement of existing laws and of close surveillance are among the contributing factors. Inadequate insurance and supportive services are also evident. Age-specific care provision has not yet been institutionalized, and an inadequate specialist workforce plays a major role. The health system therefore cannot be ignored as a contributing factor when designing effective interventions to fix the problem.

Keywords: elderly, polypharmacy, health system, qualitative study

Procedia PDF Downloads 133
326 Delisting Wave: Corporate Financial Distress, Institutional Investors Perception and Performance of South African Listed Firms

Authors: Adebiyi Sunday Adeyanju, Kola Benson Ajeigbe, Fortune Ganda

Abstract:

In the past three decades, there has been a notable increase in the number of firms delisting from the Johannesburg Stock Exchange (JSE) in South Africa, and this recent wave of delistings motivated this study. The study explores the influence of institutional investor perceptions on the financial distress experienced by delisted firms in the South African market, and further examines the impact of financial distress on the corporate performance of delisted firms. It uses data on delisted firms spanning 2000 to 2023, applying FGLS (Feasible Generalized Least Squares) for the short-run and PCSE (Panel-Corrected Standard Errors) for the long-run effects of the relationship. The findings indicate that a decline in institutional investors' perceptions was associated with the corporate financial distress of the delisted firms, particularly during the delisting year and the few years preceding the delisting announcement. The study underscores the importance of investor recognition in corporate financial distress and the delisting wave among listed firms, a finding that supports stakeholder theory. It offers insights for company management, investors, governments, policymakers, stockbrokers, lending institutions, bankers, the stock market, and other stakeholders in their decision-making. Based on these findings, it is recommended that corporate management improve governance strategies that can help companies' financial performance. Accountability and transparency through governance must also be improved, with government support through the introduction of policies and strategies and an enabling environment that helps companies perform better.

Keywords: delisting wave, institutional investors, financial distress, corporate performance, investors’ perceptions

Procedia PDF Downloads 20
325 Bank, Stock Market Efficiency and Economic Growth: Lessons for ASEAN-5

Authors: Tan Swee Liang

Abstract:

This paper estimates the associations of bank and stock market efficiency with real per capita GDP growth by examining panel data across three different regions, using the Panel-Corrected Standard Errors (PCSE) regression developed by Beck and Katz (1995). Data from five economies in ASEAN (Singapore, Malaysia, Thailand, the Philippines, and Indonesia), five economies in Asia (Japan, China, Hong Kong SAR, South Korea, and India), and seven economies in the OECD (Australia, Canada, Denmark, Norway, Sweden, the United Kingdom, and the United States), between 1990 and 2017, are used. The empirical findings suggest, first, that for the Asia-5 a high bank net interest margin means greater bank profitability, spurring economic growth. Second, for the OECD-7, low bank overhead costs (as a share of total assets) may reflect weak competition and weak investment in providing superior banking services, dampening economic growth. Third, the stock market turnover ratio has a negative association with OECD-7 economic growth but a positive association for the Asia-5, suggesting that the relationship between liquidity and growth is ambiguous. Lastly, for the ASEAN-5, high bank overhead costs (as a share of total assets) may suggest that expenses have not been channelled efficiently into income-generating activities. One practical implication of the findings is that policy makers should take measures toward financial liberalisation policies that boost growth through the efficiency channel, so that funds are allocated efficiently through the financial system between the financial and real sectors.
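
A sketch of OLS with panel-corrected standard errors in the spirit of Beck and Katz (1995), written out directly in NumPy on a synthetic balanced panel; this is our own illustration, not the authors' code:

```python
import numpy as np

def pcse_ols(y, X, n_units, n_periods):
    """OLS with Beck-Katz panel-corrected standard errors.

    y: (N*T,) outcome, X: (N*T, k) regressors, stacked unit by unit
    (all T periods of unit 1, then unit 2, ...). Balanced panel assumed.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = (y - X @ beta).reshape(n_units, n_periods)  # residuals per unit

    sigma = e @ e.T / n_periods                     # N x N contemporaneous cov
    omega = np.kron(sigma, np.eye(n_periods))       # full error covariance
    cov_beta = XtX_inv @ X.T @ omega @ X @ XtX_inv  # sandwich estimator
    return beta, np.sqrt(np.diag(cov_beta))

# e.g. ASEAN-5: N = 5 economies, T = 28 years (1990-2017), 3 regressors
N, T, k = 5, 28, 3
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(N * T), rng.standard_normal((N * T, k - 1))])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.standard_normal(N * T)
print(pcse_ols(y, X, N, T))
```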

Keywords: financial development, banking system, capital markets, economic growth

Procedia PDF Downloads 115
324 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is expressed in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. MATLAB simulations in an underwater acoustic environment show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration; the improved method can be applied to registration in underwater multi-target detection.
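
A sketch of the LS registration idea for two sonars, assuming a constant additive range/bearing bias per sonar and linearizing the polar-to-Cartesian map; with a single target track only relative biases may be well determined. This is our own illustration, not the authors' formulation:

```python
import numpy as np

def register_biases(z1, z2, rt1, rt2):
    """Least-squares registration of two sonars' range/bearing biases.

    z1, z2:   (K, 2) Cartesian target positions reported by sonar 1 and 2,
              already projected into the common coordinate frame
    rt1, rt2: (K, 2) corresponding (range, bearing) measurements
    Solves z1_k - z2_k = J1_k @ b1 - J2_k @ b2 for b = [b_r1, b_th1, b_r2, b_th2],
    where J is the Jacobian of (r, th) -> (r cos th, r sin th) w.r.t. the biases.
    """
    def jac(r, th):
        return np.array([[np.cos(th), -r * np.sin(th)],
                         [np.sin(th),  r * np.cos(th)]])

    K = len(z1)
    A = np.zeros((2 * K, 4))
    d = (z1 - z2).reshape(-1)               # disagreement between the sonars
    for k in range(K):
        A[2*k:2*k+2, :2] = jac(*rt1[k])     # sensitivity to sonar-1 biases
        A[2*k:2*k+2, 2:] = -jac(*rt2[k])    # sensitivity to sonar-2 biases
    b, *_ = np.linalg.lstsq(A, d, rcond=None)
    return b  # [range bias 1, angle bias 1, range bias 2, angle bias 2]
```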

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 139
323 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the educated assumption that individual leg times follow log-normal distributions. Our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression, and Gaussian process regression.
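
The Fenton-Wilkinson step approximates a sum of independent log-normal leg times by a single log-normal matched to the first two moments; a sketch, with leg parameters assumed for illustration:

```python
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Fenton-Wilkinson approximation of a sum of independent log-normals.

    mu, sigma: arrays of the summands' log-normal parameters. Matches the
    first two moments of the sum and returns (mu_z, sigma_z) of the
    approximating log-normal, e.g. the distribution of a relay team's
    changeover time as the sum of log-normal leg times.
    """
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    m1 = np.exp(mu + sigma**2 / 2).sum()                      # mean of the sum
    var = ((np.exp(sigma**2) - 1)
           * np.exp(2 * mu + sigma**2)).sum()                 # variance of the sum
    sigma_z2 = np.log(var / m1**2 + 1)
    mu_z = np.log(m1) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# e.g. cumulative time after three legs with assumed leg parameters
print(fenton_wilkinson([8.0, 8.1, 7.9], [0.15, 0.20, 0.18]))
```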

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 105
322 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we present the mathematical basis of a new trend of algorithms that treats a historical reason for continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are specified, we first introduce a detailed algorithm to derive the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Feedback is then introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 111
321 Evaluation of the Grammar Questions at the Undergraduate Level

Authors: Preeti Gacche

Abstract:

A considerable part of undergraduate-level English examination papers is devoted to grammar. Hence, the grammar questions in the question papers were evaluated, and the opinions of both students and teachers about them were obtained and analyzed. A grammar test of 100 marks was administered to 43 students to check their performance, and the question papers were evaluated by 10 different teachers and their scores compared. The analysis of 38 university question papers reveals that, on average, 20 percent of marks are allotted to grammar and that almost all grammar topics are tested. Abundant use of grammatical terminology is observed in the questions. Decontextualization, repetition, the possibility of multiple correct answers, and grammatical errors in framing the questions have also been observed. The opinions of teachers and students about grammar questions vary in many respects. The students' responses were analyzed by medium of instruction and by sex; neither the medium at the school level nor the sex of the students plays a role in interest in the study of grammar. English-medium students solve grammar questions intuitively, whereas non-English-medium students must recall the rules of grammar. Prepositions, verbs, articles, and modal auxiliaries are easy topics for most students, whereas the use of conjunctions is the most difficult. Out-of-context grammar items are difficult to answer in comparison with contextualized items, so contextualized texts for testing grammar items are desirable. No formal training in setting questions is imparted to teachers by competent authorities such as the university; teachers need to be trained in testing. Statistically, there is no significant change of score with a change in the rater in the testing of grammar items. There is scope for future improvement: question papers need to be evaluated, and feedback obtained from students and teachers.

Keywords: context, evaluation, grammar, tests

Procedia PDF Downloads 328
320 The Influence of Emotion on Numerical Estimation: A Drone Operators’ Context

Authors: Ludovic Fabre, Paola Melani, Patrick Lemaire

Abstract:

The goal of this study was to test whether and how emotions influence drone operators' estimation skills. The empirical study was run in the context of numerical estimation. Participants saw a two-digit number together with a collection of cars and had to indicate whether the collection was larger or smaller than the number. The two-digit numbers ranged from 12 to 27, and collections included 3-36 cars. The presentation of the collections was dynamic (each car moved 30 deg. per second to the right). Half the collections were smaller (fewer than 20 cars) and half larger (more than 20 cars). Splits between the number of cars in a collection and the two-digit number were either small (± 1 or 2 units; e.g., the collection included 17 cars and the two-digit number was 19) or large (± 8 or 9 units; e.g., 17 cars and '9'). Half the collections included more items (and half fewer items) than indicated by the two-digit number. Before and after each trial, participants saw an image inducing negative emotions (e.g., mutilations) or neutral emotions (e.g., a candle), selected from the International Affective Picture System (IAPS). At the end of each trial, participants had to say whether the second picture was the same as or different from the first. Results showed distinct effects of emotions on RTs and percent errors: participants' performance was modulated by emotions. They were slower on negative trials than on neutral trials, especially for the most difficult items, and made more errors on small-split than on large-split problems. Moreover, participants greatly overestimated the number of cars when in a negative emotional state. These findings suggest that emotions influence numerical estimation and that the effects of emotion on estimation interact with stimulus characteristics. They have important implications for understanding the role of emotions in estimation skills and, more generally, how emotions influence cognition.

Keywords: drone operators, emotion, numerical estimation, arithmetic

Procedia PDF Downloads 93
319 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the most important components in many application areas of video data systems, and it has recently attracted extensive interest from practitioners and academics in different fields. While video event detection has been the subject of broad study efforts, considerably less existing methodology has considered multi-modal data and efficiency-related issues. In soccer matches, various doubtful situations arise that cannot be effectively judged by the referee committee. A framework that objectively checks image sequences would prevent incorrect interpretations due to errors or the high speed of events. Bayesian networks provide a structure for dealing with this uncertainty using a simple graphical structure together with probability calculus. We propose an efficient structure for analysis and summarization of soccer videos utilizing object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and good approximation of an otherwise intractable model of users' relationships in a social network. This approach has several advantages: first, the t-cherry tree gives the best approximation within the class of junction trees; second, the construction of a t-cherry junction tree can be largely parallelized; and finally, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work over a comprehensive data set comprising several soccer videos captured at different places.

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 297
318 Splinting in Plastic Surgery Hand Trauma Setting

Authors: Samar Mousa, Rebecca Shirley

Abstract:

Injuries to the hand account for 20% of all emergency department attendances, with an estimated annual treatment cost of over £100 million in the UK. Functional impairment resulting from hand injuries often necessitates absence from employment, and the resulting loss of productivity is estimated to cost the UK economy an additional £600m. Appropriate and early management is vital to preserve anatomy, prevent stiffness, and allow function. The initial assessment and management of hand injuries are usually undertaken by junior staff, many of whom have little or no training or experience in splinting hand fractures. In our plastic surgery department at Stoke Mandeville Hospital, Buckinghamshire Trust, we carried out an audit project to detect errors in hand splinting between April 2022 and July 2022 and to identify measures to support junior doctors, nurses, and hand therapists in providing the best possible care for hand trauma patients. Our standards were the British Society for Surgery of the Hand (BSSH) standard of care in hand trauma, the AO Surgery Reference, and the Stoke Mandeville Hospital hand therapy mini protocol of February 2022. Over the 4-month period, 5 cases were identified: two of wrong splint choice, two of early removal of the splint, and one overly tight splint that required changing. To avoid these mistakes, a training programme was delivered to junior doctors and nurses, in collaboration with the hand therapy team, on splinting the hand for different injuries such as fractures and tendon, muscle, and ligamentous injuries. In addition, a poster was hung in the examination rooms and theatres to help junior doctors reach the correct decision.

Keywords: splinting, hand trauma, plastic surgery, tendon injury, hand fracture

Procedia PDF Downloads 64
317 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures

Authors: Michał Lidner, Zbigniew Szcześniak

Abstract:

The ability to properly estimate blast load overpressure plays an important role in the safe design of buildings. The issue of blast loading on structural elements has been explored for many years; however, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time, which introduces errors when comparing the real and numerical reactions of elements. Nonetheless, it is possible to construct a time history that is closer to the real blast load overpressure function. The paper presents a method for numerical analysis of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer, with respect to the adiabatic process rule. A system of three equations (conservation of mass, momentum, and energy) describes the flow of a volume of gaseous medium in the area remote from building compartments that could inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free-field explosion, an explosion inside a rigid (insusceptible) steel tube (the 1D case), and an explosion inside a rigid cube (the 3D case). The results of the numerical analysis were compared with literature reports, studying the values of impulse and pressure and their duration. Overall, good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.
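
For reference, the conserved-variable system that the Finite Volume Method discretizes can be written, in one dimension and before adding the paper's heat-loss terms, as the Euler equations with the ideal-gas closure:

```latex
\frac{\partial}{\partial t}
\begin{pmatrix} \rho \\ \rho u \\ \rho E \end{pmatrix}
+
\frac{\partial}{\partial x}
\begin{pmatrix} \rho u \\ \rho u^2 + p \\ (\rho E + p)\,u \end{pmatrix}
= \mathbf{0},
\qquad
p = (\gamma - 1)\left(\rho E - \tfrac{1}{2}\rho u^2\right)
```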

Keywords: adiabatic process, air shock wave, explosive, finite volume method

Procedia PDF Downloads 166
316 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model

Authors: Fu Jia

Abstract:

The effects of soil-structure interaction (SSI) are often studied using axial-symmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models, embedded in uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of the model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller and heavier structures, deeper foundations and deeper soil layer. For example, for a stiff structure like Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.

Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping

Procedia PDF Downloads 236
315 Forecasting Lake Malawi Water Level Fluctuations Using Stochastic Models

Authors: M. Mulumpwa, W. W. L. Jere, M. Lazaro, A. H. N. Mtethiwa

Abstract:

The study considered Seasonal Autoregressive Integrated Moving Average (SARIMA) processes to select an appropriate stochastic model to forecast monthly Lake Malawi water levels for the period 1986 through 2015. The appropriate model was chosen in the SARIMA (p, d, q)(P, D, Q)s framework, based on the autocorrelation function (ACF), partial autocorrelation function (PACF), Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Box-Ljung statistics, the correlogram, and the distribution of residual errors. A SARIMA (1, 1, 0)(1, 1, 1)12 model was selected to forecast the monthly Lake Malawi water levels from August 2015 to December 2021. The plotted time series showed that Lake Malawi water levels have been decreasing since 2010, but not as much as in 1995 through 1997. The forecast of the Lake Malawi water levels until 2021 showed a mean of 474.47 m, ranging from 473.93 to 475.02 m at the 80% and 90% confidence intervals, against registered means of 473.398 m in 1997 and 475.475 m in 1989, the lowest and highest water levels in the lake, respectively, since 1986. The forecast also showed that the water level of Lake Malawi will drop by 0.57 m compared to the mean water levels recorded in previous years. These results suggest that the Lake Malawi water level is unlikely to fall below the 1997 level. Therefore, utilisation and management of water-related activities and programmes on the lake, among others, should allow for such scenarios. The findings suggest a need to manage Lake Malawi jointly and prudently with other stakeholders, starting from the catchment area. This will reduce the impact of anthropogenic activities on the lake's water quality, water level, and aquatic and adjacent terrestrial ecosystems, thereby ensuring its resilience to climate change impacts.
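
A sketch of fitting the selected specification with statsmodels and producing the 2015-2021 forecast with 80% and 90% intervals; the series below is synthetic, standing in for the gauge records:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Monthly Lake Malawi water levels, Jan 1986 - Jul 2015. The series below is
# synthetic (seasonal cycle plus slow drift); replace it with gauge records.
idx = pd.date_range("1986-01", "2015-07", freq="MS")
rng = np.random.default_rng(0)
levels = pd.Series(
    474.0
    + 0.4 * np.sin(2 * np.pi * np.arange(len(idx)) / 12)
    + 0.02 * rng.standard_normal(len(idx)).cumsum(),
    index=idx,
)

# SARIMA(1, 1, 0)(1, 1, 1)12, the specification selected in the study
fit = SARIMAX(levels, order=(1, 1, 0), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

# Forecast Aug 2015 - Dec 2021 (77 months) with 80% and 90% intervals
forecast = fit.get_forecast(steps=77)
print(forecast.predicted_mean.tail())
print(forecast.conf_int(alpha=0.20).tail())  # 80% interval
print(forecast.conf_int(alpha=0.10).tail())  # 90% interval
```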

Keywords: forecasting, Lake Malawi, water levels, water level fluctuation, climate change, anthropogenic activities

Procedia PDF Downloads 203