Search results for: Errors and Mistakes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1071

291 Mapping of Geological Structures Using Aerial Photography

Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash

Abstract:

Rapid growth in data acquisition technologies such as drones has led to advances in, and strong interest in, collecting high-resolution images of geological fields. While airborne platforms are advantageous in capturing a high volume of data in short flights, a number of challenges have to be overcome for efficient analysis of this data, especially during data acquisition, image interpretation, and processing. We introduce a method that allows effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. Satellite images are not used because of their inadequate resolution, their age (possibly a year or more), availability problems, the difficulty of capturing exactly the required scene, and the lack of night vision. The method combines advanced automated image-interpretation technology with human interaction to model structures. Geological structures are first detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified from a digital elevation model (DEM). From this information, the dip and dip direction of each structure can be calculated, as sketched below. The structural map is generated by a specified methodology that starts from choosing the appropriate camera, the camera's mounting system, and the UAV design (based on the area and application), and addresses challenges of airborne systems such as errors in image orientation, payload limits, mosaicing, georeferencing, and registration of different images, before applying the DEM. The paper shows the potential of using our method for accurate and efficient modeling of geological structures, particularly those captured from remote, inaccessible, and hazardous sites.
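
As an illustration of the dip computation step, the hedged sketch below fits a plane to (x, y, z) points sampled from a DEM along a detected structure and derives the dip angle and dip direction; the numpy-based helper dip_and_direction is hypothetical and not taken from the paper.

```python
import numpy as np

def dip_and_direction(points):
    """Estimate dip and dip direction of a geological surface.

    points: (N, 3) array of (x, y, z) DEM samples along a detected
    structure, with x = east, y = north, z = up. Fits the plane
    z = a*x + b*y + c by least squares.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, _), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    dip = np.degrees(np.arctan(np.hypot(a, b)))              # angle from horizontal
    dip_direction = np.degrees(np.arctan2(-a, -b)) % 360.0   # azimuth of steepest descent
    return dip, dip_direction

# Example: a synthetic surface dipping 45 degrees towards the east
pts = np.array([[x, y, -x] for x in range(5) for y in range(5)], dtype=float)
print(dip_and_direction(pts))  # approximately (45.0, 90.0)
```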

Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures

Procedia PDF Downloads 686
290 Origin Variability of Superior Vesical Artery

Authors: Waseem Al-Talalwah

Abstract:

The superior vesical artery usually arises directly from the anterior division of the internal iliac artery. It may arise from the umbilical artery as three or four branches supplying the upper and middle parts of the bladder. The current study focuses on the different origins of the superior vesical artery in order to provide sufficient data for surgeons to decrease iatrogenic faults. The superior vesical artery arises directly from the anterior division of the internal iliac artery in 24.5% of cases, whereas it arises indirectly, from the umbilical artery, in 83.7%. Further, it may arise from any branch of the anterior division, as from the uterine and obturator arteries in 6.1% and 6.3% respectively. It also shares the origin of the internal pudendal and inferior gluteal arteries, arising from the gluteopudendal trunk, in 4.1%. The superior vesical artery arises as a single, double, triple, or quadruple vessel in 69.4%, 20.4%, 8.2%, and 2% of cases respectively. In case of cystectomy for bladder cancer, surgeons have to be aware of the origin variability of the superior vesical artery to prevent post-surgical complications such as intra-pelvic bleeding. Intra-pelvic bleeding also has to be expected in hysterectomy; therefore, great caution regarding the vesical branches arising from the uterine artery has to be exercised. In case of aneurysm resection of an inferior gluteal artery arising from the gluteopudendal trunk, surgeons have to be careful of the vascular supply of the urinary bladder coming from above and below this common trunk, from the superior and inferior vesical arteries respectively. The present study therefore increases awareness of the clinical significance of the superior vesical artery origin, helping surgeons minimise iatrogenic errors.

Keywords: superior vesical artery, anterior division, internal iliac, internal pudendal, inferior gluteal, intra-pelvic bleeding, hysterectomy, cystectomy

Procedia PDF Downloads 394
289 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV), with the target location and the impact angle given as constraints. The MaRV uses an explicit guidance law called Vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent; the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
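
The parametric study described above can be pictured as a grid sweep over the two gains. The sketch below is a minimal illustration rather than the authors' code: simulate is a toy surrogate standing in for a full 3DOF MaRV simulation, and the tolerance values are invented.

```python
import numpy as np

def simulate(k1, k2):
    """Toy surrogate for a 3DOF reentry simulation under Vector guidance.
    In the study this would integrate the MaRV equations of motion and
    return the terminal miss distance [m] and impact angle error [deg]."""
    miss = (k1 - 2.0) ** 2 + (k2 - 3.0) ** 2            # illustrative only
    angle_err = 0.4 * (k2 - 3.0) - 0.2 * (k1 - 2.0)     # illustrative only
    return miss, angle_err

def parametric_study(k1_range, k2_range, miss_tol=1.0, angle_tol=0.5):
    """Sweep both closed-loop gains and collect the permissible region
    where the miss distance and impact angle constraints are both met."""
    permissible = []
    for k1 in k1_range:
        for k2 in k2_range:
            miss, angle_err = simulate(k1, k2)
            if miss <= miss_tol and abs(angle_err) <= angle_tol:
                permissible.append((k1, k2, miss, angle_err))
    return permissible

region = parametric_study(np.linspace(0.0, 4.0, 21), np.linspace(1.0, 5.0, 21))
print(len(region), "gain pairs satisfy both constraints")
```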

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 427
288 Apparent Temperature Distribution on Scaffoldings during Construction Works

Authors: I. Szer, J. Szer, K. Czarnocki, E. Błazik-Borowa

Abstract:

People on construction scaffoldings work in a dynamically changing, often unfavourable climate. Additionally, this kind of work is performed on low-stiffness structures at height, which increases the risk of accidents. It is therefore desirable to define the parameters of the work environment that contribute to the occupational safety of construction workers. The aim of this article is to present how changes in microclimate parameters on scaffolding can contribute to the development of dangerous situations and accidents. For this purpose, indicators based on the human thermal balance were used. However, the use of such a model under construction conditions is often burdened by significant errors, or is even impossible, due to the lack of precise data. Thus, in the target model, a modified parameter was used: the apparent environmental temperature. Apparent temperature in the proposed Scaffold Use Risk Assessment Model is the perceived outdoor temperature, caused by the combined effects of air temperature, radiative temperature, relative humidity, and wind speed (wind chill index, heat index). The paper presents correlations between the component factors and apparent temperature for a facade scaffolding with a width of 24.5 m and a height of 42.3 m, located on the south-west side of a building. The distribution of the factors on the scaffolding was used to evaluate the fit of the microclimate model. The results of the studies indicate that the ranges of apparent temperature observed on the scaffolds frequently result in a worker's inability to adapt. This leads to reduced concentration and increased fatigue, adversely affects health, and consequently increases the risk of dangerous situations and accidental injuries.
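
For concreteness, one widely used non-radiative apparent-temperature formulation (Steadman's, as adopted by the Australian Bureau of Meteorology) can be computed as below; note that the model described in the paper additionally accounts for radiative temperature, which this simple version omits.

```python
import math

def apparent_temperature(ta_c, rh_pct, wind_ms):
    """Steadman-style apparent temperature, non-radiative form.

    ta_c: air temperature [deg C], rh_pct: relative humidity [%],
    wind_ms: wind speed [m/s]. Returns perceived temperature [deg C].
    """
    # water vapour pressure [hPa] from temperature and relative humidity
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
    return ta_c + 0.33 * e - 0.70 * wind_ms - 4.00

# A hot, humid, windy day at the top of a scaffold:
print(round(apparent_temperature(28.0, 60.0, 5.0), 1))
```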

Keywords: apparent temperature, health, work safety, scaffoldings

Procedia PDF Downloads 182
287 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents a validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the north and northwest, respectively; good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
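
The three statistical measures can be computed directly from paired observed and predicted concentrations. The sketch below uses the standard definitions common in dispersion-model evaluation (e.g., Chang and Hanna); it is illustrative and not the authors' code.

```python
import numpy as np

def dispersion_metrics(c_obs, c_pred):
    """FB, MG and NMSE for dispersion-model validation.

    c_obs, c_pred: arrays of observed and predicted concentrations;
    MG requires strictly positive values. Perfect agreement gives
    FB = 0, MG = 1 and NMSE = 0.
    """
    c_obs = np.asarray(c_obs, dtype=float)
    c_pred = np.asarray(c_pred, dtype=float)
    fb = (c_obs.mean() - c_pred.mean()) / (0.5 * (c_obs.mean() + c_pred.mean()))
    mg = np.exp(np.log(c_obs).mean() - np.log(c_pred).mean())
    nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())
    return fb, mg, nmse

print(dispersion_metrics([1.0, 2.0, 4.0], [1.2, 1.8, 4.5]))
```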

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow

Procedia PDF Downloads 135
286 A Preparatory Method for Building Construction Implemented in a Case Study in Brazil

Authors: Aline Valverde Arroteia, Tatiana Gondim do Amaral, Silvio Burrattino Melhado

Abstract:

During the last twenty years, the construction field in Brazil has evolved significantly in response to its growing market and competitiveness. However, this evolution has faced many obstacles, such as cultural barriers and the lack of effort to achieve quality at the construction site. At the same time, much of the information generated in the design or construction phases is lost due to the lack of effective coordination of these activities. Facing this problem, the aim of this research was to implement a French method named PEO, an acronym (in Portuguese) for preparation for building construction, seeking to understand the design management process and its interface with the building construction phase. The research method applied was qualitative and was carried out through two case studies in the city of Goiania, in Goias, Brazil. The research was divided into two stages: a pilot study at Company A and the implementation of PEO at Company B. After the implementation, the results demonstrated the PEO method's effectiveness and feasibility as a booster of quality improvement in design management. The analysis showed that the method serves to improve the design and to reduce the failures, errors, and rework commonly found in the production of buildings. Therefore, it can be concluded that PEO is feasible for real estate and building companies, but companies need to believe in the contribution they can make to the discovery of design failures in conjunction with the other stakeholders forming a construction team. The results of PEO can be maximized by adopting the principles of simultaneous engineering and by inserting new computer technologies that use a three-dimensional model of the building within the BIM process.

Keywords: communication, design and construction interface management, preparation for building construction (PEO), proactive coordination (CPA)

Procedia PDF Downloads 162
285 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
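
A minimal sketch of the feature-extraction and random-forest pipeline is shown below, assuming RDKit and scikit-learn, and assuming the solubility values are binned into classes (the reported accuracy metric implies a classification framing); the SMILES strings and labels here are toy placeholders, not AqSolDB data.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def featurize(smiles):
    """MACCS-key bit vector for one SMILES string (None if unparsable)."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(MACCSkeys.GenMACCSKeys(mol)) if mol else None

# In the study, SMILES and binned solubility classes would come from AqSolDB,
# with RDKit descriptors and structural features appended to the MACCS bits.
smiles_list = ["CCO", "c1ccccc1", "CC(=O)O", "CCCCCCCC", "O", "CCN"]
class_labels = [1, 0, 1, 0, 1, 1]  # toy labels: 1 = soluble, 0 = insoluble

X = np.array([featurize(s) for s in smiles_list])
X_tr, X_te, y_tr, y_te = train_test_split(X, class_labels, test_size=0.33, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```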

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 40
284 Formulation and Optimization of Topical 5-Fluorouracil Microemulsions Using Central Composite Design

Authors: Sudhir Kumar, V. R. Sinha

Abstract:

Water-in-oil topical microemulsions of 5-FU were developed and optimized using a face-centered central composite design (CCD). Topical w/o microemulsions of 5-FU were prepared using sorbitan monooleate (Span 80) and polysorbate 80 (Tween 80) with different oils, such as oleic acid (OA), triacetin (TA), and isopropyl myristate (IPM). Ternary phase diagrams designated the microemulsion region, and the face-centered central composite design helped in determining the effects of the selected variables, viz. type of oil, Smix ratio, and water concentration, on responses such as drug content, globule size, and viscosity of the microemulsions. The CCD showed that the factors have statistically significant effects (p<0.01) on the selected responses. The actual responses showed excellent agreement with the values predicted by the CCD, with low residual standard errors, and the optimized values were found within the range predicted by the model. Furthermore, other characteristics of the microemulsions, such as pH and conductivity, were investigated. For the optimized microemulsion batch, ex vivo skin flux, skin irritation, and retention studies were performed and compared with a marketed 5-FU formulation. In the ex vivo skin permeation studies, higher skin retention of the drug and minimal flux were achieved for the optimized microemulsion batch compared with the marketed cream. Controlled release of the drug was achieved for the optimized batch with higher skin retention of 5-FU, which can further be utilized for the treatment of many dermatological disorders.
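
For readers unfamiliar with the design, the coded design matrix of a face-centred CCD can be generated as below. This is a generic sketch, not the software used in the study, and a categorical factor such as oil type would in practice be handled as discrete levels.

```python
import itertools
import numpy as np

def face_centered_ccd(n_factors, n_center=3):
    """Coded design matrix of a face-centred central composite design:
    2^k factorial corners, 2k axial ('star') points at alpha = 1, i.e.
    on the faces of the cube, and replicated centre points."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
    axial = np.vstack([sign * row for row in np.eye(n_factors) for sign in (-1.0, 1.0)])
    center = np.zeros((n_center, n_factors))
    return np.vstack([corners, axial, center])

# Three factors as in the study: oil type, Smix ratio, water concentration
design = face_centered_ccd(3)
print(design.shape)  # (8 corners + 6 axial + 3 centre, 3) = (17, 3)
```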

Keywords: 5-FU, central composite design, microemulsion, ternary phase diagram

Procedia PDF Downloads 479
283 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame in order to improve the speed and accuracy of feature matching. However, in real situations the constant-velocity assumption is often not satisfied, which may lead to a large deviation between the obtained initial pose and the true value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, with the rotation matrix represented by a rotation vector. We assume that, over a short period of time, the changes in rotational angular velocity and in the translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that our method gives a more accurate initial pose estimate and that the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46% on the two test sequences, respectively.
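
A hedged sketch of the extrapolation idea follows, using SciPy rotations: the frame-to-frame motion and its change are assumed constant over a short window. The helper predict_next_pose is illustrative and not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(poses):
    """Constant-acceleration extrapolation of the camera pose.

    poses: the last three (Rot, t) world poses, oldest first, with Rot a
    scipy Rotation and t a length-3 translation vector. The change of the
    frame-to-frame motion is assumed constant over the window.
    """
    (R0, t0), (R1, t1), (R2, t2) = poses
    dR_prev, dR_last = R1 * R0.inv(), R2 * R1.inv()   # frame-to-frame rotations
    dR_accel = dR_last * dR_prev.inv()                # change of angular motion
    R_next = (dR_accel * dR_last) * R2                # extrapolated rotation
    dv_prev, dv_last = t1 - t0, t2 - t1               # frame-to-frame translations
    t_next = t2 + dv_last + (dv_last - dv_prev)       # constant translational accel.
    return R_next, t_next

# Steady rotation about z with accelerating forward motion:
poses = [(R.from_rotvec([0, 0, 0.0]), np.array([0.0, 0.0, 0.0])),
         (R.from_rotvec([0, 0, 0.1]), np.array([1.0, 0.0, 0.0])),
         (R.from_rotvec([0, 0, 0.2]), np.array([2.5, 0.0, 0.0]))]
R_next, t_next = predict_next_pose(poses)
print(R_next.as_rotvec(), t_next)  # ~[0, 0, 0.3] and [4.5, 0, 0]
```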

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 94
282 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location with functional methods is the soft tissue artefact caused by the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefacts, as the accuracy of functional methods depends upon them. Various studies have quantified soft tissue artefact movement invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements, during flexion, extension, and abduction (all with the knee extended) of the hip joint. The results show that artefact skin-marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger during abduction. The quantification of soft tissue artefacts can be used as the basis for a correction procedure for hip joint kinematics.

Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement

Procedia PDF Downloads 281
281 Analysis on the Need of Engineering Drawing and Feasibility Study on 3D Model Based Engineering Implementation

Authors: Parthasarathy J., Ramshankar C. S.

Abstract:

Engineering drawings these days play an important role in every part of an industry, and they are influential in every phase of the product development process. Traditionally, drawings are used for communication in industry because they are the clearest way to represent product manufacturing information. Until recently, manufacturing activities were driven by engineering data captured in 2D paper documents or digital representations of those documents, so the need for engineering drawings seemed inevitable. Still, engineering drawings have a major disadvantage: this document-based approach is prone to errors and requires costly re-entry of data at every stage of the manufacturing life cycle. There is therefore a case for eliminating engineering drawings throughout the product development process and implementing 3D Model Based Engineering (3D MBE, also called 3D MBD). Adopting MBD appears to be the next logical step in continuing to reduce time-to-market and improve product quality. Ideally, by fully applying the MBD concept, the product definition will no longer rely on engineering drawings throughout the product lifecycle. This project addresses the role of engineering drawings and their influence in various parts of an industry, the need to implement 3D Model Based Engineering with its advantages, and the technical barriers that must be overcome in order to implement it. It also addresses the requirements of neutral formats and their realisation, in order to implement the digital product definition principles in a lightweight format. To prove the concepts of 3D Model Based Engineering, a screw jack body part is demonstrated. At ZF Windpower Coimbatore Limited, 3D Model Based Definition was implemented for the torque arm (machining and casting), steel tube, pinion shaft, cover, and energy tube.

Keywords: engineering drawing, model based engineering MBE, MBD, CAD

Procedia PDF Downloads 435
280 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning

Authors: Hong Zhang

Abstract:

The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on data collected in a previous study; the material covered the characteristic error types of MT output as well as the error types that our Chinese students of Spanish could not detect in last year's experiment. This year we performed a pilot study in order to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students of Spanish. We used screen recording to capture the sessions and took note of every action performed by the students. The participants were speakers of Chinese with intermediate knowledge of Spanish, divided into two groups: Group A received PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it by themselves (human translation), then translated it with Google Translate and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A identified and corrected the errors faster than Group B and did especially better on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From these results, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we plan to focus on the students' struggles during their Spanish studies and to complete the PE training materials for teaching Spanish to Chinese students with machine translation.

Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning

Procedia PDF Downloads 144
279 Quality Control of 99mTc-Labeled Radiopharmaceuticals Using Chromatography Strips

Authors: Yasuyuki Takahashi, Akemi Yoshida, Hirotaka Shimada

Abstract:

99mTc-2-methoxy-isobutyl-isonitrile (MIBI) and 99mTc-mercaptoacetyl-glycyl-glycyl-glycine (MAG3) are heated to 368-372 K and labeled with 99mTc-pertechnetate. Quality control (QC) of 99mTc-labeled radiopharmaceuticals is usually performed at hospitals using liquid chromatography, which is difficult to carry out in general hospitals. We used chromatography strips to simplify QC and investigated the effects of the test procedures on quality control. The agent studied was 99mTc-MAG3, the solvent was chloroform + acetone + tetrahydrofuran, and the gamma counter was an ARC-380CL. The conditions varied were the heating temperature (293, 313, 333, 353, and 372 K), the resting time after labeling (15 min at 293 K and 372 K, and 1 hour at 293 K), and the expiration year for use (2011, 2012, 2013, 2014, and 2015). The measurement time on the gamma counter was one minute. A nuclear medicine clinician judged the quality of each preparation when deciding the usability of the retested agent. Two people conducted the test procedure twice in order to compare reproducibility. The percentage radiochemical purity (%RCP) was approximately 50% under insufficient heat treatment and improved as the temperature and the heating time increased; moreover, the %RCP improved with resting time even at low temperatures. Furthermore, there was no deterioration with time after the expiration date. The objective of these tests was to determine soluble 99mTc impurities, including free 99mTc-pertechnetate and hydrolyzed-reduced 99mTc; we therefore attribute low purity to insufficient heating caused by operational errors in the labeling procedure. It is concluded that quality control is a necessary procedure in nuclear medicine to ensure safe scanning, and that labeling must be carried out according to the specifications.
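
The purity calculation itself is straightforward once the strip segments have been counted. The sketch below assumes a two-segment strip and that the labelled complex migrates with the solvent front; which segment holds the product depends on the strip and solvent system, so this is illustrative only.

```python
def percent_rcp(counts_origin, counts_front):
    """Percent radiochemical purity from a developed chromatography strip.

    counts_origin / counts_front: one-minute gamma-counter readings of the
    two strip segments after development. Here the labelled complex is
    assumed to migrate with the solvent front while impurities stay at the
    origin; check the strip/solvent instructions for the actual layout.
    """
    total = counts_origin + counts_front
    return 100.0 * counts_front / total if total > 0 else 0.0

print(percent_rcp(counts_origin=2500, counts_front=47500))  # 95.0 %RCP
```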

Keywords: quality control, Tc-99m labeled radiopharmaceutical, chromatography strip, nuclear medicine

Procedia PDF Downloads 322
278 Improving Efficiency and Effectiveness of FMEA Studies

Authors: Joshua Loiselle

Abstract:

This paper discusses the challenges engineering teams face in conducting Failure Modes and Effects Analysis (FMEA) studies, focusing specifically on improving their efficiency and effectiveness. Modern economic needs and increased business competition require engineers to constantly develop newer and better solutions within shorter timeframes and tighter margins. In addition, documentation requirements for meeting standards, regulatory compliance, and customer needs are becoming increasingly complex and verbose. Managing open actions and continuous improvement activities across all projects, product variations, and processes, in addition to daily engineering tasks, is cumbersome and time consuming and is susceptible to errors, omissions, and non-conformances. FMEA studies are proven methods for improving products and processes while reducing engineering workload and improving machine and resource availability through a pre-emptive, systematic approach of identifying, analyzing, and improving high-risk components. If implemented correctly, FMEA studies significantly reduce costs and improve productivity. However, the value of an effective FMEA is often shrouded by a lack of clarity and structure, misconceptions, and previous experiences; as such, FMEA studies are frequently grouped with the other required information and documented retrospectively in preparation for customer requirements or audits. Performing studies in this way only adds cost to a project and perpetuates the misconception that FMEA studies are not value-added activities. This paper discusses the benefits of effective FMEA studies, the challenges related to conducting them, best practices for efficiently overcoming those challenges through structure and automation, and the benefits of implementing those practices.

Keywords: FMEA, quality, APQP, PPAP

Procedia PDF Downloads 304
277 The Relationship between Renewable Energy, Real Income, Tourism and Air Pollution

Authors: Eyup Dogan

Abstract:

One criticism of the energy-growth-environment literature, to the best of our knowledge, is that only a few studies analyze the influence of tourism on CO₂ emissions, even though the tourism sector is closely related to the environment. Another criticism concerns the selection of methodology: panel estimation techniques that fail to consider both heterogeneity and cross-sectional dependence across countries can cause forecasting errors. To fill these gaps in the literature, this study analyzes the impacts of real GDP, renewable energy, and tourism on the levels of carbon dioxide (CO₂) emissions for the top 10 most-visited countries around the world. The study focuses on these countries because in recent years they have received about half of worldwide tourist arrivals and rank among the top countries in the Renewable Energy Country Attractiveness Index (RECAI). By looking at Pesaran's CD test and the average growth rates of the variables for each country, we detect the presence of cross-sectional dependence and heterogeneity. Hence, this study uses second-generation econometric techniques (the cross-sectionally augmented Dickey-Fuller (CADF) and cross-sectionally augmented IPS (CIPS) unit root tests, the LM bootstrap cointegration test, and the DOLS and FMOLS estimators), which are robust to these issues, making the reported results accurate and reliable. It is found that renewable energy mitigates pollution, whereas real GDP and tourism contribute to carbon emissions. Thus, regulatory policies are necessary to increase awareness of sustainable tourism. In addition, the use of renewable energy and the adoption of clean technologies in the tourism sector, as well as in producing goods and services, play significant roles in reducing the levels of emissions.
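
For reference, Pesaran's CD statistic mentioned above has a simple closed form, sketched below for a balanced panel of residuals; this is a generic illustration, not the authors' code.

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran's CD statistic for cross-sectional dependence.

    resid: (T, N) array of residuals for N cross-section units over T
    periods (balanced panel). Under the null of no cross-sectional
    dependence, CD is asymptotically standard normal.
    """
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)       # N x N pairwise correlations
    rho_ij = corr[np.triu_indices(N, k=1)]        # upper triangle, i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * rho_ij.sum()

rng = np.random.default_rng(0)
print(pesaran_cd(rng.standard_normal((28, 10))))  # ~N(0, 1) under independence
```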

Keywords: air pollution, tourism, renewable energy, income, panel data

Procedia PDF Downloads 184
276 Performance Evaluation of the CareSTART S1 Analyzer for Quantitative Point-Of-Care Measurement of Glucose-6-Phosphate Dehydrogenase Activity

Authors: Haiyoung Jung, Mi Joung Leem, Sun Hwa Lee

Abstract:

Background & Objective: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a genetic abnormality that results in an inadequate amount of G6PD, leading to increased susceptibility of red blood cells to reactive oxygen species and hemolysis. The present study aimed to evaluate the careSTART S1 analyzer for measuring the ratio of G6PD activity to hemoglobin (Hb). Methods: Precision for G6PD activity and hemoglobin measurement was evaluated using control materials at two levels, with five repeated runs per day for five days. The analytical performance of the careSTART S1 analyzer was compared with spectrophotometry in 40 patient samples, and the reference ranges suggested by the manufacturer were validated in 20 healthy males and 20 healthy females. Results: The careSTART S1 analyzer demonstrated a precision of 6.0% for the low-level control (14-45 U/dL) and 2.7% for the high-level control (60-90 U/dL) in G6PD activity, and 1.4% in hemoglobin (7.9-16.3). A comparison of the G6PD-to-Hb ratio between the careSTART S1 analyzer and spectrophotometry showed an average difference of 29.1%, with a positive bias for the careSTART S1 analyzer. All samples from the healthy population validated the suggested reference ranges for males (≥2.19 U/g Hb) and females (≥5.83 U/g Hb). Conclusion: The careSTART S1 analyzer demonstrated good analytical performance and can replace the current spectrophotometric measurement of G6PD enzyme activity. From the standpoint of clinical laboratory management, it is a reasonable option as a point-of-care analyzer: it requires minimal handling of samples and reagents and automatically calculates the ratio of measured G6PD activity to Hb concentration, minimizing the clerical errors involved in manual calculation.

Keywords: POCT, G6PD, performance evaluation, careSTART

Procedia PDF Downloads 64
275 The Architectural Conservation and Restoration Problems of Istanbul’s “Yalı” Waterfront Mansions

Authors: Zeynep Tanrıverdi

Abstract:

The Bosphorus is an international waterway in Istanbul, Turkey, connecting the Sea of Marmara and the Black Sea. The Bosphorus, which has formed an important part of the silhouette of Istanbul throughout history, has also influenced the design of the coastal structures built around it. The waterfront mansions located on both sides of the Bosphorus by the sea, generally of two or three storeys, are called 'yalı'. The yalı buildings, with the architectural characteristics of the traditional Turkish house, are the most grandiose examples of Ottoman residential architecture. However, the classical Ottoman yalı architecture of the 18th century can be seen only in engravings; because of the losses over time, only the more modest and smaller yalı examples from the 19th century survive today. The study aims to reveal the architectural conservation and restoration problems of the waterfront mansions and to propose solutions for them. First, the development of waterfront mansion architecture on the Bosphorus was evaluated in its historical process. Second, the waterfront mansions and their architectural features were explained. Third, the architectural conservation and restoration problems that have caused the disappearance of waterfront mansions were discussed. These problems include disruptions in the legal regulations and practices concerning the Bosphorus, dramatic changes in Turkey's socio-cultural life from the Ottoman Empire to the present, inadequate economic resources, negative environmental effects, and errors in restoration works. Finally, solutions were proposed for the problems that threaten the protection of the waterfront mansions. In the study, the literature on waterfront mansions was reviewed using historical reports, photographs, maps, and drawings in archival documents. It is hoped that this study will contribute to the conservation of the yalı waterfront mansions, which occupy a particular place in the cultural heritage of Turkey, and to their transmission, with their authentic values, to the next generation.

Keywords: bosphorus architecture, conservation, heritage, Istanbul, waterfront mansions (yalı)

Procedia PDF Downloads 77
274 Role of Vision Centers in Eliminating Avoidable Blindness Caused by Uncorrected Refractive Error in Rural South India

Authors: Ranitha Guna Selvi D, Ramakrishnan R, Mohideen Abdul Kader

Abstract:

Purpose: To study the role of vision centers in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centers in rural South India from January 2021 to December 2021 was done. Medical records of 1,08,581 patients, both new and reviewed (79,562 newly registered patients and 29,019 review patients from the 15 vision centers), were included for data analysis. All patients registered at the vision centers underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, and fundus examination. Results: A total of 1,08,581 patients were included in the study: 79,562 newly registered at the vision centers and 29,019 review patients. Among them, 52,201 (48.1%) were males and 56,308 (51.9%) were females. The mean age of all examined patients was 41.03 ± 20.9 years (standard deviation) and ranged from 1 to 113 years. The presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients, 22,770 had uncorrected refractive error in the right eye and 22,721 in the left eye. A glass prescription was given to 17,178 (15.8%) patients, and 8,109 (7.5%) patients were referred to the base hospital for a specialty clinic expert opinion or for cataract surgery. Conclusion: A vision center utilizing teleconsultation as a comprehensive eye-screening unit is a very effective tool in reducing the avoidable visual impairment caused by uncorrected refractive error. The vision centre model is believed to be efficient because it facilitates the early detection and management of uncorrected refractive errors.

Keywords: refractive error, uncorrected refractive error, vision center, vision technician, teleconsultation

Procedia PDF Downloads 142
273 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load forecasting plays a key role in making today's and the future's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance and to execute Demand Response strategies more effectively, which enables higher sustainability, better quality of service, and affordable electricity tariffs. While load forecasting is comparatively easy and effective at larger geographic scales, in smart micro-grids the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a smart micro-grid. Three short-term load forecasting techniques, i.e., linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of Gain is introduced in this paper, which innovatively serves as an indicator of short-term prediction capability in terms of time-span consistency. Two models, for 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
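
A minimal sketch of one of the compared approaches, 24-hour-ahead forecasting with linear regression on lagged loads plus a simple weather regressor, is given below with synthetic data; the GreenCom dataset and the exact feature set used in the paper are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_supervised(load, temperature, horizon=24, n_lags=24):
    """Build a supervised dataset for horizon-step-ahead forecasting from
    an hourly load series: the last n_lags loads plus the forecast-hour
    temperature as a weather regressor."""
    X, y = [], []
    for t in range(n_lags, len(load) - horizon):
        X.append(np.r_[load[t - n_lags:t], temperature[t + horizon]])
        y.append(load[t + horizon])
    return np.array(X), np.array(y)

# Synthetic daily-periodic household load, for illustration only
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
load = 2.0 + np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)
temp = 15.0 + 8.0 * np.sin(2 * np.pi * hours / (24 * 365))

X, y = make_supervised(load, temp)
model = LinearRegression().fit(X[:-200], y[:-200])
mae = np.mean(np.abs(model.predict(X[-200:]) - y[-200:]))  # absolute forecast error
print(f"24-hour-ahead MAE: {mae:.3f}")
```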

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 470
272 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

The issue of high blood sugar levels, which may develop into diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency; hence the need to design a device that serves as a monitoring tool, such as a wristwatch, to give those living with high blood glucose an alert of the danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumes an 8-15-10 configuration: eight neurons at the input stage (including a bias), 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to bring the errors to the barest minimum. The internal circuitry of the device comprises compatible hardware that matches the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow are used as the network outputs to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
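
A minimal sketch of the 8-15-10 configuration is given below, using scikit-learn rather than the authors' Java implementation; the binary inputs and labels are random placeholders, and scikit-learn adds its own bias term whereas the abstract counts the bias among the eight inputs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 8)).astype(float)  # binary (XOR-style) inputs
y = rng.integers(0, 10, size=200)                    # 10 symptom-pattern classes

# One hidden layer of 15 neurons, trained for up to 1000 epochs,
# mirroring the 8-15-10 configuration described in the abstract.
net = MLPClassifier(hidden_layer_sizes=(15,), max_iter=1000, random_state=0)
net.fit(X, y)
print(net.predict(X[:5]))
```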

Keywords: Accu-Chek, diabetes, neural network, pattern recognition

Procedia PDF Downloads 147
271 Elements of Creativity and Innovation

Authors: Fadwa Al Bawardi

Abstract:

In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the 'Higher Committee for Research, Development and Innovation', linked to the Council of Economic and Development Affairs, chaired by the Chairman of that Council, and concerned with the development of the research, development, and innovation sector in the Kingdom. In order to discuss the dimensions of this wonderful step, let us first try to answer the following questions: Is there a difference between creativity and innovation? What are the factors of creativity in the individual: are they inherited mental traits, or factors that an individual acquires through learning? The methodology included surveys conducted with more than 500 individuals, male and female, aged 18 to 60. And the answer is: 'creativity' is the creation of a new idea, while 'innovation' is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the creative idea needs to be developed and transformed into an innovation in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the smartphone began as just a creative idea at IBM in 1994, but the actual successful innovation in manufacturing, developing, and marketing such phones came later, through Apple. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is the presence of a challenge or an obstacle that the individual faces and seeks to overcome through thinking, even if thinking requires a long time. The second factor is the environment surrounding the individual, which includes science, training, experience gained, and the ability to use techniques, as well as the ability to assess whether an idea is feasible. To achieve this factor, individuals must be aware of their own skills, strengths, hobbies, and the areas in which they can be creative, and they must also be self-confident and courageous enough to suggest new ideas. The third factor is experience and the ability to accept risk and initial failure, and then to learn from mistakes and try again tirelessly. There are some tools and techniques that help the individual reach creative and innovative ideas. One is the Mind Map, through which available information is drawn by writing a short word for each piece of information and connecting all related information with clear lines, which helps in logical thinking and correct vision. There are also Flow Charts, graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. Other useful tools include the Six Hats technique, applied by a group of people for effective planning and detailed logical thinking, and the Snowball technique. All of these tools greatly help in organizing and arranging mental thoughts and in making the right decisions, and all of them are easy to learn, apply, and use to reach creative and innovative solutions.
The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages based on gender, age group, and job category.

Keywords: innovation, creativity, factors, tools

Procedia PDF Downloads 55
270 Cyber Security and Risk Assessment of the e-Banking Services

Authors: Aisha F. Bushager

Abstract:

Today we are more exposed than ever to cyber threats and attacks at the personal, community, organizational, national, and international levels. More aspects of our lives are operating on computer networks simply because we are living in the fifth domain, Cyberspace. One of the most sensitive areas vulnerable to cyber threats and attacks is electronic banking (e-Banking), where the banking sector provides online banking services to its clients. To obtain clients' trust and encourage them to practice e-Banking, and to maintain the services provided by the banks and ensure safety, cyber security and risk control should be given high priority in the e-Banking area. The aim of this study is to carry out a risk assessment of e-Banking services and to determine the cyber threats, cyber attacks, and vulnerabilities facing the e-Banking area, specifically in the Kingdom of Bahrain. To collect relevant data, structured interviews were conducted with e-Banking experts in different banks. The collected data were then used as input to the risk management framework provided by the National Institute of Standards and Technology (NIST), the model used in the study to assess the risks associated with e-Banking services. The findings of the study showed that the most common cyber threats are human errors, technical software or hardware failures, and hackers, while the most common attacks facing the e-Banking sector are phishing, malware attacks, and denial-of-service. The risks associated with e-Banking services were around the moderate level; however, more controls and countermeasures must be applied to maintain this level of risk. The results of the study will help banks discover their vulnerabilities and maintain their online services; in addition, they will enhance cyber security and contribute to the management and control of the risks facing the e-Banking sector.

Keywords: cyber security, e-banking, risk assessment, threats identification

Procedia PDF Downloads 350
269 Explaining the Role of Iran Health System in Polypharmacy among the Elderly

Authors: Mohsen Shati, Seyede Salehe Mortazavi, Seyed Kazem Malakouti, Hamidreza Khanke, Fazlollah Ahmadi

Abstract:

Taking unnecessary or excessive medication, or using drugs with no indication (polypharmacy), by people of all ages, especially the elderly, is associated with increased adverse drug reactions (ADR), medical errors, hospitalization, and escalating costs, and it may be facilitated or impeded by the healthcare system. In this study, we describe the role of the health system in the practice of polypharmacy among the Iranian elderly. In this inductive qualitative content analysis using the Graneheim and Lundman method, purposeful sample selection was carried out until saturation. Participants were selected from among doctors, pharmacists, policy-makers, and the elderly; a total of 25 persons (9 men and 16 women) participated in the study. Data analysis, after merging codes with similar characteristics, revealed 14 subcategories and six main categories: the referral system, physicians' accessibility, health data management, the drug market, law enforcement, and social protection. Certain conditions of the healthcare system have given rise to polypharmacy in the elderly. In the absence of a comprehensive specialty and subspecialty referral system, patients may go to any physician's office and may well be confused by numerous doctors' prescriptions. Electronic records have not been prepared for patients, and failure to comply with laws, the lack of robust enforcement of the existing laws, and the lack of close surveillance are among the contributing factors. Inadequate insurance and supportive services are also evident. Age-specific care provision has not yet been institutionalized, and an inadequate specialist workforce plays a major role. Therefore, the health system cannot be ignored as a contributing factor when designing effective interventions to fix the problem.

Keywords: elderly, polypharmacy, health system, qualitative study

Procedia PDF Downloads 151
268 Delisting Wave: Corporate Financial Distress, Institutional Investors Perception and Performance of South African Listed Firms

Authors: Adebiyi Sunday Adeyanju, Kola Benson Ajeigbe, Fortune Ganda

Abstract:

In the past three decades, there has been a notable increase in the number of firms delisting from the Johannesburg Stock Exchange (JSE) in South Africa, and this recent increasing rate of delisting motivated this study. The study explores the influence of institutional investors' perceptions on the financial distress experienced by delisted firms within the South African market and further examines the impact of financial distress on the corporate performance of delisted firms. It uses data on delisted firms spanning 2000 to 2023, with FGLS (Feasible Generalized Least Squares) estimation for the short-run effects and PCSE (Panel-Corrected Standard Errors) estimation for the long-run effects of the relationship. The findings indicated that a decline in institutional investors' perceptions was associated with the corporate financial distress of the delisted firms, particularly during the delisting year and the few years preceding the delisting announcement. The study highlights the importance of investor recognition in corporate financial distress and in the delisting wave among listed firms, a finding that supports stakeholder theory. It offers insights for company management, investors, governments, policymakers, stockbrokers, lending institutions, bankers, the stock market, and other stakeholders in their decision-making endeavours. Based on the above findings, it is recommended that corporate management improve the governance strategies that can help companies' financial performance. Accountability and transparency through governance must also be improved, with government support through the introduction of policies and strategies and an enabling environment that can help companies perform better.

Keywords: delisting wave, institutional investors, financial distress, corporate performance, investors’ perceptions

Procedia PDF Downloads 45
267 Bank, Stock Market Efficiency and Economic Growth: Lessons for ASEAN-5

Authors: Tan Swee Liang

Abstract:

This paper estimates the associations of bank and stock market efficiency with real per capita GDP growth by examining panel data across three different regions, using the Panel-Corrected Standard Errors (PCSE) regression developed by Beck and Katz (1995). Data from five ASEAN economies (Singapore, Malaysia, Thailand, the Philippines, and Indonesia), five Asian economies (Japan, China, Hong Kong SAR, South Korea, and India), and seven OECD economies (Australia, Canada, Denmark, Norway, Sweden, the United Kingdom, and the United States), between 1990 and 2017, are used. The empirical findings suggest, first, that for Asia-5 a high bank net interest margin means greater bank profitability, hence spurring economic growth. Second, for OECD-7, low bank overhead costs (as a share of total assets) may reflect weak competition and weak investment in providing superior banking services, hence dampening economic growth. Third, the stock market turnover ratio has a negative association with OECD-7 economic growth but a positive association with Asia-5 growth, which suggests that the relationship between liquidity and growth is ambiguous. Lastly, for ASEAN-5, high bank overhead costs (as a share of total assets) may suggest that expenses have not been channelled efficiently into income-generating activities. One practical implication of the findings is that policymakers should take the necessary measures toward financial liberalisation policies that boost growth through the efficiency channel, so that funds are efficiently allocated through the financial system between the financial and real sectors.
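
The Beck-Katz PCSE estimator mentioned above can be sketched compactly: run OLS, estimate the N x N contemporaneous residual covariance across units, and sandwich it into the coefficient covariance. The implementation below is a generic illustration for a balanced panel, not the authors' code.

```python
import numpy as np

def pcse_ols(y, X, n_units, n_periods):
    """OLS with Beck-Katz panel-corrected standard errors.

    y: (N*T,) and X: (N*T, k), stacked unit by unit (all T observations
    of unit 1, then unit 2, ...), balanced panel. Returns (beta, pcse).
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = (y - X @ beta).reshape(n_units, n_periods)  # one row of residuals per unit
    sigma = e @ e.T / n_periods                     # N x N contemporaneous covariance
    omega = np.kron(sigma, np.eye(n_periods))       # full (N*T, N*T) error covariance
    cov_beta = XtX_inv @ X.T @ omega @ X @ XtX_inv
    return beta, np.sqrt(np.diag(cov_beta))

# Tiny synthetic balanced panel: 5 units x 28 periods, constant + 2 regressors
N, T = 5, 28
rng = np.random.default_rng(0)
X = np.c_[np.ones(N * T), rng.standard_normal((N * T, 2))]
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_normal(N * T)
beta, se = pcse_ols(y, X, N, T)
print(beta, se)
```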

Keywords: financial development, banking system, capital markets, economic growth

Procedia PDF Downloads 139
266 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors, including the distance error and angle error of each sonar in detection, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is then expressed in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate the sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. MATLAB simulations in an underwater acoustic environment show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error estimate of the RTQC method converges rapidly, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement of the RTQC method and is widely used in two-dimensional registration; the improved method can be used for registration in underwater multi-target detection.
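
Under an additive constant-bias model, the offline LS registration reduces to a small least-squares problem, as the hedged sketch below illustrates; the measurement model and noise levels are invented for illustration, and the two-dimensional stereo projection into a common frame is assumed to have been applied already.

```python
import numpy as np

def estimate_biases(meas, truth):
    """Least-squares estimate of one sonar's constant range and bearing biases.

    meas, truth: (M, 2) arrays of (range [m], bearing [rad]) pairs, i.e. the
    sonar's own detections and the reference (fused) values in a common
    frame. With the additive model  meas = truth + [dr, dth] + noise,
    the LS solution of a constant-only design is the mean residual.
    """
    A = np.ones((meas.shape[0], 1))
    dr, *_ = np.linalg.lstsq(A, meas[:, 0] - truth[:, 0], rcond=None)
    dth, *_ = np.linalg.lstsq(A, meas[:, 1] - truth[:, 1], rcond=None)
    return dr[0], dth[0]

rng = np.random.default_rng(0)
truth = np.c_[rng.uniform(500, 2000, 100), rng.uniform(-np.pi, np.pi, 100)]
noise = rng.standard_normal((100, 2)) * np.array([5.0, 0.005])
meas = truth + np.array([25.0, 0.02]) + noise  # true biases: 25 m, 0.02 rad
print(estimate_biases(meas, truth))            # approximately (25.0, 0.02)
```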

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 169
265 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can further be associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize the Fenton-Wilkinson approximation of changeover times alongside an estimator for the total number of teams, as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression, and Gaussian process regression.
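
The Fenton-Wilkinson step approximates the sum of independent log-normal leg times by a single log-normal whose first two moments match those of the exact sum. A minimal sketch, with invented leg-time parameters and a Monte Carlo check, is given below.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Fenton-Wilkinson approximation of a sum of independent log-normals.

    Leg times T_i ~ LN(mu_i, sigma_i^2); returns (mu_Z, sigma_Z) of the
    single log-normal matching the mean and variance of sum_i T_i.
    """
    mus = np.asarray(mus, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    mean = np.sum(np.exp(mus + sigmas ** 2 / 2))
    var = np.sum((np.exp(sigmas ** 2) - 1) * np.exp(2 * mus + sigmas ** 2))
    sigma2 = np.log(1 + var / mean ** 2)
    return np.log(mean) - sigma2 / 2, np.sqrt(sigma2)

# Changeover time after three legs (invented parameters), checked by Monte Carlo
legs_mu, legs_sigma = [4.0, 4.2, 4.1], [0.15, 0.20, 0.10]
mu_z, sigma_z = fenton_wilkinson(legs_mu, legs_sigma)
rng = np.random.default_rng(0)
samples = sum(rng.lognormal(m, s, 100_000) for m, s in zip(legs_mu, legs_sigma))
print(np.exp(mu_z + sigma_z ** 2 / 2), samples.mean())  # means agree closely
```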

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 125
264 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we mathematically present the basis of a new algorithmic trend that treats a historical reason for continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix-operator filter bank for which the matrix-operator analysis filter bank H and the matrix-operator sampling filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix-operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix-operator error signals E(ω) = F(ω) − Y(ω) between the matrix-operator input signals F(ω) and the matrix-operator output signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the concept of a category, from the field of mathematics, is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 143
263 Evaluation of the Grammar Questions at the Undergraduate Level

Authors: Preeti Gacche

Abstract:

A considerable part of undergraduate-level English examination papers is devoted to grammar; hence the grammar questions in the question papers are evaluated here, and the opinions of both students and teachers about them are obtained and analyzed. A grammar test of 100 marks was administered to 43 students to check their performance, and the question papers were evaluated by 10 different teachers, whose scores were compared. The analysis of 38 university question papers reveals that, on average, 20 percent of the marks are allotted to grammar and that almost all grammar topics are tested. Abundant use of grammatical terminology is observed in the questions, and decontextualization, repetition, the possibility of multiple correct answers, and grammatical errors in framing the questions have all been observed. The opinions of teachers and students about grammar questions vary in many respects. The students' responses are analyzed medium-wise and sex-wise; the medium of instruction at the school level and the sex of the students are found to play no role as far as interest in the study of grammar is concerned. English-medium students solve grammar questions intuitively, whereas non-English-medium students need to recollect the rules of grammar. Prepositions, verbs, articles, and modal auxiliaries are found to be easy topics for most students, whereas the use of conjunctions is the most difficult topic. Out-of-context grammar items are harder to answer than contextualized items, so contextualized texts for testing grammar items are desirable. No formal training in setting questions is imparted to teachers by competent authorities such as the university, and they need to be trained in testing. Statistically, there is no significant change in scores when the rater changes in the testing of grammar items. There is scope for future improvement: the question papers need to be evaluated, and feedback needs to be obtained from students and teachers.

Keywords: context, evaluation, grammar, tests

Procedia PDF Downloads 353
262 The Influence of Emotion on Numerical Estimation: A Drone Operators’ Context

Authors: Ludovic Fabre, Paola Melani, Patrick Lemaire

Abstract:

The goal of this study was to test whether and how emotions influence drone operators' estimation skills. The empirical study was run in the context of numerical estimation: participants saw a two-digit number together with a collection of cars and had to indicate whether the collection was larger or smaller than the number. The two-digit numbers ranged from 12 to 27, and the collections included 3-36 cars. The presentation of the collections was dynamic (each car moved 30 deg. per second to the right). Half the collections were small (fewer than 20 cars) and the other half were large (more than 20 cars). Splits between the number of cars in a collection and the two-digit number were either small (± 1 or 2 units; e.g., the collection included 17 cars and the two-digit number was 19) or large (± 8 or 9 units; e.g., 17 cars and '9'). Half the collections included more items (and half fewer items) than indicated by the two-digit number. Before and after each trial, participants saw an image inducing negative emotions (e.g., mutilations) or neutral emotions (e.g., a candle), selected from the International Affective Picture System (IAPS). At the end of each trial, participants had to say whether the second picture was the same as or different from the first. The results showed distinct effects of emotions on RTs and percent errors: participants' performance was modulated by emotions. They were slower on negative trials than on neutral trials, especially for the most difficult items, and they made more errors on small-split than on large-split problems. Moreover, participants strongly overestimated the number of cars when in a negative emotional state. These findings suggest that emotions influence numerical estimation and that the effects of emotion on estimation interact with stimulus characteristics. They have important implications for understanding the role of emotions in estimation skills and, more generally, how emotions influence cognition.

Keywords: drone operators, emotion, numerical estimation, arithmetic

Procedia PDF Downloads 116