Search results for: recurrent errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1271

401 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate through the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS accuracy can be degraded at any time, or the GNSS signal can be lost altogether. In addition, there is the possibility of malicious interference, known as jamming. An image-based navigation system can therefore solve the autonomy problem: if GNSS is disabled or degraded, image navigation continues to provide coordinate information to the INS, preserving the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetric concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. To calculate the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). Thus, if the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors on the order of 0.5 m in positioning and 0.6º in camera attitude were verified, so navigation through imagery can reach an accuracy equal to or better than that of GNSS receivers without differential correction. Therefore, navigating through imagery is a good alternative for enabling autonomous navigation.
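
A minimal sketch of the space-resection step described above, assuming OpenCV and NumPy are available: homologous points known in the object space (orthophoto coordinates plus DSM altitude) and in the image space (column, line) are used to recover the perspective center and camera attitude. All coordinates and camera intrinsics below are illustrative placeholders, not values from the paper.

```python
# Space resection sketch: camera position and attitude from homologous points.
import numpy as np
import cv2

# Object-space points: orthophoto E/N coordinates plus DSM altitude (metres)
object_points = np.array([
    [368420.5, 7391200.3, 812.4],
    [368512.8, 7391155.9, 815.1],
    [368455.2, 7391090.7, 810.8],
    [368390.1, 7391130.2, 813.6],
    [368470.9, 7391185.5, 818.2],
    [368530.4, 7391105.0, 809.3],
], dtype=np.float64)

# Image-space points: column, line of the same features in the aerial photograph
image_points = np.array([
    [1024.3,  768.9],
    [1530.7,  802.4],
    [1215.6, 1240.1],
    [ 850.2, 1010.5],
    [1290.4,  850.7],
    [1610.8, 1190.3],
], dtype=np.float64)

# Interior orientation (focal length and principal point in pixels), assumed known
camera_matrix = np.array([[3800.0,    0.0, 2000.0],
                          [   0.0, 3800.0, 1500.0],
                          [   0.0,    0.0,    1.0]])
dist_coeffs = np.zeros(5)  # assume lens distortion already removed

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)                  # rotation matrix (camera attitude)
perspective_center = (-R.T @ tvec).ravel()  # camera position in object space
print("Perspective center:", perspective_center)
print("Attitude (rotation matrix):\n", R)
```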

Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS

Procedia PDF Downloads 171
400 Blockchain Technology for Secure and Transparent Oil and Gas Supply Chain Management

Authors: Gaurav Kumar Sinha

Abstract:

The oil and gas industry, characterized by its complex and global supply chains, faces significant challenges in ensuring security, transparency, and efficiency. Blockchain technology, with its decentralized and immutable ledger, offers a transformative solution to these issues. This paper explores the application of blockchain technology in the oil and gas supply chain, highlighting its potential to enhance data security, improve transparency, and streamline operations. By leveraging smart contracts, blockchain can automate and secure transactions, reducing the risk of fraud and errors. Additionally, the integration of blockchain with IoT devices enables real-time tracking and monitoring of assets, ensuring data accuracy and integrity throughout the supply chain. Case studies and pilot projects within the industry demonstrate the practical benefits and challenges of implementing blockchain solutions. The findings suggest that blockchain technology can significantly improve trust and collaboration among supply chain participants, ultimately leading to more efficient and resilient operations. This study provides valuable insights for industry stakeholders considering the adoption of blockchain technology to address their supply chain management challenges.

Keywords: blockchain technology, oil and gas supply chain, data security, transparency, smart contracts, IoT integration, real-time tracking, asset monitoring, fraud reduction, supply chain efficiency, data integrity, case studies, industry implementation, trust, collaboration.

Procedia PDF Downloads 18
399 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs

Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle

Abstract:

Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
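A simple sketch of the model-versus-observation comparison described above: bias and RMSE of NAM wind speed against DCNet observations, followed by a linear adjustment of the model output. The arrays below are illustrative placeholders, not DCNet or NAM data.

```python
# Quantify NAM prediction errors against DCNet observations and derive a
# simple linear correction. Values are illustrative placeholders.
import numpy as np

dcnet_speed = np.array([3.2, 4.1, 2.8, 5.0, 3.7, 4.4])  # observed (m/s)
nam_speed   = np.array([4.0, 4.9, 3.5, 6.1, 4.2, 5.3])  # model output (m/s)

bias = np.mean(nam_speed - dcnet_speed)
rmse = np.sqrt(np.mean((nam_speed - dcnet_speed) ** 2))

# Simple linear adjustment of NAM outputs toward the urban observations
slope, intercept = np.polyfit(nam_speed, dcnet_speed, 1)
nam_adjusted = slope * nam_speed + intercept

print(f"bias = {bias:.2f} m/s, RMSE = {rmse:.2f} m/s")
print(f"adjustment: corrected = {slope:.2f} * NAM + {intercept:.2f}")
```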

Keywords: meteorological data, Washington D.C., DCNet data, NAM model

Procedia PDF Downloads 218
398 Design and Implementation of PD-NN Controller Optimized Neural Networks for a Quad-Rotor

Authors: Chiraz Ben Jabeur, Hassene Seddik

Abstract:

In this paper, a full approach to the modeling and control of a four-rotor unmanned air vehicle (UAV), known as a quad-rotor aircraft, is presented. A PD controller and a PD controller optimized by neural networks (PD-NN) are developed to control the quad-rotor. The goal of this work is to design a smart self-tuning PD controller, based on neural networks, able to supervise the quad-rotor for optimized behavior while tracking the desired trajectory. Many challenges can arise if the quad-rotor navigates in hostile environments presenting irregular disturbances in the form of wind added to the model on each axis. Thus, the quad-rotor is subject to three-dimensional unknown static/varying wind disturbances. The quad-rotor has to perform tasks quickly while ensuring stability and accuracy, and must react rapidly when making decisions in the face of disturbances. This technique offers some advantages over conventional control methods such as the PD controller. Simulation results are obtained in the Matlab/Simulink environment and are based on a comparative study between the PD and PD-NN controllers under wind disturbances. The disturbances are applied with several degrees of strength to test the quad-rotor behavior. The simulation results are satisfactory and demonstrate the effectiveness of the proposed PD-NN approach: this controller produces relatively smaller errors than the PD controller and has a better capability to reject disturbances. In addition, it has proven to be highly robust and efficient when facing turbulence in the form of wind disturbances.
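
For illustration, a minimal sketch of the baseline PD law used as the point of comparison with the PD-NN controller: altitude control of a quad-rotor subject to an additive, time-varying wind disturbance. The gains, mass, and disturbance profile are assumed values, not the paper's simulation settings; a PD-NN scheme would replace the fixed gains with values tuned online by the network.

```python
# Baseline PD altitude control of a quad-rotor with a varying wind disturbance.
import numpy as np

m, g, dt = 1.0, 9.81, 0.01        # mass (kg), gravity (m/s^2), time step (s)
kp, kd = 12.0, 6.0                # PD gains (assumed values)
z, z_dot = 0.0, 0.0               # initial altitude and vertical speed
z_ref = 5.0                       # desired altitude (m)

for k in range(2000):
    t = k * dt
    wind = 0.8 * np.sin(0.5 * t)                    # unknown varying disturbance (N)
    error, error_dot = z_ref - z, -z_dot
    thrust = m * (g + kp * error + kd * error_dot)  # PD control law
    z_ddot = thrust / m - g + wind / m              # simplified vertical dynamics
    z_dot += z_ddot * dt
    z += z_dot * dt

print(f"altitude after {(k + 1) * dt:.1f} s: {z:.3f} m (target {z_ref} m)")
```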

Keywords: hostile environment, PD and PD-NN controllers, quad-rotor control, robustness against disturbance

Procedia PDF Downloads 117
397 Evaluation of Existence of Antithyroid Antibodies, Anti-Thyroid Peroxidase and Anti-Thyroglobulin in Patients with Hepatitis C Viral Infections

Authors: Junaid Mahmood Alam, Sana Anwar, Sarah Sughra Asghar

Abstract:

Chronic hepatitis C viral (HCV) infection has been identified as one of the factors that can elicit autoimmune disease, resulting in the development of auto-antibodies. Furthermore, HCV is implicated in the breakdown of tolerance to self-antigens, thereby inciting auto-reactivity. In this regard, several recent and earlier studies have noted the prevalence of thyroid dysfunction and the production of anti-thyroid antibodies (ATAb), such as anti-thyroid peroxidase (AntiTPO) and anti-thyroglobulin (AntiTG), in patients with HCV. Likewise, one of the causes of aggravation of thyroid disease is interferon therapy for HCV infection, with which a number of autoimmune diseases have been associated, including Graves' disease and Hashimoto's thyroiditis. A prospective case-control study was therefore carried out at the Department of Clinical Biochemistry Lab Services and Chemical Pathology, in collaboration with the Department of Clinical Microbiology, at Liaquat National Hospital and Medical College, Karachi, Pakistan, for the period January 2015 to December 2017. Two control groups were included for comparison: control group 1, without HCV infection and with thyroid disorders (n = 20), and control group 2, with HCV infection and without thyroid disorders (n = 20), whereas the HCV-infected group comprised n = 40 patients, more than half of whom were positive for HCV IgG and/or antigen. In the HCV group, patients with existing sub-clinical hypothyroidism and clinical hyperthyroidism were fewer than 5%. Analysis showed the presence of AntiTG in 12 HCV patients (30%), AntiTPO in 15 (37.5%), and both AntiTG and AntiTPO in 10 patients (25%). Only 3 patients had a history of anti-thyroid auto-antibodies (7.5%), and one had parents or relatives with autoimmune disorders (2.5%). Patients who remained untreated numbered 12 (30%), those under treatment 18 (45%), and those with a complete course of treatment 10 (25%). According to reviews of the literature, meta-analyses of available data, and cross-sectional studies of selective cohorts (as studied in the presented research), thyroid involvement is one of the most frequent endocrine ailments associated with chronic HCV infection. Moreover, it represents an extrahepatic disease in the continuum of the HCV syndrome. In conclusion, HCV patients were more likely to develop thyroid disorders, especially the development of either ATAb or both AntiTG and AntiTPO.

Keywords: Hepatitis C viral (HCV) infection, anti-thyroid antibodies, anti-thyroid peroxidase antibodies, anti-thyroglobulin antibodies

Procedia PDF Downloads 141
396 A Spatial Approach to Model Mortality Rates

Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang

Abstract:

Human longevity has been experiencing its largest increase since the end of World War II, and modeling mortality rates is therefore often the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach, since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect and adding another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas with unusually higher or lower mortality rates than their neighbors, where the “location” of a mortality rate is measured by age and time, that is, a two-dimensional coordinate. We use a popular cluster detection method, the spatial scan statistic, a local statistical test based on the likelihood ratio test, to evaluate whether there are locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the problem of the age parameters not being constant. Next, we show that adding the cluster effect can solve the non-constant problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better-fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
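
A compact sketch of the baseline Lee–Carter fit (log m_{x,t} = a_x + b_x k_t) estimated by singular value decomposition, against which cluster effects would then be tested. The mortality matrix below is a random placeholder, not the Japanese, French, US, or Taiwanese data used in the study.

```python
# Baseline Lee-Carter fit via SVD, with an in-sample MAPE as the fit measure.
import numpy as np

rng = np.random.default_rng(0)
log_m = np.log(rng.uniform(0.001, 0.1, size=(20, 40)))  # ages x years (placeholder)

a_x = log_m.mean(axis=1)                      # age-specific average log mortality
centered = log_m - a_x[:, None]
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

b_x = U[:, 0] / U[:, 0].sum()                 # normalise so sum(b_x) = 1
k_t = s[0] * Vt[0, :] * U[:, 0].sum()         # period mortality index

fitted = a_x[:, None] + np.outer(b_x, k_t)
mape = np.mean(np.abs((np.exp(fitted) - np.exp(log_m)) / np.exp(log_m))) * 100
print(f"in-sample MAPE of the Lee-Carter fit: {mape:.2f}%")
```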

Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection

Procedia PDF Downloads 158
395 Improving Second Language Speaking Skills via Video Exchange

Authors: Nami Takase

Abstract:

Computer-mediated communication allows people to connect and interact with each other as if they were sharing the same space. The current study examined the effects of using video letters (VLs) on the development of the second language speaking skills of Common European Framework of Reference for Languages (CEFR) A1 and CEFR B2 level learners of English as a foreign language. Two groups were formed to measure the impact of VLs. The experimental and control groups were given the same topic, and both groups worked with a native English-speaking university student from the United States of America. Students in the experimental group exchanged VLs, and students in the control group used video conferencing. Pre- and post-tests were conducted to examine the effects of each practice mode. The transcribed speech-text data showed that the VL group had improved speech accuracy scores, while the video conferencing group had increased sentence complexity scores. The use of VLs may be more effective for beginner-level learners because they are able to notice their own errors and replay the videos to better understand the native speaker's speech at their own pace. Both the VL and video conferencing groups provided positive feedback regarding their interactions with native speakers. The results showed how different types of computer-mediated communication impact different areas of language learning and speaking practice, and how each type of online communication tool is suited to different teaching objectives.

Keywords: computer-assisted-language-learning, computer-mediated-communication, english as a foreign language, speaking

Procedia PDF Downloads 87
394 Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis

Authors: Indira Kothalanka

Abstract:

For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, because of which they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft-type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper-, hypo- and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems persist even after palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu-speaking individuals aged between 7 and 17 years with a history of cleft palate. Telugu is a Dravidian language spoken in the states of Andhra Pradesh and Telangana in India; it has the third largest number of native speakers in India and is the most widely spoken Dravidian language. The speech of the informants is analysed using a single-word list, sentences, a passage, and conversation. Spectrographic analysis is carried out using PRAAT, a speech analysis software package. The place and manner of articulation of consonant sounds are studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvular and velar stops, and nasal fricatives, which are non-native to Telugu.
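
A hedged sketch of the acoustic analysis step: generating a broadband spectrogram of a recorded word using the praat-parselmouth package (a Python interface to PRAAT). The file name and analysis settings below are illustrative assumptions; the study itself worked directly in PRAAT.

```python
# Broadband spectrogram of a recorded word via the parselmouth interface to PRAAT.
import numpy as np
import parselmouth

snd = parselmouth.Sound("speaker01_word.wav")        # placeholder file name
spec = snd.to_spectrogram(window_length=0.005,       # broadband analysis window
                          maximum_frequency=5000.0)  # range for stop/fricative cues

# Convert power to dB for inspection of bursts, formant transitions, etc.
power_db = 10 * np.log10(np.maximum(spec.values, 1e-12))
print("spectrogram array shape:", power_db.shape)
print("peak level (dB):", power_db.max())
```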

Keywords: cleft palate, compensatory articulation, spectrographic analysis, PRAAT

Procedia PDF Downloads 428
393 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we express the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (smcure package) and from the profile likelihood score function (our approach) are similar and comparable. The numerical performance of our proposed method is also demonstrated using the melanoma data from the smcure R package, and we compare the results with the output obtained from the smcure package.
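
A small sketch of the mixture cure formulation described above: the population survival combines a logistic incidence part (probability of being uncured) with a Cox PH latency part for the uncured subjects. The coefficients and baseline survival below are illustrative placeholders, not estimates from the melanoma data.

```python
# Mixture cure model: S_pop(t|x) = (1 - pi(x)) + pi(x) * S0(t)^exp(x'beta)
import numpy as np

def incidence(x, b):
    """Probability of being uncured, logistic in covariates x."""
    return 1.0 / (1.0 + np.exp(-(b[0] + x @ b[1:])))

def latency(t, x, beta, s0):
    """Conditional survival of uncured subjects, Cox PH form S0(t)^exp(x'beta)."""
    return s0(t) ** np.exp(x @ beta)

def population_survival(t, x, b, beta, s0):
    pi = incidence(x, b)
    return (1.0 - pi) + pi * latency(t, x, beta, s0)

s0 = lambda t: np.exp(-0.1 * t)      # placeholder baseline survival
x = np.array([1.0, 0.5])             # two covariates for one subject
b = np.array([-0.3, 0.8, -0.4])      # logistic intercept + coefficients
beta = np.array([0.6, -0.2])         # Cox PH coefficients

print(population_survival(np.array([1.0, 5.0, 20.0]), x, b, beta, s0))
```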

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 126
392 Merits and Demerits of Participation of Fellow Examinee as Subjects in Observed Structured Practical Examination in Physiology

Authors: Mohammad U. A. Khan, Md. D. Hossain

Abstract:

Background: The Department of Physiology finds it difficult to arrange ‘subjects’ for practical procedures. To avoid this difficulty, fellow examinees from another group may be used as subjects. Objective: To find out the merits and demerits of using fellow examinees as subjects in the practical procedure. Method: This cross-sectional descriptive study was conducted in the Department of Physiology, Noakhali Medical College, Bangladesh, during May–June 2014. Forty-two first-year undergraduate medical students from a selected public medical college of Bangladesh were enrolled purposively. Consent of the students and the authority was taken. Eighteen of them were selected as subjects and designated as subject-examinees. The other fellow examinees (non-subjects) measured their blood pressure and pulse as part of an ‘observed structured practical examination’ (OSPE). The opinions of all examinees regarding the merits and demerits of using fellow examinees as subjects in the practical procedure were recorded. Result: Examinees stated that they could perform their practical procedure without nervousness (24/42, 57.14%), accurately and comfortably (14/42, 33.33%), and that subjects were made available without wasting time (2/42, 4.76%). Nineteen students (45.24%) found no disadvantage, and 2 (4.76%) felt embarrassed when the subject was of the opposite sex. The subject-examinees stated that they could learn from the errors made by their fellow examinees (11/18, 61.1%). Of the non-subject examinees, 75% expressed their willingness to be subjects so that they could learn from their fellows’ errors. Conclusion: Using fellow examinees as subjects is beneficial for both the non-subject and subject examinees. Funding sources: Navana, Beximco, Unihealth, Square & Acme Pharma, Bangladesh Ltd.

Keywords: physiology, teaching, practical, OSPE

Procedia PDF Downloads 135
391 Comparative Evaluation of Pharmacologically Guided Approaches (PGA) to Determine Maximum Recommended Starting Dose (MRSD) of Monoclonal Antibodies for First Clinical Trial

Authors: Ibraheem Husain, Abul Kalam Najmi, Karishma Chester

Abstract:

First-in-human (FIH) studies are a critical step in the clinical development of any molecule that has shown therapeutic promise in preclinical evaluations, since the translation of preclinical research and safety studies into clinical development is a crucial step in the successful development of monoclonal antibodies for the treatment of human diseases. Therefore, comparisons were made between the USFDA approach and nine pharmacologically guided approaches (PGA) (simple allometry, maximum life-span potential, brain weight, rule of exponents (ROE), two-species methods and one-species methods) to determine the maximum recommended starting dose (MRSD) for first-in-human clinical trials, using four drugs, namely Denosumab, Bevacizumab, Anakinra and Omalizumab. In our study, the predicted pharmacokinetic (PK) parameters and the estimated first-in-human doses of the antibodies were compared with the observed human values. The study indicated that the clearance and volume of distribution of antibodies can be predicted with reasonable accuracy in humans, and a good estimate of the first human dose can be obtained from the predicted human clearance and volume of distribution. A pictorial method-evaluation chart, based on fold errors, was also developed for the simultaneous evaluation of the various methods.
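
A minimal sketch of the simple-allometry step used in such comparisons: animal clearance is scaled to a predicted human clearance with a 0.75 exponent, a starting dose is derived from a target exposure, and a fold error is computed against an observed value. All numbers are illustrative placeholders, not values for the four antibodies studied.

```python
# Simple allometric scaling of clearance and a fold-error check.
def allometric_clearance(cl_animal_ml_min, w_animal_kg, w_human_kg=70.0,
                         exponent=0.75):
    """Predicted human clearance by simple allometry."""
    return cl_animal_ml_min * (w_human_kg / w_animal_kg) ** exponent

cl_monkey = 0.25                       # mL/min in a 5 kg monkey (placeholder)
cl_human = allometric_clearance(cl_monkey, w_animal_kg=5.0)

target_auc_ug_min_ml = 5000.0          # target exposure (placeholder)
dose_ug = target_auc_ug_min_ml * cl_human   # dose = AUC x CL
fold_error = lambda pred, obs: max(pred / obs, obs / pred)

print(f"predicted human CL: {cl_human:.2f} mL/min")
print(f"estimated starting dose: {dose_ug / 1000:.1f} mg")
print(f"fold error vs an observed CL of 2.0 mL/min: {fold_error(cl_human, 2.0):.2f}")
```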

Keywords: clinical pharmacology (CPH), clinical research (CRE), clinical trials (CTR), maximum recommended starting dose (MRSD), clearance and volume of distribution

Procedia PDF Downloads 361
390 A Gradient Orientation Based Efficient Linear Interpolation Method

Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar

Abstract:

This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-dimension video/image to a high-dimension video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at very low complexity, so that real-time processing of video frames is possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging, and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions and edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculation of slopes using the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
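
A simplified sketch of the two ingredients the method combines: Prewitt-based separation of edge and uniform regions, and the gradient orientation that would guide the interpolation direction for edge pixels. The threshold and test image are assumed, and the directional interpolation itself is only indicated in the comments.

```python
# Prewitt gradients: edge/uniform classification and local edge orientation.
import numpy as np
from scipy import ndimage

def classify_and_orient(image, edge_threshold=30.0):
    gx = ndimage.prewitt(image.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.prewitt(image.astype(float), axis=0)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)                    # slope of the local edge
    edge_mask = magnitude > edge_threshold              # edges vs uniform regions
    return edge_mask, orientation

# Uniform regions: plain line averaging. Edge regions would instead be
# interpolated along 'orientation' of neighbouring known edge pixels, with a
# bilateral filter applied to the interpolated edges as post-processing.
image = np.random.randint(0, 256, (64, 64)).astype(float)  # placeholder image
edge_mask, orientation = classify_and_orient(image)
print("edge pixels:", int(edge_mask.sum()), "of", image.size)
```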

Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing

Procedia PDF Downloads 249
389 Foot Self-Monitoring Knowledge, Attitude, Practice, and Related Factors among Diabetic Patients: A Descriptive and Correlational Study in a Taiwan Teaching Hospital

Authors: Li-Ching Lin, Yu-Tzu Dai

Abstract:

Recurrent foot ulcers and foot amputation have a major impact on patients with diabetes mellitus (DM), medical professionals, and society. A critical procedure for foot care is foot self-monitoring, and medical professionals' understanding of patients' foot self-monitoring knowledge, attitude, and practice is beneficial for raising patients' disease awareness. This descriptive, correlational study investigated these and related factors among patients with DM. A scale for measuring the foot self-monitoring knowledge, attitude, and practice of patients with DM was used. Purposive sampling was adopted, and 100 samples were collected from the respondents' self-reports or from interviews. The statistical methods employed were the independent-sample t-test, one-way analysis of variance, the Pearson correlation coefficient, and multivariate regression analysis. The findings were as follows: the respondents scored an average of 12.97 on foot self-monitoring knowledge, and the correct answer rate was 68.26%. The respondents performed relatively poorly on foot health screening and recording, and on awareness of neuropathy in the foot. The respondents held a positive attitude toward self-monitoring their feet and a negative attitude toward having others check the soles of their feet. The respondents scored an average of 12.64 on foot self-monitoring practice. Their scores were lower for the frequency of self-monitoring their feet, recording their self-monitoring results, checking their pedal pulse, and examining whether their soles were red immediately after taking off their shoes. Significant positive correlations were observed among foot self-monitoring knowledge, attitude, and practice. The correlation coefficient between self-monitoring knowledge and self-monitoring practice was 0.20, and that between self-monitoring attitude and self-monitoring practice was 0.44. Stepwise regression analysis revealed that the main predictive factors of foot self-monitoring practice in patients with DM were foot self-monitoring attitude, prior experience in foot care, and an educational attainment of college or higher; these factors predicted 33% of the variance. This study concludes that patients with DM lack foot self-monitoring practice and advises that patients' self-monitoring abilities be evaluated first, including whether they have poor eyesight or difficulty bending forward due to obesity, and whether someone is available to assist them in self-monitoring. In addition, patient education should emphasize self-monitoring knowledge and practice, such as awareness of the symptoms of foot neurovascular lesions, pulse monitoring methods, and new foot self-monitoring equipment. By doing so, new or recurring ulcers may be discovered in their early stages.

Keywords: diabetic foot, foot self-monitoring attitude, foot self-monitoring knowledge, foot self-monitoring practice

Procedia PDF Downloads 181
388 Automated User Story Driven Approach for Web-Based Functional Testing

Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam

Abstract:

Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language. However, manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that can automatically derive test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We also present a detailed methodology for writing our test-ready user stories. Our tool “Test-o-Matic” automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open-source Selenium IDE. We evaluate our approach on a case study, an open-source web-based application. The effectiveness of our approach is evaluated by seeding faults in the case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for automated testing of web applications.
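
A hedged sketch of the idea behind such a transformation step: parse a user story written in a restricted template and emit a test-case skeleton that a Selenium-based runner could execute. The template and field names here are illustrative assumptions, not Test-o-Matic's actual grammar.

```python
# Parse a restricted user story into a test-case skeleton.
import re

STORY_TEMPLATE = re.compile(
    r"As a (?P<role>.+), I want to (?P<action>.+) so that (?P<benefit>.+)\.")

def story_to_test_case(story: str) -> dict:
    match = STORY_TEMPLATE.match(story.strip())
    if not match:
        raise ValueError("story does not follow the restricted template")
    fields = match.groupdict()
    return {
        "name": "test_" + re.sub(r"\W+", "_", fields["action"]).strip("_"),
        "precondition": f"user is logged in as {fields['role']}",
        "steps": [f"perform: {fields['action']}"],
        "expected": fields["benefit"],
    }

story = "As a registered user, I want to reset my password so that I can regain access."
print(story_to_test_case(story))
```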

Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing

Procedia PDF Downloads 370
387 The Development of Documentary Filmmaking in Early Independent India

Authors: Camille Deprez

Abstract:

This paper presents research findings of an ongoing Hong Kong government-funded project on ‘The Documentary Film in India (1948-1975)’ (GRF 1240314), for which extensive research fieldwork has been carried out in various archives in India. The project investigates the role and significance of the Indian documentary film sector from the inauguration of the state-sponsored Films Division, one year after independence in 1948, until the declaration of a ‘State of Emergency’ in 1975. The documentary film production of this first period of national independence was characterised by increasing formal experimentation and analytical social and political enquiry, and by a complex, mixed structure of state-sponsored monopoly and free-market operation. However, that production remains significantly under-researched. What were the main production, distribution and exhibition strategies over this period? What were the recurrent themes and stylistic features of the films produced? In the new context of national independence (in which the State considered film as a means of mass persuasion), consolidation of the commercial film, and the emergence of television and art cinema, what role did official, professional and creative factors play in the development of the documentary film sector? What was the impact of such films, and what challenges did the documentary film face in India? Based upon the cross-analysis of primary written research documents, interviews and relevant films, this study interweaves an empirical study of the sector's financing, production, distribution and exhibition strategies, as well as the films' content and form, with the larger historical context of India over the period from 1948 to 1975. Whilst most of the films made within the sector explored social issues, they were rarely able to do so from an overtly critical perspective. However, this paper analyses the contribution of important filmmakers and producers, including Ezra Mir, Paul Zils, Jean Bhownagary, S. Sukhdev, S. N. S. Sastri, and P. Pati, to the development of the Indian documentary film sector and style within and outside the remits of Films Division. It more specifically assesses the extent to which they criticised the State, showed the inequalities in Indian society and explored film form.

Keywords: documentary film, film archives, film history, India

Procedia PDF Downloads 280
386 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment

Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen

Abstract:

Indel analysis of RNA sequencing data from clinical samples is easily affected by sequencing errors and software selection. In order to improve the efficiency and accuracy of the analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. The system includes local sequence assembly and realignment, target-point snapshot, and image-based recognition processes. We integrated high-confidence indel datasets from several known databases as a training set to improve the accuracy of image processing, and added a bioinformatic processing module to annotate and filter indel artifacts. The system then automatically generates a results report, including data quality levels and image evidence. Sanger sequencing verification of the reference indel mutations of the cell line NA12878 showed that the process achieves 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency showed a significant improvement. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. The result may be useful for RNA studies of clinical samples with microsatellite instability in future immunotherapy applications.

Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome

Procedia PDF Downloads 166
385 EasyModel: Web-Based Bioinformatics Software for Protein Modeling Based on Modeller

Authors: Alireza Dantism

Abstract:

Presently, describing the function of a protein sequence is one of the most common problems in biology. Usually, this problem can be approached by studying the three-dimensional structure of the protein. In the absence of an experimental protein structure, comparative modeling often provides a useful three-dimensional model of the protein that is based on at least one known protein structure. Comparative modeling predicts the three-dimensional structure of a given protein sequence (target) mainly based on its alignment with one or more proteins of known structure (templates). Comparative modeling consists of five main steps: 1. detection of similarity between the target sequence and at least one known template structure; 2. alignment of the target sequence and the template(s); 3. building a model based on the alignment with the selected template(s); 4. prediction of model errors; 5. optimization of the built model. There are many computer programs and web servers that automate the comparative modeling process. One of their most important advantages is that they make comparative modeling available to both experts and non-experts, who can easily do their own modeling without the need for programming knowledge; some experts, however, prefer to do their modeling manually through scripting, because in this way they can maximize its accuracy. In this study, a web-based tool has been designed to predict the tertiary structure of proteins using the PHP and Python programming languages. This tool is called EasyModel. According to the user's inputs, EasyModel can receive the unknown target sequence, one or more template structures that share a percentage of sequence similarity with the target, and related parameters; it then predicts the tertiary structure of the target and presents the results in the form of graphs and generated protein structure files.
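
A minimal comparative-modeling script of the kind such a tool automates, following the standard MODELLER workflow (alignment file, template code, target code). It assumes a licensed MODELLER installation; the file and entry names are placeholders, not EasyModel's internal configuration.

```python
# Basic MODELLER comparative modeling run from a target-template alignment.
from modeller import environ, log
from modeller.automodel import automodel

log.verbose()
env = environ()
env.io.atom_files_directory = ['.']           # where template PDB files live

a = automodel(env,
              alnfile='target_template.ali',  # target-template alignment (placeholder)
              knowns='templateA',             # code of the template structure
              sequence='target')              # code of the target sequence
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.make()                                      # objective-function scores are reported per model
```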

Keywords: structural bioinformatics, protein tertiary structure prediction, modeling, comparative modeling, modeller

Procedia PDF Downloads 76
384 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building

Authors: Aaditya U. Jhamb

Abstract:

Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores 8 factors that help determine energy efficiency for a building: relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution, with Tsanas and Xifara providing the dataset. The dataset comprises 768 different residential building models and is used to predict heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing the quality of people's lives. The paper therefore studied the magnitude of the correlation between these input factors and the two output variables using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most conclusive model was the Decision Tree Regressor, which had a mean squared error of 0.258, whilst the least definitive model was the Isotonic Regressor, which had a mean squared error of 21.68. This paper also investigated the KNN Regressor and Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the 8 input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely.
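
A sketch of the best-performing modeling step on the Tsanas and Xifara energy-efficiency data: a decision tree regressor predicting heating and cooling loads from the eight building descriptors. The CSV path and column names are assumptions about how the dataset might be exported, not the paper's exact setup.

```python
# Decision tree regression of heating and cooling loads from 8 building factors.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

data = pd.read_csv("energy_efficiency.csv")          # 768 building variants (placeholder path)
features = ["relative_compactness", "surface_area", "wall_area", "roof_area",
            "overall_height", "orientation", "glazing_area",
            "glazing_area_distribution"]
targets = ["heating_load", "cooling_load"]

X_train, X_test, y_train, y_test = train_test_split(
    data[features], data[targets], test_size=0.2, random_state=42)

model = DecisionTreeRegressor(random_state=42).fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"test mean squared error: {mse:.3f}")
```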

Keywords: energy efficient buildings, heating load, cooling load, machine learning models

Procedia PDF Downloads 80
383 The Influence of Alvar Aalto on the Early Work of Álvaro Siza

Authors: Eduardo Jorge Cabral dos Santos Fernandes

Abstract:

The expression ‘Porto School’, usually associated with an educational institution, the School of Fine Arts of Porto, is applied for the first time with the sense of an architectural trend by Nuno Portas in a text published in 1983. The expression is used to characterize a set of works by Porto architects, in which common elements are found, namely the desire to reuse languages and forms of the German and Dutch rationalism of the twenties, using the work of Alvar Aalto as a mediation for the reinterpretation of these models. In the same year, Álvaro Siza classifies the Finnish architect as a miscegenation agent who transforms experienced models and introduces them to different realities in a text published in Jornal de Letras, Artes e Ideias. The influence of foreign models and their adaptation to the context has been a recurrent theme in Portuguese architecture, which finds important contributions in the writings of Alexandre Alves Costa, at this time. However, the identification of these characteristics in Siza’s work is not limited to the Portuguese theoretical production: it is the recognition of this attitude towards the context that leads Kenneth Frampton to include Siza in the restricted group of architects who embody Critical Regionalism (in his book Modern architecture: a critical history). For Frampton, his work focuses on the territory and on the consequences of the intervention in the context, viewing architecture as a tectonic fact rather than a series of scenographic episodes and emphasizing site-specific aspects (topography, light, climate). Therefore, the motto of this paper is the dichotomous opposition between foreign influences and adaptation to the context in the early work of Álvaro Siza (designed in the sixties) in which the influence (theoretical, methodological, and formal) of Alvar Aalto manifests itself in the form and the language: the pool at Quinta da Conceição, the Seaside Pools and the Tea House (three works in Leça da Palmeira) and the Lordelo Cooperative (in Porto). This work is part of a more comprehensive project, which considers several case studies throughout the Portuguese architect's vast career, built in Portugal and abroad, in order to obtain a holistic view.

Keywords: Alvar Aalto, Álvaro Siza, foreign influences, adaptation to the context

Procedia PDF Downloads 7
382 Assessment of Factors Influencing Adoption of Agroforestry Technologies in Halaba Special Woreda, Southern Ethiopia

Authors: Mihretu Erjabo

Abstract:

Halaba special district is characterized by drought, soil erosion, high population pressure, poor livestock production, lack of feed for livestock, a very deep water table, very low crop productivity, and food insufficiency. In order to address these problems, agroforestry, along with other management practices such as physical soil conservation measures, was introduced decades ago by the woreda agricultural development office as a means to alleviate the situation. However, the level of agroforestry adoption remains low. The objective of this study was to identify the factors that influence the adoption of agroforestry technologies by farmers in the district. Random sampling was employed to select two kebele administrations and respondents. Data collection was conducted through a rural household questionnaire survey, participatory rural appraisal, questionnaires for local and woreda extension staff, secondary data sources, and field observation. A sample of 12 key informants, 6 extension staff, and 182 households was used in the data collection. A chi-square test was used to determine significant relationships between the adoption of agroforestry and 15 selected variables, of which eleven were found to significantly affect farmers' adoption, as illustrated in the sketch below. These were frequency of visits by farmers (13.39%), participation in training (11.49%), farmers' attitude towards agroforestry practices (10.61%), frequency of visits by extensionists (10.38%), participation in extension meetings (10.34%), participation in field days (10.28%), landholding size (9.29%), level of literacy (8.78%), awareness of the importance of agroforestry technology packages (7.06%), time taken from the residence to the nearest extension office (5.04%), and gender of respondents (3.34%). This study also identified various factors that result in low adoption rates of agroforestry, including fear of competition; shortages of seedlings, rainfall, and labour; free grazing; financial constraints; the perception of trees as soil degraders; the long maturation span of trees; and the lack of needs ranking. To improve farmers' adoption, the factors identified should be addressed by launching a sustained and recurrent outreach extension program suited to farmers' needs.
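
A small sketch of the chi-square test used to relate adoption to each of the fifteen variables, shown here for a single factor (participation in training). The contingency counts are illustrative, not the survey data.

```python
# Chi-square test of independence between training participation and adoption.
import numpy as np
from scipy.stats import chi2_contingency

#                 adopters  non-adopters
table = np.array([[62,       29],          # participated in training
                  [31,       60]])         # did not participate

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("participation in training is significantly associated with adoption")
```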

Keywords: farmers attitude, farmers participation, soil degradation, technology packages

Procedia PDF Downloads 142
381 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is very common but sophisticated. Many advanced input settings and considerable computational effort are required to optimize a design and reduce the construction cost. Optimizing a design usually requires huge numerical models. If the optimization is conducted manually, there is a potentially dangerous consequence of human error, and the time spent on input and on extracting data from the output is significant. This paper presents an automation process introduced to the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code changes the geological stratum and excavation depth under groundwater flow conditions in each 20 m section, and automatically conducts trial and error to determine the required pile length and the use of props needed to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is activated. Results showed that the approach saves time, increases efficiency, lowers design costs, and replaces manual labor to minimize error.
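
A schematic of the trial-and-error loop that such an automation wraps around the finite element runs. The real Plaxis 2D remote-scripting calls are replaced here by a stub analysis function so the control logic runs standalone; all numbers and the stub's behaviour are illustrative assumptions, not the project's design values.

```python
# Automated trial-and-error design loop (Plaxis runs replaced by a stub).
from dataclasses import dataclass

MAX_PILE_LENGTH = 30.0   # m, assumed default maximum embedment
TARGET_FOS = 1.4         # required factor of safety (assumed)
TARGET_DISP = 0.025      # m, target wall displacement (assumed)

@dataclass
class Result:
    fos: float
    displacement: float
    moment: float

def run_stub_analysis(retained_height, pile_length, pile_diameter, use_props):
    """Stand-in for a Plaxis 2D run: longer/larger piles and props help."""
    support = pile_length * pile_diameter * (1.6 if use_props else 1.0)
    return Result(fos=0.5 * support / retained_height,
                  displacement=0.1 * retained_height / support,
                  moment=90.0 * retained_height / (1.6 if use_props else 1.0))

def moment_capacity(pile_diameter):
    return 1200.0 * pile_diameter ** 3   # kNm, illustrative only

def design_section(retained_height, pile_length=12.0, pile_diameter=0.6,
                   use_props=False):
    while True:
        r = run_stub_analysis(retained_height, pile_length, pile_diameter, use_props)
        if r.moment > moment_capacity(pile_diameter):
            pile_diameter += 0.15                 # increase pile size
        elif r.fos < TARGET_FOS or r.displacement > TARGET_DISP:
            if pile_length < MAX_PILE_LENGTH:
                pile_length += 2.0                # lengthen the pile
            elif not use_props:
                use_props = True                  # switch on the prop system
            else:
                raise RuntimeError("no feasible design for this section")
        else:
            return round(pile_length, 1), round(pile_diameter, 2), use_props

for chainage in range(0, 201, 20):                # every 20 m of the 200 m tunnel
    height = 4.0 + 4.0 * (1 - abs(chainage - 100) / 100)  # peak at mid-chainage
    print(chainage, design_section(height))
```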

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 39
380 Analytical Development of a Failure Limit and Iso-Uplift Curves for Eccentrically Loaded Shallow Foundations

Authors: N. Abbas, S. Lagomarsino, S. Cattari

Abstract:

Examining existing experimental results for shallow rigid foundations subjected to a vertical centric load (N), accompanied or not by a bending moment (M), two main non-linear mechanisms governing the cyclic response of the soil-foundation system can be distinguished: foundation uplift and soil yielding. A soil-foundation failure limit is defined as a domain of resistance in the two-dimensional (2D) load space (N, M), inside of which lie all the admissible combinations of loads; these correspond to a purely elastic, non-linear elastic, or plastic behavior of the soil-foundation system, while the points lying on the failure limit correspond to combinations of loads leading to failure of the soil-foundation system. In this study, the proposed resistance domain is constructed analytically based on mechanics. Original elastic limit, uplift initiation limit, and iso-uplift limits are constructed inside this domain. These limits give a prediction of the mechanisms activated for each combination of loads applied to the foundation. A comparison of the proposed failure limit with experimental tests existing in the literature shows interesting results. Also, the developed uplift initiation limit and iso-uplift curves are compared with others already proposed in the literature and widely used due to the absence of alternatives; remarkable differences are noted, showing evident errors in the past proposals and the improved accuracy of those given in the present work.
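
For illustration only, a classical simplified N-M interaction envelope for a rigid strip footing of width B (effective-width concept), together with the middle-third uplift-initiation line at e = B/6. This reproduces the generic shape of such resistance domains, not the limits derived analytically in the paper; B and N_max are assumed values.

```python
# Classical parabolic N-M failure envelope and uplift-initiation line.
import numpy as np

B = 2.0          # footing width (m), assumed
N_max = 1000.0   # vertical bearing capacity (kN), assumed

N = np.linspace(0.0, N_max, 201)
M_failure = 0.5 * N * B * (1.0 - N / N_max)   # parabolic failure envelope
M_uplift = N * B / 6.0                        # uplift starts when e = M/N > B/6

# (N, M) pairs below both curves are admissible without uplift; between them
# uplift occurs before failure; points above M_failure cannot be sustained.
peak = N[np.argmax(M_failure)]
print(f"maximum moment capacity {M_failure.max():.0f} kN*m at N = {peak:.0f} kN")
```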

Keywords: foundation uplift, iso-uplift curves, resistance domain, soil yield

Procedia PDF Downloads 372
379 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using SARA Algorithm

Authors: Muhammad Bilal, Zhongfeng Qiu

Abstract:

Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the Simplified Aerosol Retrieval Algorithm (SARA) over the urban area of Beijing for the year 2016. The SARA requires top-of-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), the relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15·AOD)). Results showed that the high spatiotemporal GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations were within the EE. The results suggest that the SARA is robust and able to retrieve high-resolution spatiotemporal AOD observations over urban areas using a geostationary satellite.
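
A short sketch of the validation statistics reported above: correlation, RMSE, relative percent mean error, and the fraction of retrievals falling within the expected-error envelope EE = ±(0.05 + 0.15·AOD). The AOD arrays are illustrative placeholders, not the GOCI or AERONET data.

```python
# Validation metrics for retrieved AOD against AERONET measurements.
import numpy as np

aeronet_aod = np.array([0.21, 0.45, 0.80, 0.33, 1.10, 0.60])
goci_aod    = np.array([0.19, 0.50, 0.74, 0.36, 1.02, 0.66])

diff = goci_aod - aeronet_aod
rmse = np.sqrt(np.mean(diff ** 2))
rpme = np.mean(np.abs(diff) / aeronet_aod) * 100.0

ee = 0.05 + 0.15 * aeronet_aod                  # expected error envelope
within_ee = np.mean(np.abs(diff) <= ee) * 100.0
r = np.corrcoef(goci_aod, aeronet_aod)[0, 1]

print(f"R = {r:.2f}, RMSE = {rmse:.3f}, RPME = {rpme:.1f}%, within EE = {within_ee:.0f}%")
```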

Keywords: AEORNET, AOD, SARA, GOCI, Beijing

Procedia PDF Downloads 153
378 Ambivalence as Ethical Practice: Methodologies to Address Noise, Bias in Care, and Contact Evaluations

Authors: Anthony Townsend, Robyn Fasser

Abstract:

While complete objectivity is a desirable scientific position from which to conduct a care and contact evaluation (CCE), it is precisely the recognition that we are inherently incapable of operating objectively that is the foundation of ethical practice and skilled assessment. Drawing upon recent research from Daniel Kahneman (2021) on the differences between noise and bias, as well as the inherent biases collectively termed “The Elephant in the Brain” by Kevin Simler and Robin Hanson (2019, Oxford University Press), this presentation addresses the various ways in which our judgments, perceptions, and even procedures can be distorted and contaminated while conducting a CCE, and considers the value of second-order cybernetics and the psychodynamic concept of ‘ambivalence’ as a conceptual basis to inform our assessment methodologies, so as to limit such errors or at least better identify them. Both a conceptual framework for ambivalence, our higher-order capacity to allow the convergence and consideration of multiple emotional experiences and cognitive perceptions to inform our reasoning, and a practical methodology for assessment relying on data triangulation, Bayesian inference, and hypothesis testing are presented as a means of promoting ethical practice for healthcare professionals conducting CCEs. An emphasis on widening awareness and perspective, limiting ‘splitting’, is demonstrated both in how this form of emotional processing plays out in alienating family dynamics and in the assessment thereof. In addressing this concept, this presentation aims to illuminate the value of ambivalence as foundational to ethical practice for assessors.

Keywords: ambivalence, forensic, psychology, noise, bias, ethics

Procedia PDF Downloads 75
377 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer is now a pressing concern in medical science, and the rising incidence of such lesions is seriously affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, as the stored image contains irregularities around the lesion. The proposed approach first locates the region of interest in the extracted skin image, and image segmentation models are applied to remove these disturbances from the picture. Results: After segmentation, feature extraction is performed using a genetic algorithm (GA), and finally classification is performed between the training and test data to evaluate the image, which helps doctors reach the right prediction. To improve on the existing system, we set our objectives through an analysis; the efficiency of the selection process and of histogram enhancement is essential in that respect. The GA is applied, with its associated accuracy, to reduce the false-positive rate. Conclusions: The objective of this work is to improve effectiveness, and the GA accomplishes this by bringing down the false-positive rate. The approach draws on a combination of deep learning and medical image processing, which provides superior accuracy, and consistent processing supports reusability without errors.
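
A compact sketch of the pre-processing and segmentation stages described above: histogram enhancement, Otsu thresholding, and morphological clean-up of a dermoscopic image. The file name is a placeholder; feature extraction with a genetic algorithm and classification would follow on the segmented lesion.

```python
# Histogram enhancement, Otsu segmentation, and morphological clean-up.
import cv2
import numpy as np

gray = cv2.imread("lesion.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder path
enhanced = cv2.equalizeHist(gray)                         # histogram augmentation
blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

# Otsu's method separates the lesion from the surrounding skin
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening/closing removes hair artefacts and fills small holes
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

lesion_fraction = mask.mean() / 255.0
print(f"segmented lesion covers {lesion_fraction:.1%} of the image")
```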

Keywords: computer-aided system, detection, image segmentation, morphology

Procedia PDF Downloads 133
376 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on the detection of R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel signal quality index (SQI) assessment technique. SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the corresponding Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual HR estimates. The method was evaluated on the MIMIC II database from PhysioNet, containing bedside monitor recordings of ICU patients. The method provides an accurate HR estimate even in the presence of noise and artifacts.
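
A brief sketch of two building blocks described above: the Teager-Kaiser energy operator used to emphasise R-peaks, and an SQI-weighted fusion of per-channel heart rate estimates. The signals, SQIs, and weighting rule are simplified placeholders for the Kalman-filter-based scheme in the paper.

```python
# Teager-Kaiser energy and a simple SQI-weighted fusion of HR estimates.
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def fuse_heart_rates(hr_estimates, sqis):
    """Weight each channel's HR estimate by its signal quality index."""
    hr = np.asarray(hr_estimates, dtype=float)
    w = np.asarray(sqis, dtype=float)
    return np.sum(w * hr) / np.sum(w)

# Per-channel HR estimates (bpm) from ECG, ABP, EEG, EMG, EOG filters, with the
# ECG channel heavily corrupted (low SQI) in this example.
hr_estimates = [132.0, 78.5, 79.2, 80.1, 77.8]
sqis         = [0.05, 0.92, 0.61, 0.55, 0.48]
print(f"fused HR: {fuse_heart_rates(hr_estimates, sqis):.1f} bpm")
```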

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 683
375 Internal Audit and the Effectiveness and Efficiency of Operations in Hospitals

Authors: Naziru Suleiman

Abstract:

The ever-increasing cases of financial fraud and corporate accounting scandals in recent years have raised concern about the operation of internal control mechanisms and the performance of internal audit departments in organizations. In most cases, the seeming presence of both an internal control system and an internal audit in organizations does not prove useful, as frauds, errors, and irregularities continue to be perpetrated. The aim of this study, therefore, is to assess the role of internal audit in achieving the objectives of the internal control system of federal hospitals in Kano State from the perception of the respondents. The study used a survey research design and generated data from primary sources by means of a questionnaire. A total of 100 copies of the questionnaire were administered, out of which 68 were duly completed and returned. Cronbach's alpha was used to test the internal validity of the various items in the constructs. Descriptive statistics, the chi-square test, the Mann-Whitney U test, and the Kruskal-Wallis ANOVA were employed for the analysis of the data. The study finds that, from the perception of the respondents, internal audit departments in federal hospitals in Kano State are effective and contribute positively to the overall attainment of the objectives of the internal control systems of these hospitals. No significant difference was found in the views of the respondents from the three hospitals. Hence, the study concludes that a strong and functional internal audit department is a basic requirement for the effectiveness of the internal control system. In light of the findings, it is recommended that internal audit should continue to ensure that the objectives of the internal control systems of these hospitals are achieved through proper and adequate evaluation and review of the system.
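
A short sketch of the internal-consistency check mentioned above: Cronbach's alpha computed from the item scores of a questionnaire. The response matrix is an illustrative placeholder (68 respondents by 8 Likert-scale items), not the study's survey data.

```python
# Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)   # shape: respondents x items
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
base = rng.integers(2, 5, size=(68, 1))                # common attitude component
items = np.clip(base + rng.integers(-1, 2, size=(68, 8)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```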

Keywords: internal audit, internal control, federal hospitals, financial frauds

Procedia PDF Downloads 334
374 Design of an Improved Distributed Framework for Intrusion Detection System Based on Artificial Immune System and Neural Network

Authors: Yulin Rao, Zhixuan Li, Burra Venkata Durga Kumar

Abstract:

Intrusion detection refers to monitoring the actions of internal and external intruders on the system and detecting the behaviours that violate security policies in real-time. In intrusion detection, there has been much discussion about the application of neural network technology and artificial immune system (AIS). However, many solutions use static methods (signature-based and stateful protocol analysis) or centralized intrusion detection systems (CIDS), which are unsuitable for real-time intrusion detection systems that need to process large amounts of data and detect unknown intrusions. This article proposes a framework for a distributed intrusion detection system (DIDS) with multi-agents based on the concept of AIS and neural network technology to detect anomalies and intrusions. In this framework, multiple agents are assigned to each host and work together, improving the system's detection efficiency and robustness. The trainer agent in the central server of the framework uses the artificial neural network (ANN) rather than the negative selection algorithm of AIS to generate mature detectors. Mature detectors can distinguish between self-files and non-self-files after learning. Our analyzer agents use genetic algorithms to generate memory cell detectors. This kind of detector will effectively reduce false positive and false negative errors and act quickly on known intrusions.
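A minimal sketch of the trainer agent's idea: an artificial neural network learns to separate "self" (normal) traffic features from "non-self" (intrusive) ones, taking the place of negative selection for detector generation. The synthetic feature vectors below are placeholders for real traffic data, and the network size is an assumption.

```python
# ANN-based self/non-self detector trained on synthetic traffic features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 10))     # self
attack = rng.normal(loc=2.5, scale=1.2, size=(500, 10))     # non-self
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
detector = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                         random_state=0).fit(X_train, y_train)
print(f"detector accuracy on held-out traffic: {detector.score(X_test, y_test):.2f}")
```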

Keywords: artificial immune system, distributed artificial intelligence, multi-agent, intrusion detection system, neural network

Procedia PDF Downloads 97
373 Corrective Feedback and Uptake Patterns in English Speaking Lessons at Hanoi Law University

Authors: Nhac Thanh Huong

Abstract:

New teaching methods have led to changes in teachers' roles in the English class, of which error correction is an integral part. Language errors and corrective feedback have been of interest to many researchers in foreign language teaching. However, the techniques and the effectiveness of teachers' feedback have been a matter of much controversy. The present case study was carried out with a view to finding out the patterns of teachers' corrective feedback and their impact on students' uptake in English speaking lessons for legal English major students at Hanoi Law University. In order to achieve those aims, the study makes use of classroom observations as the main method of data collection to seek answers to the two following questions: 1. What patterns of corrective feedback occur in English speaking lessons for second-year legal English major students at Hanoi Law University? 2. To what extent does that corrective feedback lead to students' uptake? The study provided some important findings, among which was a close relationship between corrective feedback and uptake. In particular, recast was the most commonly used feedback type, yet it was the least effective in terms of students' uptake and repair, while the most successful feedback types, namely meta-linguistic feedback, clarification requests, and elicitation, which led to student-generated repair, were used at a much lower rate by teachers. Furthermore, the study revealed that different types of errors need different types of feedback, and that the use of feedback depends on the students' English proficiency level. In light of the findings, a number of pedagogical implications have been drawn in the hope of enhancing the effectiveness of teachers' corrective feedback on students' uptake in the foreign language acquisition process.

Keywords: corrective feedback, error, uptake, speaking English lesson

Procedia PDF Downloads 242
372 Approaching In vivo Dosimetry for Kilovoltage X-Ray Radiotherapy

Authors: Rodolfo Alfonso, David Alonso, Albin Garcia, Jose Luis Alonso

Abstract:

Recently, a new kilovoltage radiotherapy unit, model Xstrahl 200, donated to INOR's Department of Radiotherapy (DR-INOR) in the framework of an IAEA technical cooperation project, has been commissioned. This unit is able to treat superficial and shallow-seated lesions, as it provides 8 discrete beam qualities, from 40 to 200 kV. As part of the patient-specific quality assurance program established at DR-INOR for external beam radiotherapy, it has been recommended to implement in vivo dose measurements (IVD), as they allow the effective detection of errors or failures in the radiotherapy process. For that purpose, a radio-photoluminescence (RPL) dosimetry system, model XXX, also donated to DR-INOR by the same IAEA project, has been studied and commissioned. The main dosimetric parameters of the RPL system, such as reproducibility, linearity, and field size influence, were assessed. In a similar way, the response of radiochromic EBT3 film was investigated for purposes of IVD. Both systems were calibrated in terms of entrance surface dose. Results of the dosimetric commissioning of RPL and EBT3 for IVD, and their pre-clinical implementation through end-to-end test cases, are presented. RPL dosimetry seems more suitable for hyper-fractionated schemes with larger fields and curved patient contours, such as those in chest wall irradiations, where the use of more than one dosimeter could be required. The radiochromic system involves smaller corrections with field size, but its sensitivity is lower; hence it is more adequate for hypo-fractionated treatments with smaller fields.
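
A small sketch of the calibration step common to both systems: a linear fit of dosimeter readings against delivered entrance surface dose, then conversion of an in vivo reading back to dose. The reading values and the linear response assumption are illustrative placeholders, not the commissioning data.

```python
# Linear calibration of dosimeter readings in terms of entrance surface dose.
import numpy as np

delivered_dose_gy = np.array([0.5, 1.0, 2.0, 4.0, 6.0])      # reference doses (Gy)
rpl_reading = np.array([0.52, 1.03, 2.08, 4.15, 6.31])       # dosimeter signal (a.u.)

slope, intercept = np.polyfit(rpl_reading, delivered_dose_gy, 1)

def reading_to_dose(reading):
    """Convert an in vivo reading to entrance surface dose (Gy)."""
    return slope * reading + intercept

measured = 2.9
print(f"in vivo entrance surface dose: {reading_to_dose(measured):.2f} Gy")
```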

Keywords: glass dosimetry, in vivo dosimetry, kilovotage radiotherapy, radiochromic dosimetry

Procedia PDF Downloads 381