Search results for: measurement accuracy
5495 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy
Authors: Wenhao Lan, Ning Li, Qiang Tong
Abstract:
To improve the registration accuracy of a source point cloud and a template point cloud when the initial relative deflection angle is large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, and a mirror symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed on the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is large, and reduces the sensitivity to the initial relative position between the source and template point clouds. The primary contribution of this paper is the use of PointNetLK to avoid the non-convexity of traditional point cloud registration and the use of the regularity of the OBB to avoid the local-optimum problem in the PointNetLK context.
Keywords: mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB
Procedia PDF Downloads 151
5494 Online Handwritten Character Recognition for South Indian Scripts Using Support Vector Machines
Authors: Steffy Maria Joseph, Abdu Rahiman V, Abdul Hameed K. M.
Abstract:
Online handwritten character recognition is a challenging field in artificial intelligence. The classification success rate of current techniques decreases when the dataset involves similarity and complexity in stroke styles, number of strokes, and variations in stroke characteristics. Malayalam is a complex South Indian language spoken by about 35 million people, especially in Kerala and the Lakshadweep islands. In this paper, we consider significant feature extraction for the similar stroke styles of Malayalam. The extracted feature set is also suitable for recognizing other handwritten South Indian languages such as Tamil, Telugu, and Kannada. A classification scheme based on support vector machines (SVM) is proposed to improve the accuracy in classification and recognition of online Malayalam handwritten characters. SVM classifiers are well suited to real-world applications. The contribution of various features towards recognition accuracy is analysed, and performance for different SVM kernels is also studied. A graphical user interface has been developed for reading and displaying the characters. Different writing styles are taken for each of the 44 alphabets. Various features are extracted and used for classification after preprocessing of the input data samples. The highest recognition accuracy of 97% is obtained experimentally with the best feature combination and a polynomial kernel in SVM.
Keywords: SVM, MATLAB, Malayalam, South Indian scripts, online handwritten character recognition
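The classification stage described here can be sketched with a standard SVM toolkit. The snippet below is illustrative only: the feature vectors, class count, and kernel settings are assumptions, not the authors' MATLAB implementation.

```python
# Minimal sketch (not the authors' code): comparing SVM kernels on
# fixed-length feature vectors extracted from handwritten characters.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(440, 64))      # placeholder features: 44 classes x 10 samples
y = np.repeat(np.arange(44), 10)    # one label per alphabet class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

for kernel in ("linear", "rbf", "poly"):          # compare kernels as in the study
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, degree=3, C=10, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print(kernel, "accuracy:", clf.score(X_te, y_te))
```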
Procedia PDF Downloads 576
5493 Difference Between Planning Target Volume (PTV) Based Slow-CT and Internal Target Volume (ITV) Based 4DCT Imaging Techniques in Stereotactic Body Radiotherapy for Lung Cancer: A Comparative Study
Authors: Madhumita Sahu, S. S. Tiwary
Abstract:
Radiotherapy of lung carcinoma has always been difficult and a matter of great concern. The significant tumor movement caused by non-rhythmic respiratory motion poses a great challenge for the treatment of lung cancer using ionizing radiation. The present study compares the accuracy of target volume measurement using Slow-CT and 4DCT imaging in SBRT for lung tumors. The samples were drawn from patients with lung cancer who underwent SBRT. Slow-CT and 4DCT images were acquired under free breathing for each patient. PTVs were delineated on the Slow-CT images, and ITVs were delineated on each of the 4DCT volumes. Volumetric and statistical analyses were performed for each patient by measuring the corresponding PTV and ITV volumes. The study showed that (1) the maximum deviation observed between Slow-CT-based PTV and 4DCT-based ITV is 248.58 cc, (2) the minimum deviation is 5.22 cc, and (3) the mean deviation is 63.21 cc. The present study concludes that the irradiated volume (ITV) with 4DCT is smaller than the PTV with Slow-CT. A more precise treatment could therefore be delivered with 4DCT imaging, sparing a mean of 63.21 cc of body volume.
Keywords: CT imaging, 4DCT imaging, lung cancer, statistical analysis
Procedia PDF Downloads 25
5492 Urban Design via Estimation Model for Traffic Index of Cities Based on an Artificial Intelligence
Authors: Seyed Sobhan Alvani, Mohammad Gohari
Abstract:
As cities develop and populations grow, traffic congestion has become a critical problem. In response to this crisis, urban designers try to present solutions to reduce the difficulty. On the other hand, a model with good predictive accuracy is essential for providing such solutions. The current study presents an artificial-intelligence-based model that can predict the traffic index from city population, growth rate, and area. The accuracy of the model was evaluated and found to be acceptable, at around 90%. Thus, urban designers and planners can employ it to predict future traffic indices and provide strategies accordingly.
Keywords: traffic index, population growth rate, cities wideness, artificial neural network
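As a rough illustration of the idea (not the authors' model), the sketch below fits a small feed-forward network to the three inputs named in the abstract; the data and the relation used to generate them are synthetic placeholders.

```python
# Illustrative sketch only: a small neural network mapping population,
# growth rate, and area to a traffic index; the dataset is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
population = rng.uniform(5e4, 5e6, 300)
growth_rate = rng.uniform(0.0, 0.05, 300)
area_km2 = rng.uniform(20, 2000, 300)
X = np.column_stack([population, growth_rate, area_km2])
traffic_index = 0.6 * population / area_km2 + 1e4 * growth_rate   # assumed toy relation

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1))
X_tr, X_te, y_tr, y_te = train_test_split(X, traffic_index, random_state=1)
model.fit(X_tr, y_tr)
print("R^2 on held-out cities:", model.score(X_te, y_te))
```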
Procedia PDF Downloads 44
5491 The Impact of Grammatical Differences on English-Mandarin Chinese Simultaneous Interpreting
Authors: Miao Sabrina Wang
Abstract:
This paper examines the impact of grammatical differences on simultaneous interpreting from English into Mandarin Chinese by drawing upon an empirical study of professional and student interpreters. The research focuses on the effects of three grammatical categories, including passives, adverbial components and noun phrases, on simultaneous interpreting. For each category, interpretations of instances in which the grammatical structures are the same across the two languages are compared with interpretations of instances in which the grammatical structures differ across the two languages, in terms of content accuracy and delivery appropriateness. The results indicate that grammatical differences have a significant impact on the interpreting performance of both professionals and students.
Keywords: content accuracy, delivery appropriateness, grammatical differences, simultaneous interpreting
Procedia PDF Downloads 542
5490 Assessing Perinatal Mental Illness during the COVID-19 Pandemic: A Review of Measurement Tools
Authors: Mya Achike
Abstract:
Background and Significance: Perinatal mental illness covers a wide range of conditions and has a huge influence on maternal-child health. Issues and challenges with perinatal mental health have been associated with poor pregnancy, birth, and postpartum outcomes. It is estimated that one out of five new and expectant mothers experiences some degree of perinatal mental illness, which makes this a highly significant health outcome. Certain factors increase the maternal risk for mental illness. Challenges related to poverty, migration, extreme stress, exposure to violence, emergency and conflict situations, natural disasters, and pandemics can exacerbate mental health disorders. It is widely expected that perinatal mental health is being negatively affected during the present COVID-19 pandemic. Methods: A review of studies that reported a measurement tool to assess perinatal mental health outcomes during the COVID-19 pandemic was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. PubMed, CINAHL, and Google Scholar were used to search for peer-reviewed studies published after late 2019, in accordance with the emergence of the virus. The search resulted in the inclusion of ten studies. Approach to measuring the health outcome: The main approach to measuring perinatal mental illness is the use of self-administered, validated questionnaires, usually in the clinical setting. Summary: Widespread use of these tools has afforded the clinical and research communities the ability to identify and support women who may be suffering from mental illness during a pandemic. More research is needed to validate tools in other vulnerable perinatal populations.
Keywords: mental health during covid, perinatal mental health, perinatal mental health measurement tools, perinatal mental health tools
Procedia PDF Downloads 135
5489 Improving Short-Term Forecast of Solar Irradiance
Authors: Kwa-Sur Tam, Byung O. Kang
Abstract:
By using different ranges of the daily sky clearness index defined in this paper, any day can be classified as a clear sky day, a partly cloudy day, or a cloudy day. This paper demonstrates how short-term forecasting of solar irradiation can be improved by taking into consideration the type of day so defined. The source of the day-type dependency has been identified. Forecasting methods that take day type into consideration have been developed, and their efficacy has been established. While all methods that implement some form of adjustment to the cloud cover forecast provided by the U.S. National Weather Service improve accuracy, methods that incorporate day-type dependency provide even further improvement in forecast accuracy.
Keywords: day types, forecast methods, National Weather Service, sky cover, solar energy
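A minimal sketch of the day-typing step follows; the clearness-index thresholds used here are illustrative assumptions, since the paper defines its own ranges.

```python
# Hedged sketch: classify a day by its daily sky clearness index.
# The 0.3 and 0.65 cut-offs are placeholders, not the paper's values.
def classify_day(clearness_index: float,
                 cloudy_max: float = 0.3,
                 clear_min: float = 0.65) -> str:
    """Return 'cloudy', 'partly cloudy', or 'clear' for a daily clearness index."""
    if clearness_index < cloudy_max:
        return "cloudy"
    if clearness_index < clear_min:
        return "partly cloudy"
    return "clear"

# Example: each day type would then select its own forecast adjustment.
for kt in (0.18, 0.48, 0.74):
    print(kt, "->", classify_day(kt))
```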
Procedia PDF Downloads 466
5488 GRCNN: Graph Recognition Convolutional Neural Network for Synthesizing Programs from Flow Charts
Authors: Lin Cheng, Zijiang Yang
Abstract:
Program synthesis is the task of automatically generating programs from a user specification. In this paper, we present a framework that synthesizes programs from flow charts, which serve as an accurate and intuitive specification. To do so, we propose a deep neural network called GRCNN that recognizes the graph structure of a flow chart from its image. GRCNN is trained end-to-end and predicts the edge and node information of the flow chart simultaneously. Experiments show that the accuracy rate for synthesizing a program is 66.4%, and the accuracy rates for recognizing edges and nodes are 94.1% and 67.9%, respectively. On average, it takes about 60 milliseconds to synthesize a program.
Keywords: program synthesis, flow chart, specification, graph recognition, CNN
Procedia PDF Downloads 120
5487 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. In particular, a linear system with a single input and single output can be modeled with a shallow neural network, and gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood estimation under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient descent with momentum contribute to faster convergence and enhance model capability. Finally, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent
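The two fitting routes contrasted here can be sketched for the linear single-input single-output case. This is a toy illustration under Gaussian noise, not the paper's implementation; the learning rate, momentum, and batch size are assumptions.

```python
# Sketch: fit y ≈ X w by (a) the normal equation and
# (b) mini-batch gradient descent with momentum.
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 3))])   # bias + lagged inputs
w_true = np.array([0.5, 1.2, -0.7, 0.3])
y = X @ w_true + 0.05 * rng.normal(size=500)                     # Gaussian measurement noise

# (a) Normal equation: closed-form least squares (maximum likelihood under Gaussian noise).
w_ne = np.linalg.solve(X.T @ X, X.T @ y)

# (b) Mini-batch gradient descent with momentum.
w, v = np.zeros(4), np.zeros(4)
lr, beta, batch = 0.05, 0.9, 32
for epoch in range(200):
    idx = rng.permutation(len(y))
    for start in range(0, len(y), batch):
        b = idx[start:start + batch]
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
        v = beta * v + grad
        w -= lr * v

print("normal equation:", np.round(w_ne, 3))
print("momentum GD:   ", np.round(w, 3))
```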
Procedia PDF Downloads 128
5486 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
Industrial applications to classify gamma rays and neutron events are investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles seek patterns and relationships that represent the actual spectrum energy in a low-dimensional space, and increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by a neural network is nonlinear and involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
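The spectrogram preprocessing stage can be sketched with a windowed short-time Fourier transform. The snippet below only illustrates that step; the pulse shape, sampling rate, window, and noise level are assumptions rather than the simulated Geant4 data.

```python
# Illustrative sketch: turn a simulated detector pulse into spectrogram features.
import numpy as np
from scipy.signal import stft

fs = 1e8                                            # assumed 100 MS/s digitizer rate
t = np.arange(0, 5e-6, 1.0 / fs)
pulse = np.exp(-t / 5e-7) - np.exp(-t / 5e-8)       # toy scintillation pulse
pulse += np.random.default_rng(3).normal(0, 0.01, t.size)   # simulated readout noise

f, seg_t, Z = stft(pulse, fs=fs, window="hann", nperseg=64, noverlap=48)
features = np.abs(Z).ravel()                        # flattened spectrogram as a training vector
print("spectrogram shape:", Z.shape, "-> feature length:", features.size)
```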
Procedia PDF Downloads 151
5485 A Study on Improvement of Straightness of Preform Pulling Process of Hollow Pipe by Finite Element Analysis Method
Authors: Yeon-Jong Jeong, Jun-Hong Park, Hyuk Choi
Abstract:
In this study, the design of the intermediate die in multipass drawing was investigated. Drawing has been studied continuously because of its advantages of better dimensional accuracy, smooth surfaces, and improved mechanical properties. Among drawing processes, multipass drawing, a method of realizing complicated shapes, is discussed here. The most important considerations in multipass drawing are dimensional accuracy and simplification of the process. To accomplish this, multistage shape drawing was performed using various intermediate die shape designs, and finite element analysis was carried out.
Keywords: FEM (Finite Element Method), multipass drawing, intermediate die, hollow pipe
Procedia PDF Downloads 316
5484 Spaces of Interpretation: Personal Space
Authors: Yehuda Roth
Abstract:
In quantum theory, a system’s time evolution is predictable unless an observer performs a measurement, since the measurement process can randomize the system. This randomness appears when the measuring device does not accurately describe the measured item, i.e., when the states characterizing the measuring device appear as a superposition of those being measured. When such a mismatch occurs, the measured data randomly collapse into a single eigenstate of the measuring device. This scenario resembles the interpretation process, in which the observer does not experience an objective reality but interprets it based on preliminary descriptions initially ingrained in his or her mind. This distinction is the motivation for the present study, in which the collapse scenario is regarded as part of the observer's interpretation process. By adopting the formalism of quantum theory, we present a complete mathematical approach that describes the interpretation process. We demonstrate this process by applying the proposed interpretation formalism to the ambiguous image "My wife and mother-in-law" to identify whether the woman in the picture is young or old.
Keywords: quantum-like interpretation, ambiguous image, determination, quantum-like collapse, classified representation
Procedia PDF Downloads 105
5483 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
When using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, namely AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch value, batch size, and convolutional filter size against input image size. A set of experiments was conducted to quantify the effectiveness of the selected parameters using two approaches, pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave insight into the relationship between the size of the convolutional filters and the image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments and showed efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dependent on the dataset. For the batch size evaluation, a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 reduces the accuracy rate at the 11th epoch to 86.5%, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.4286%. The final experiment, on image size, shows a dependency in the accuracy improvement; however, the performance gain was found to be expensive. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
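The hyperparameter sweep described above amounts to enumerating combinations of epochs, batch size, kernel size, and image size. The sketch below only builds that grid; the training routine it would feed (`train_alexnet`) is a hypothetical placeholder, not part of the study's code.

```python
# Minimal sketch of the evaluation protocol: enumerate the configurations
# that would each be handed to an AlexNet-style training run.
from itertools import product

epochs_list  = [1, 11]
batch_sizes  = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes  = [64, 96, 128, 180, 224]

configs = [(e, b, k, s)
           for e, b, k, s in product(epochs_list, batch_sizes, kernel_sizes, image_sizes)
           if k < s]                  # a filter cannot exceed the input image
print(len(configs), "training runs to schedule")
# Each config would then be passed to a (hypothetical) training routine, e.g.
# accuracy = train_alexnet(epochs=e, batch_size=b, kernel_size=k, image_size=s)
```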
Procedia PDF Downloads 170
5482 A Comprehensive Review of Adaptive Building Energy Management Systems Based on Users’ Feedback
Authors: P. Nafisi Poor, P. Javid
Abstract:
Over the past few years, the idea of adaptive buildings and, specifically, adaptive building energy management systems (ABEMS) has become popular. Well-performed energy management creates a balance between energy consumption and user comfort; therefore, in new energy management models, efficient energy consumption is not the sole factor, and the user's comfort is also considered in the calculations. One of the main ways of measuring this factor is by analyzing user feedback on the conditions to understand whether users are satisfied with them or not. This paper provides a comprehensive review of recent approaches to energy management systems based on users' feedback and then compares them in terms of efficiency and accuracy, to understand which approaches were more accurate and which ones minimized energy consumption more efficiently while maintaining users' comfort. The highest accuracy rate among the presented works was 95% in determining satisfaction, and up to 51.08% energy savings can be achieved without disturbing users' comfort. Considering the growing interest in designing and developing adaptive buildings, these studies can support diverse inquiries on this subject and can be used as a resource for research toward efficient energy consumption that maintains the comfort of users.
Keywords: adaptive buildings, energy efficiency, intelligent buildings, user comfortability
Procedia PDF Downloads 134
5481 Load Forecasting in Short-Term Including Meteorological Variables for Balearic Islands Paper
Authors: Carolina Senabre, Sergio Valero, Miguel Lopez, Antonio Gabaldon
Abstract:
This paper presents a comprehensive survey of short-term load forecasting (STLF). Since the behavior of consumers and producers continues to change as new technologies and new policies become available, STLF is an ongoing process. The results of a research study for the Spanish Transmission System Operator (REE) are presented in this paper. The improvement of forecasting accuracy in the Balearic Islands through the introduction of meteorological variables, such as temperature, to reduce forecasting error is presented. The variables analyzed for forecasting, in terms of overall accuracy, are cloudiness, solar radiation, and wind velocity. The types of days to be considered in the research have also been analyzed.
Keywords: short-term load forecasting, power demand, neural networks, load forecasting
Procedia PDF Downloads 191
5480 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion
Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao
Abstract:
Due to the low spatial resolution of MODIS data, the accuracy of extracting small patches in landscapes with a high degree of fragmentation is greatly limited. To this end, this study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, for decision-level fusion. Considering the importance of land heterogeneity in the fusion process, it is incorporated as a weighting factor used to linearly weight the Landsat classification result and the MODIS classification result. Three levels were used to complete the data fusion process: the pixel level of the MODIS data, the pixel level of the Landsat data, and an object level that connects these two levels. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that classification accuracy was improved over the single-data-source classification results in terms of overall accuracy. The method was also compared with two-level combination results and a weighted-sum decision-rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
Keywords: image classification, decision fusion, multi-temporal, remote sensing
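The heterogeneity-weighted combination can be illustrated with a short numerical sketch; the class scores, the heterogeneity value, and the specific weighting rule below are assumptions for illustration, not the paper's calibrated weights.

```python
# Sketch: linearly weight per-class scores from Landsat and MODIS
# classifications, with the weight driven by local land heterogeneity.
import numpy as np

classes = ["crop", "forest", "water"]
landsat_scores = np.array([0.55, 0.35, 0.10])   # per-class confidence at one object
modis_scores   = np.array([0.30, 0.60, 0.10])
heterogeneity  = 0.8                            # 0 = homogeneous, 1 = highly fragmented

# In fragmented landscapes, trust the finer-resolution Landsat result more.
w_landsat = 0.5 + 0.5 * heterogeneity
fused = w_landsat * landsat_scores + (1.0 - w_landsat) * modis_scores
print("fused label:", classes[int(np.argmax(fused))], fused.round(3))
```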
Procedia PDF Downloads 125
5479 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain
Authors: Engy S. El-Kayal, Mohamed M. S. Arafa
Abstract:
There are various causes of ankle pain (AP), both traumatic and non-traumatic, and various imaging techniques are available for its assessment. MRI is considered the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence the ability to visualize the small, complex anatomical structures around the ankle. However, the high cost and relatively limited availability of MRI systems, as well as the relatively long duration of the examination, are all considered disadvantages of MRI. Therefore, there is a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. HRU has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low cost, and readily available. US can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients underwent real-time HRU and MRI of the affected ankle, and the results of both techniques were compared to surgical and arthroscopic findings. All patients were examined according to a defined protocol covering tendon tears or tendinitis, muscle tears, masses or fluid collections, ligament sprains or tears, inflammation or fluid effusion within the joint or bursa, bone and cartilage lesions, erosions, and osteophytes. Analysis of the results showed that the mean age of the patients was 38 years; the study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting causes of AP was 85%, while the accuracy of MRI was 87.5%. In conclusion, HRU and MRI are two complementary investigative tools, with the former used as a primary tool of investigation and the latter used to confirm the diagnosis and the extent of the lesion, especially when surgical intervention is planned.
Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)
Procedia PDF Downloads 193
5478 Alternating Current Photovoltaic Module Model
Authors: Irtaza M. Syed, Kaamran Raahemifar
Abstract:
This paper presents the modeling of an Alternating Current (AC) Photovoltaic (PV) module using Matlab/Simulink. The proposed AC-PV module model is simple, realistic, and application oriented. The model is derived at the module level, rather than the cell level, directly from the information provided by the manufacturer's datasheet. The DC-PV module, MPPT control, BC, VSI, and LC filter are all treated as a single unit. The model accounts for variations in both irradiance and temperature. The proposed AC-PV module model is simulated, and the results are compared with the datasheet projected numbers to validate the model's accuracy and effectiveness. Implementation and results demonstrate the simplicity and accuracy, as well as the reliability, of the model.
Keywords: PV modeling, AC PV module, datasheet, VI curves, irradiance, temperature, MPPT, Matlab/Simulink
Procedia PDF Downloads 575
5477 Gender Recognition with Deep Belief Networks
Authors: Xiaoqi Jia, Qing Zhu, Hao Zhang, Su Yang
Abstract:
A gender recognition system is able to tell the gender of a given person from a few frontal facial images. An effective gender recognition approach can improve the performance of many other applications, including security monitoring, human-computer interaction, and image or video retrieval. In this paper, we present an effective method for the gender classification task on frontal facial images based on deep belief networks (DBNs), which can pre-train the model and slightly improve accuracy. Our experiments have shown that the pre-training method with DBNs for the gender classification task is feasible and achieves a small improvement in accuracy on the FERET and CAS-PEAL-R1 facial datasets.
Keywords: gender recognition, deep belief networks, semi-supervised learning, greedy layer-wise RBMs
Procedia PDF Downloads 455
5476 Defence Ethics: A Performance Measurement Framework for the Defence Ethics Program
Authors: Allyson Dale, Max Hlywa
Abstract:
The Canadian public expects the highest moral standards from Canadian Armed Forces (CAF) members and Department of National Defence (DND) employees. The Chief, Professional Conduct and Culture (CPCC) stood up in April 2021 with the mission of ensuring that the defence culture and members' conduct are aligned with the ethical principles and values that the organization aspires towards. The Defence Ethics Program (DEP), which stood up in 1997, is a values-based ethics program for individuals and organizations within the DND/CAF and now falls under CPCC. The DEP is divided into five key functional areas: policy, communications, collaboration, training and education, and advice and guidance. The main focus of the DEP is to foster an ethical culture within defence so that members and organizations perform to the highest ethical standards. The measurement of organizational ethics is often complex and challenging. In order to monitor whether the DEP is achieving its intended outcomes, a performance measurement framework (PMF) was developed using the Director General Military Personnel Research and Analysis (DGMPRA) PMF development process. This evidence-based process draws on subject-matter expertise from the defence team. The goal of this presentation is to describe each stage of the DGMPRA PMF development process and to present and discuss the products of the DEP PMF (e.g., the logic model). Specifically, a strategic framework was first developed to provide a high-level overview of the strategic objectives, mission, and vision of the DEP. Next, Key Performance Questions were created based on the objectives in the strategic framework. A logic model detailing the activities, outputs (what is produced by the program activities), and intended outcomes of the program was developed to demonstrate how the program works. Finally, Key Performance Indicators were developed, based on both the intended outcomes in the logic model and the Key Performance Questions, in order to monitor program effectiveness. The Key Performance Indicators measure aspects of organizational ethics such as ethical conduct and decision-making, DEP collaborations, and knowledge and awareness of the Defence Ethics Code, while leveraging ethics-related items from multiple DGMPRA surveys where appropriate.
Keywords: defence ethics, ethical culture, organizational performance, performance measurement framework
Procedia PDF Downloads 107
5475 Performance Evaluation of the CareSTART S1 Analyzer for Quantitative Point-Of-Care Measurement of Glucose-6-Phosphate Dehydrogenase Activity
Authors: Haiyoung Jung, Mi Joung Leem, Sun Hwa Lee
Abstract:
Background & Objective: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a genetic abnormality that results in an inadequate amount of G6PD, leading to increased susceptibility of red blood cells to reactive oxygen species and hemolysis. The present study aimed to evaluate the careSTART™ S1 analyzer for measuring the ratio of G6PD activity to hemoglobin (Hb). Methods: Precision for G6PD activity and hemoglobin measurement was evaluated using control materials at two levels in five repeated runs per day for five days. The analytical performance of the careSTART™ S1 analyzer was compared with spectrophotometry in 40 patient samples. Reference ranges suggested by the manufacturer were validated in 20 healthy males and 20 healthy females. Results: The careSTART™ S1 analyzer demonstrated precision of 6.0% for the low-level control (14~45 U/dL) and 2.7% for the high-level control (60~90 U/dL) in G6PD activity, and 1.4% in hemoglobin (7.9~16.3 u/g Hb). A comparison study of the G6PD-to-Hb ratio between the careSTART™ S1 analyzer and spectrophotometry showed an average difference of 29.1%, with a positive bias for the careSTART™ S1 analyzer. All normal samples from the healthy population validated the suggested reference ranges for males (≥2.19 U/g Hb) and females (≥5.83 U/g Hb). Conclusion: The careSTART™ S1 analyzer demonstrated good analytical performance and can replace the current spectrophotometric measurement of G6PD enzyme activity. From the standpoint of clinical laboratory management, it is a reasonable option as a point-of-care analyzer, with minimal handling of samples and reagents and automatic calculation of the ratio of measured G6PD activity to Hb concentration, minimizing the clerical errors involved in manual calculation.
Keywords: POCT, G6PD, performance evaluation, careSTART
Procedia PDF Downloads 65
5474 Mending Broken Fences Policing: Developing the Intelligence-Led/Community-Based Policing Model (IP-CP) and Quality/Quantity/Crime (QQC) Model
Authors: Anil Anand
Abstract:
Despite enormous strides made during the past decade, particularly with the adoption and expansion of community policing, there remains much that police leaders can do to improve police-public relations. The urgency is particularly evident in cities across the United States and Europe, where an increasing number of police interactions over the past few years have ignited large, sometimes even national, protests against police policy and strategy, highlighting a gap between what police leaders feel they have achieved in terms of public satisfaction, support, and legitimacy and the perception of bias among many marginalized communities. The decision on which policing strategy is chosen over another, how many resources are allocated, and how strenuously the policy is applied resides primarily with the police and the units and subunits tasked with its enforcement. The scope and opportunity for police officers to influence social attitudes and social policy are important elements that cannot be overstated. How do police leaders, for instance, decide when to apply one strategy, say community-based policing, over another, like intelligence-led policing? How do police leaders measure performance and success? Should these measures favour quantitative indicators over qualitative ones, or should the preference be based on some other criteria? And how do police leaders define, allow, and control discretionary decision-making? Mending Broken Fences Policing provides police and security services leaders with a model based on social cohesion that incorporates intelligence-led and community policing (IP-CP), supplemented by a quality/quantity/crime (QQC) framework, to provide a four-step process for the articulable application of police intervention, performance measurement, and application of discretion.
Keywords: social cohesion, quantitative performance measurement, qualitative performance measurement, sustainable leadership
Procedia PDF Downloads 297
5473 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam
Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen
Abstract:
In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly to forecast dam safety. The accuracy of the forecasts is further improved by applying machine learning methods to the data analysis. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) were applied to three machine learning algorithms, Gaussian processes, multi-layer perceptron (MLP) neural networks, and the M5-Rules algorithm, for modelling and forecasting the horizontal displacement of the dam. The database used in this research was built by collecting time series of data from 2006 to 2021 and was divided into two parts: a training dataset and a validating dataset. The final results show that all three algorithms have high performance for both training and model validation, with the MLP being the best model. Their usability was further investigated by comparison with benchmark models created by multi-linear regression. The results show that the performance obtained from the GP model, the MLP model, and the M5-Rules model is much better; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.
Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks
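As a rough illustration of one of the three compared approaches, the sketch below fits a Gaussian-process regressor to a synthetic displacement series; the kernel choice and the seasonal-plus-trend series are assumptions, not the Ialy monitoring data.

```python
# Minimal sketch (not the study's pipeline): GP regression of a
# horizontal-displacement time series with a held-out forecasting segment.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(0, 120, 1.0).reshape(-1, 1)                          # months of monitoring
disp = 2.0 * np.sin(2 * np.pi * t.ravel() / 12) + 0.01 * t.ravel() # seasonal + slow trend
disp += np.random.default_rng(4).normal(0, 0.2, t.shape[0])

kernel = 1.0 * RBF(length_scale=6.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t[:100], disp[:100])

mean, std = gp.predict(t[100:], return_std=True)                   # forecast the last 20 months
print("mean abs. forecast error:", np.mean(np.abs(mean - disp[100:])).round(3))
```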
Procedia PDF Downloads 213
5472 Quantitative Assessment of Soft Tissues by Statistical Analysis of Ultrasound Backscattered Signals
Authors: Da-Ming Huang, Ya-Ting Tsai, Shyh-Hau Wang
Abstract:
Ultrasound signals backscattered from soft tissues mainly depend on the size, density, distribution, and other elastic properties of the scatterers in the interrogated sample volume. Quantitative analysis of ultrasonic backscattering is frequently implemented using a statistical approach, because backscattered signals tend to have the nature of a random variable. Thus, statistical analysis, such as Nakagami statistics, has been applied to characterize the density and distribution of the scatterers in a sample. Yet, the accuracy of the statistical analysis can readily be affected by the received signals, which are associated with the nature of the incident ultrasound wave and the acoustical properties of the sample. In the present study, efforts were therefore made to explore the effects of the ultrasound operational mode and the attenuation of biological tissue on the estimation of the corresponding Nakagami statistical parameter (m parameter). In vitro measurements were performed on healthy and pathological fibrotic porcine livers using different single-element ultrasound transducers and duty cycles of the incident tone burst, ranging from 3.5 to 7.5 MHz and from 10 to 50%, respectively. The results demonstrated that the estimated m parameter tends to be sensitively affected by the ultrasound operational mode as well as by tissue attenuation. Healthy and pathological tissues may be characterized quantitatively by the m parameter under fixed measurement conditions and proper calibration.
Keywords: ultrasound backscattering, statistical analysis, operational mode, attenuation
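For reference, the Nakagami m parameter is commonly estimated from the backscattered envelope R by the inverse-normalized-variance (moment) estimator, m = E[R²]² / Var(R²). The sketch below applies it to a simulated envelope and is not the study's measurement pipeline.

```python
# Hedged sketch: moment-based estimation of the Nakagami m parameter
# from a simulated (not measured) backscattered envelope.
import numpy as np

rng = np.random.default_rng(5)
m_true, omega = 1.2, 1.0
# A Nakagami-m envelope can be simulated as the square root of a gamma variate.
envelope = np.sqrt(rng.gamma(shape=m_true, scale=omega / m_true, size=20000))

r2 = envelope ** 2
m_hat = np.mean(r2) ** 2 / np.var(r2)      # inverse-normalized-variance estimator
print("estimated m:", round(m_hat, 3), "(true m =", m_true, ")")
```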
Procedia PDF Downloads 324
5471 Application Research of Stilbene Crystal for the Measurement of Accelerator Neutron Sources
Authors: Zhao Kuo, Chen Liang, Zhang Zhongbing, Ruan Jinlu, He Shiyi, Xu Mengxuan
Abstract:
Stilbene, C₁₄H₁₂, is well known as one of the most useful organic scintillators for the pulse shape discrimination (PSD) technique because of its good scintillation properties. An on-line acquisition system and an off-line acquisition system were developed, the former with several CAMAC standard plug-ins, NIM plug-ins, and a neutron/γ discriminating plug-in named 2160A, and the latter with a digital oscilloscope with a high sampling rate, using stilbene crystals coupled to photomultiplier tube (PMT) detectors for accelerator neutron source measurements carried out at the China Institute of Atomic Energy. Pulse amplitude spectra and charge amplitude spectra were recorded in real time after good neutron/γ discrimination, whose best PSD figures of merit (FoMs) are 1.756 for the D-D accelerator neutron source and 1.393 for the D-T accelerator neutron source. The probability of neutron events among total events was 80%, and the neutron detection efficiency was 5.21% for the D-D accelerator neutron source; these values were 50% and 1.44% for the D-T accelerator neutron source after subtracting the scattering background observed by the on-line acquisition system. Pulse waveform signals were acquired by the off-line acquisition system at random while the on-line acquisition system was working. The PSD FoMs obtained by the off-line acquisition system were 2.158 for the D-D accelerator neutron source and 1.802 for the D-T accelerator neutron source after off-line waveform digitization processing with the charge integration method, for just 1000 pulses. In addition, the probabilities of neutron events among total events obtained by the off-line acquisition system matched very well with those of the on-line acquisition system. The pulse information recorded by the off-line acquisition system could be used repeatedly to adjust the parameters or methods of the PSD research and to obtain neutron charge amplitude spectra or pulse amplitude spectra after digital analysis with a limited number of pulses. The off-line acquisition system showed equivalent or better measurement performance compared with the on-line system with a limited number of pulses, which indicates a feasible method based on stilbene crystal detectors for the measurement of prompt neutrons from neutron sources, like prompt accelerator neutron sources, which emit a number of neutrons in a short time.
Keywords: stilbene crystal, accelerator neutron source, neutron/γ discrimination, figure-of-merits, CAMAC, waveform digitization
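The off-line processing mentioned here, charge integration followed by a figure-of-merit calculation, can be sketched as below; the pulse shapes, integration windows, and Gaussian FWHM approximation are illustrative assumptions, not the measured stilbene waveforms.

```python
# Illustrative sketch: charge-integration PSD ratio and figure of merit (FoM)
# on toy pulses with a slower tail for "neutron-like" events.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(0, 400, 2.0)                     # ns, assumed 2 ns sampling

def toy_pulse(tail_fraction):
    fast = np.exp(-t / 20.0)
    slow = np.exp(-t / 150.0)
    return (1 - tail_fraction) * fast + tail_fraction * slow + rng.normal(0, 0.002, t.size)

def psd_ratio(pulse, tail_start=40):           # tail charge / total charge
    return pulse[t >= tail_start].sum() / pulse.sum()

gammas   = np.array([psd_ratio(toy_pulse(0.05)) for _ in range(2000)])
neutrons = np.array([psd_ratio(toy_pulse(0.20)) for _ in range(2000)])

fwhm = lambda x: 2.355 * x.std()               # Gaussian approximation of the peak width
fom = abs(neutrons.mean() - gammas.mean()) / (fwhm(neutrons) + fwhm(gammas))
print("figure of merit:", round(fom, 2))
```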
Procedia PDF Downloads 187
5470 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software
Authors: Junior Akunzi
Abstract:
In the matter of patient treatment planning quality assurance in 3D conformal radiotherapy (3D-CRT) and volumetric arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc for 3D-CRT and RapidArc treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPSs), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs, according to the DICOM RT radiotherapy protocol, to RadCalc and to the linac via Mosaiq (version 2.5). Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created on patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc. Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ±0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs, and the RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ±1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ±1.1%). The independent MU verification calculation software RadCalc has been validated for clinical use for both 3D-CRT and RapidArc techniques. The perspectives of this project include the validation of RadCalc for the Tomotherapy machine installed at the Centre Antoine Lacassagne.
Keywords: 3D conformational radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance
Procedia PDF Downloads 216
5469 Analysis of the Acoustic Performance of Vertical Internal Seals with PET Wool According to NBR 15.575-4 in the Green Towers Building, DF
Authors: Lucas Aerre, Wallesson Faria, Roberto Pimentel, Juliana Santos
Abstract:
An extremely disturbing and irritating element in the lives of people and organizations is noise, whose consequences are strongly connected with human health as well as with financial and economic aspects. In order to improve the efficiency of buildings in Brazil in general, a performance standard, NBR 15.575, was created, in which all buildings are viewed in a more systemic and particular way while following the requirements of the standard. The acoustic performance of these buildings is one such requirement. Based on this, the present work was developed with the objective of evaluating, through acoustic measurements, the acoustic performance of vertical internal seals subjected to airborne noise in a building in the city of Brasilia, DF. A short theoretical basis is given, and then the measurement procedures are described using the control method established by the standard, with the results evaluated according to its parameters. The measurement performed between rooms of the same unit presented a standardized sound pressure level difference (DnT,w) equal to 40 dB, thus being classified within the minimum performance required by the standard in question.
Keywords: airborne noise, performance standard, soundproofing, vertical seal
Procedia PDF Downloads 298
5468 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation
Authors: Arian Hosseini, Mahmudul Hasan
Abstract:
To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy for classification scenarios: we argue that transforming a single, large, monolithic deep model into a verification-based step ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can lead to predictions with higher accuracy.
Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing
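The "think small, think many" idea can be sketched with an off-the-shelf soft-voting ensemble of small classifiers; the models, features, and dataset below are generic placeholders, not the paper's color-feature networks or its verification-based cascade.

```python
# Generic sketch: several small models combined by soft voting
# instead of one large monolithic classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

small_models = [
    ("lr", LogisticRegression(max_iter=500)),
    ("tree", DecisionTreeClassifier(max_depth=4)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(small_models, voting="soft").fit(X_tr, y_tr)
print("ensemble accuracy:", round(ensemble.score(X_te, y_te), 3))
```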
Procedia PDF Downloads 55
5467 Temperament and Psychopathology in Children of Patients Suffering from Schizophrenia
Authors: Rushi Naaz, Diksha Suchdeva
Abstract:
Background: Temperament is a very important aspect of functioning that needs to be understood in children of patients suffering from schizophrenia. The children of parents with mental disorders have a substantially increased risk of psychiatric illness and may exhibit a range of problems, from minor variations in temperament and adjustment to manifest psychiatric disorder. Method: A case-control study was conducted to examine the temperament characteristics and psychopathology of children of patients suffering from schizophrenia compared with those of healthy controls. Both groups were evaluated on the Temperament Measurement Schedule and the Childhood Psychopathology Measurement Schedule. Results: The results showed that children of patients suffering from schizophrenia were more withdrawing, less adaptable, less sociable, and had a lower activity level than children of healthy parents. However, no significant difference was found on the measure of psychopathology. Conclusion: Since temperament can be identified at an early age, children at risk for the disorder could be identified early enough for possible primary intervention.
Keywords: children, childhood psychopathology, parental psychopathology, psychiatric disorders, schizophrenia, temperament
Procedia PDF Downloads 372
5466 High Resolution Image Generation Algorithm for Archaeology Drawings
Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu
Abstract:
Aiming at the problems of low accuracy and susceptibility to cultural relic deterioration in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawing generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the high-resolution image generation backbone network, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, in which the semantic translation branch extracts semantic features from orthophotographs of cultural relics and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to generate high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.
Keywords: archaeology drawings, digital heritage, image generation, deep learning
Procedia PDF Downloads 60