Search results for: diagnostic error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2905

775 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network

Authors: Yasaman Sanayei, Alireza Bahiraie

Abstract:

This paper presents a systematic methodology based on the application of artificial neural networks (ANNs) to the sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology that is especially suited for nutrient removal. Removing reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R² > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of the dye-containing wastewater treated by the SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with constant, predefined stage durations, which results in low-efficiency operation. Data obtained from the on-line electronic sensors installed in the SBR and from the quality control laboratory analysis were used to develop the optimal architecture of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
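
As a concrete illustration of the training step described above, the sketch below (an illustration only, not the authors' code; the synthetic data, scaling, and network size are assumptions) fits a one-hidden-layer network with the Levenberg-Marquardt algorithm via SciPy and reports MAE and R²:

```python
# Minimal sketch: fit a tiny one-hidden-layer network with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 3))   # inputs: influent COD, MLVSS, reaction time (scaled)
y = 0.6 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * X[:, 2] + 0.01 * rng.normal(size=50)

n_hidden, n_in = 4, X.shape[1]

def unpack(w):
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
    W2, b2 = w[-n_hidden - 1:-1], w[-1]
    return W1, b1, W2, b2

def residuals(w):
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1 + b1)      # hidden-layer activations
    pred = hidden @ W2 + b2            # linear output layer
    return pred - y                    # LM minimizes the sum of squared residuals

w0 = 0.1 * rng.normal(size=n_in * n_hidden + 2 * n_hidden + 1)
fit = least_squares(residuals, w0, method="lm")   # Levenberg-Marquardt
pred = residuals(fit.x) + y
print("MAE:", np.mean(np.abs(pred - y)))
print("R2 :", 1 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2))
```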

Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis

Procedia PDF Downloads 400
774 Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument

Authors: Boyun Lee

Abstract:

This study discusses whether the anthropomorphism that some scientists tend to use in cross-species comparisons can be justified epistemologically, especially in the primate mind-reading debate. Concretely, this study critically analyzes Elliott Sober’s argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis which states that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error and regard choosing an anthropomorphic hypothesis like MRH without any definite evidence as invalid, Sober argues that anthropomorphism is supported by cladistic parsimony, which suggests choosing the simplest hypothesis postulating the minimum number of evolutionary changes, and can therefore be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach’s theorem, which Sober uses in the process of showing that MRH has a higher likelihood than its competing hypothesis, the behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one of the possible scenarios of MRH, and even setting this problem aside, it is difficult to prove that the probability that nonhuman primate species and humans share the mind-reading ability is higher than the probability of the alternative, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH under Sober’s argument. Some scientists and philosophers say that anthropomorphism sometimes helps in observing interesting phenomena or making hypotheses in comparative biology. Nonetheless, we cannot conclude that it provides answers about why and how the interesting phenomena appear, or about which of the hypotheses is better, at least in the mind-reading debate in its current state.

Keywords: anthropomorphism, cladistic parsimony, comparative biology, mind-reading debate

Procedia PDF Downloads 164
773 Unusual Presentation of Colorectal Cancer within Inguinal Hernia: A Systematic Review of Reported Cases

Authors: Sena Park

Abstract:

Background: The concurrent presentation of colorectal cancer within an inguinal hernia is extremely rare. Due to its rarity, this presentation may lead to diagnostic and therapeutic dilemmas. We aim to review all the reported cases of colorectal cancer incarcerated in an inguinal hernia in the last 20 years and to discuss the operative approaches. Methods: We identified all case reports on colorectal cancer within an inguinal hernia using PUBMED (2002-2022) and MEDLINE (2002-2022). The search strategy included the following keywords: colorectal cancer (title/abstract) AND inguinal hernia (title/abstract) OR incarceration (title/abstract). The search did not include letters, book chapters, systematic reviews, meta-analyses, and editorials. Results: In the last 20 years, a total of 19 cases of colorectal cancer within an inguinal hernia were identified. The age of the patients ranged between 48 and 89. The majority of the patients were male (95%). The most commonly involved part of the large intestine was the sigmoid colon (79%). Of all the cases, 79 percent of patients received an open procedure and 21 percent had a laparoscopic procedure. Discussion: Inguinal hernias are common, with an incidence of approximately 1.7 percent. Colorectal cancer is one of the leading causes of cancer-related mortality worldwide. However, their concurrent presentation has been extremely rare. In the last 20 years, 19 cases of concurrent presentation of colorectal cancer and inguinal hernia have been reported. Most patients who had open procedures had two incisions: a groin incision and a midline laparotomy. There were 4 cases in which the oncological resection was performed laparoscopically. The advantages of laparoscopic resection include reduced blood loss, reduced post-operative pain, reduced length of hospital stay, and a similar number of lymph nodes retrieved. From the review of the cases in the last 20 years, both open and laparoscopic approaches appear to be safe and to achieve adequate oncological resection. Conclusion: This is a brief overview of reported cases of colorectal cancer presenting concurrently with inguinal hernia. Due to its rarity, there are no current guidelines on the operative approach in clinical practice. The experience of the last 20 years supports both open and laparoscopic approaches.

Keywords: colorectal cancer, inguinal hernia, incarceration, operative approach

Procedia PDF Downloads 95
772 Applying the Eye Tracking Technique for the Evaluation of the Oculomotor System in Patients Who Survived Cerebellar Tumors

Authors: Marina Shurupova, Victor Anisimov, Alexander Latanov

Abstract:

Background: Cerebellar lesions inevitably provoke oculomotor impairments in patients of different ages. Symptoms of subtentorial tumors, particularly medulloblastomas, include static and dynamic coordination disorders (ataxia, asynergia, imbalance), muscle hypotonia, disruption of the cranial nerves, and, within the oculomotor system, nystagmus (fine or gross). Subtentorial tumors can also affect the areas of the cerebellum that control the oculomotor system. Noninvasive eye-tracking technology allows obtaining multiple oculomotor characteristics, such as the number of fixations and their duration, the amplitude, latency, and velocity of saccades, and the trajectory and scan path of gaze during navigation of the visual field. Eye tracking could be very useful in clinical studies, serving as a convenient and effective tool for diagnostics. Aim: We studied the dynamics of oculomotor system functioning in patients recovering after cerebellar tumor removal surgery and following neurocognitive rehabilitation. Methods: 38 children (23 boys, 15 girls, 9-17 years old) who had recovered from cerebellar tumor removal surgery, radiation therapy, and chemotherapy and were undergoing a course of neurocognitive rehabilitation participated in the study. Two tests were carried out to evaluate oculomotor performance: a gaze stability test and a counting test. Monocular eye movements were recorded with an Arrington Research eye tracker (60 Hz). Two experimental sessions with both tests were conducted, before and after the rehabilitation courses. Results: In the final session of both tests, we observed remarkable improvement in oculomotor performance: 1) in the gaze stability test, the spread of gaze positions significantly declined compared to the first session, and 2) the visual path in the counting test was significantly shorter compared to the first session. Thus, neurocognitive rehabilitation improved the functioning of the oculomotor system in patients following cerebellar tumor removal surgery and subsequent therapy. Conclusions: The experimental data support the effectiveness of the eye tracking technique as a diagnostic tool in the field of neuro-oncology.
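
The two outcome measures described above can be computed directly from raw gaze samples. The sketch below is a minimal illustration (not the study's actual pipeline; the synthetic 60 Hz recording is an assumption):

```python
# Gaze-position spread (gaze stability test) and scan path length (counting test).
import numpy as np

def gaze_spread(x, y):
    """RMS distance of gaze samples from their centroid (smaller = more stable)."""
    dx, dy = x - x.mean(), y - y.mean()
    return np.sqrt(np.mean(dx**2 + dy**2))

def scan_path_length(x, y):
    """Total length of the visual path traced by consecutive gaze samples."""
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

# Illustrative 10-second, 60 Hz gaze recording (degrees of visual angle)
rng = np.random.default_rng(1)
x, y = rng.normal(0, 0.5, 600), rng.normal(0, 0.5, 600)
print(gaze_spread(x, y), scan_path_length(x, y))
```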

Keywords: eye tracking, rehabilitation, cerebellar tumors, oculomotor system

Procedia PDF Downloads 150
771 Possibilities of Postmortem CT for the Detection of Gas Accumulations in the Vessels of Deceased Newborns with Congenital Sepsis

Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh

Abstract:

It is well known that gas formed as a result of postmortem decomposition of tissues can be detected as early as 24-48 hours after death. In addition, the conditions of keeping and storing the corpse (temperature and humidity of the environment) significantly determine the rate of onset and development of postmortem changes. The presence of sepsis is accompanied by faster postmortem decomposition and decay of the organs and tissues of the body. The presence of gas in the vessels and cavities can be revealed fully by postmortem CT. Radiologists must report the detection of intraorganic or intravascular gas on postmortem CT to forensic experts or pathologists before the autopsy; this gas cannot be detected during autopsy, yet it can be very important for establishing a diagnosis. The aim of this study was to explore the possibilities of postmortem CT for the evaluation of gas accumulations in the vessels of newborns who died from congenital sepsis. The bodies of 44 newborns (25 male and 19 female, aged from 6 hours to 27 days) were examined 6-12 hours after death. The bodies were stored in a refrigerator at a temperature of +4°C in the supine position. The study group comprised 12 bodies of newborns that died from congenital sepsis; the control group consisted of 32 bodies of newborns that died without signs of sepsis. Postmortem CT examination was performed on a GEMINI TF TOF16 device before the autopsy. The localizations of gas accumulations in the vessels were determined on the CT tomograms. The diagnosis of sepsis was made on the basis of clinical and laboratory data and autopsy results. Gas in the vessels was detected in 33.3% of cases in the sepsis group and in 34.4% in the control group. In the sepsis group, gas was most often localized in the vessels of the heart and liver (50% each of the observations with gas detected in the vessels) and in the heart cavities, aorta, and mesenteric vessels (25% each). In the control group, gas was most often detected in the vessels of the liver (63.6%) and abdominal cavity (54.5%); in 45.5% of cases, gas was localized in the cavities of the heart, and in 36.4% in its vessels. In the cerebral vessels and in the aorta, gas was detected in 27.3% and 9.1% of cases, respectively. Postmortem CT has high diagnostic capability for detecting free gas in vessels. Postmortem changes in newborns that died from sepsis do not affect intravascular gas production within 6-12 hours. Radiation methods should be used as a supplement to the autopsy, including as a kind of ‘guide’ that indicates to the forensic medical expert the changes identified during CT studies, for better definition of pathological processes during the autopsy. Postmortem CT can be recommended as a first stage of the autopsy.

Keywords: congenital sepsis, gas, newborn, postmortem CT

Procedia PDF Downloads 138
770 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using a Vague Goal Programming Approach

Authors: Hadi Gholizadeh, Ali Tajdin

Abstract:

To have effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment, which expresses uncertainty, because there is always measurement error in the real world. Therefore, in this study, the conditions are examined in a vague, distance-based environment. The disposable essentials process in Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses: the average of the disposable glasses’ weights, heights, crater diameters, and volumes. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, which accounts for the uncertainty of the initial information when some of the parameters of the models are vague; a fuzzy regression model is also used to predict the responses of the four described factors. Optimization results show that the process capability index values for the disposable glasses’ average weights, heights, crater diameters, and volumes were improved. This increases the quality of the products and reduces waste, which lowers the cost of the finished product and ultimately brings customer satisfaction; this satisfaction, in turn, means increased sales.
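
For reference, a minimal sketch of the process capability computation implied above (illustrative specification limits and sample data, not Kach Company's measurements):

```python
# Process capability indices for one of the four responses (e.g. glass weight).
import numpy as np

def capability(samples, lsl, usl):
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                  # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # actual capability
    return cp, cpk

weights = np.random.default_rng(2).normal(10.0, 0.05, 200)  # illustrative weights (g)
print(capability(weights, lsl=9.8, usl=10.2))
```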

Keywords: goal programming, quality control, vague environment, disposable glasses’ optimization, fuzzy regression

Procedia PDF Downloads 219
769 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection significantly depends on the accuracy of suppliers’ performance prediction. Different methods of multi-criteria decision making, such as ANN, GA, fuzzy methods, AHP, etc., have previously been used to predict supplier performance, but the “black-box” characteristic of these methods remains a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming is significantly preferable for predicting supplier performance, as reflected in the respective RMSE and R-squared values. Moreover, using GEP, a mathematical function was also derived that resolves the black-box issue of the ANN in modeling the performance prediction.
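
The two accuracy measures used in the comparison are standard; a minimal sketch (with illustrative numbers, not SAPCO's data) is given below:

```python
# RMSE and R-squared for comparing predicted vs. observed supplier performance.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

y_true = np.array([0.82, 0.75, 0.91, 0.64, 0.88])   # observed supplier scores
y_gep  = np.array([0.80, 0.77, 0.90, 0.66, 0.86])   # GEP predictions (illustrative)
print(rmse(y_true, y_gep), r_squared(y_true, y_gep))
```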

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 411
768 Crack Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time

Authors: Xinwen Zhu, Xingguang Li, Sun Yi

Abstract:

Cracks are among the most common forms of damage in buildings, bridges, roads, and other structures of various materials, and they may pose safety hazards. Traditional methods of manual detection and measurement, which are known to be subjective, time-consuming, and labor-intensive, are gradually unable to meet the needs of modern development. In addition, crack detection and measurement need to be safe, considering space limitations and hazards. Intelligent crack detection has therefore become a necessary line of research. In this paper, an efficient method for crack detection and quantification using 3D sensors, a LiDAR and a depth camera, is proposed. This method works even in a dark environment, which is common in real-world applications. The LiDAR rapidly spins to scan the surrounding environment, firing lasers thousands of times per second and providing a rich 3D point cloud in real time in which cracks can be discovered. The LiDAR provides quite accurate depth information: the distance of each point can be determined to within about ±3 cm, and top-range models can see farther than 100 m. However, this error is still too large for some high-precision structures. To measure crack depth much more accurately, the depth camera is needed; the cracks are scanned by the depth camera at the same time. Finally, all data from the LiDAR and the depth camera are analyzed, and the size of the cracks can be quantified successfully. The comparison shows that the minimum and mean absolute percentage errors between measured and calculated width are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper.
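
A minimal sketch of the error metrics quoted above (the width values are illustrative, not the experimental data):

```python
# Minimum and mean absolute percentage error between measured and calculated widths.
import numpy as np

measured   = np.array([2.0, 3.5, 1.2, 4.8])    # mm, e.g. from a crack gauge
calculated = np.array([2.1, 3.3, 1.25, 4.5])   # mm, from LiDAR + depth camera

ape = np.abs(measured - calculated) / measured * 100
print("min APE: %.2f%%, mean APE: %.2f%%" % (ape.min(), ape.mean()))
```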

Keywords: LiDAR, depth camera, real-time, detection and measurement

Procedia PDF Downloads 208
767 A Study of Cost and Revenue Earned from Tourist Walking Street Activities in Songkhla City Municipality, Thailand

Authors: Weerawan Marangkun

Abstract:

This study is a survey intended to investigate cost, revenue, and factors affecting changes in revenue, and to provide guidelines for improving the factors affecting changes in revenue from tourist walking street activities in Songkhla City Municipality. The instruments used in this study were structured interviews, with the sample size determined using the Yamane (1973) table at a random sampling error of ±10%. The sample of 83 entrepreneurs was drawn from a total population of 272, using the purposive sampling method. Data were collected during the 6-month period from December 2011 until May 2012. The findings indicate that the cost paid by an entrepreneur in connection with his/her services for tourists is mainly for travel, which stands at about 290 Baht per day. Each entrepreneur earns about 3,850 Baht per day, which amounts to about 400,000 Baht per year. The combined total revenue from walking street tourist activities is about 108.8 million Baht per year. Such activities add economic value to tourist facilities due to expenditures by tourists and provide the entrepreneurs with considerable income. Factors affecting changes in revenue from tourist walking street activities are: the increase in the number of entrepreneurs; the holding of trade fairs, events, or interesting shows in the vicinity; and weather conditions (e.g., abundant rainfall, which can contribute to a decrease in the number of tourists). Suggested measures to improve the factors affecting changes in income are: the addition or creation of new activities; regulation of the operation of the stalls and parking area; and generation of greater publicity through social networks.
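
A quick check of the revenue arithmetic reported above (the number of market days per year is an assumption used only to reconstruct the published figures):

```python
# Reconstructing the reported revenue figures from the daily earnings.
daily_revenue = 3_850          # Baht per entrepreneur per walking-street day
days_per_year = 104            # assumption: roughly two market days per week
entrepreneurs = 272            # total population of entrepreneurs

per_year = daily_revenue * days_per_year   # ~400,000 Baht per entrepreneur
total = entrepreneurs * 400_000            # combined revenue across all entrepreneurs
print(per_year, total)                     # 400400, 108800000 (~108.8 million Baht)
```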

Keywords: cost, revenue, tourist, walking street

Procedia PDF Downloads 351
766 In-House Fatty Meal Cholescintigraphy as a Screening Tool in Patients Presenting with Dyspepsia

Authors: Avani Jain, S. Shelley, M. Indirani, Shilpa Kalal, Jaykanth Amalachandran

Abstract:

Aim: To evaluate the prevalence of gall bladder dysfunction in patients with dyspepsia using in-house fatty meal cholescintigraphy. Materials & Methods: This is a prospective cohort study. 59 healthy volunteers with no dyspeptic complaints and negative ultrasound and endoscopy were recruited into the study, and 61 patients with complaints of dyspepsia lasting more than 6 months were included. All of them underwent 99mTc-mebrofenin fatty meal cholescintigraphy following a standard protocol. Dynamic acquisitions were acquired for 120 minutes, with an in-house fatty meal given at the 45th minute. Gall bladder emptying kinetics were determined from gall bladder ejection fractions (GBEF) calculated at 30 minutes, 45 minutes, and 60 minutes. Standardization of the fatty meal was done for volunteers. Receiver operating characteristic (ROC) analysis was used to assess the diagnostic accuracy of the three time points (30 min, 45 min, and 60 min) used for measuring gall bladder emptying. On the basis of the cutoff derived from volunteers, the patients were assessed for gall bladder dysfunction. Results: In volunteers, the GBEF at 30 min was 74.42 ± 8.26% (mean ± SD), at 45 min was 82.61 ± 6.5%, and at 60 min was 89.37 ± 4.48%, compared to patients, where it was 33.73 ± 22.87% at 30 min, 43.03 ± 26.97% at 45 min, and 51.85 ± 29.60% at 60 min. The lower limit of GBEF in volunteers at 30 min was 60%, at 45 min was 69%, and at 60 min was 81%. ROC analysis showed that the area under the curve was largest for the 30 min GBEF (0.952; 95% CI = 0.914-0.989) and that all three measures were statistically significant (p < 0.005). The majority of the volunteers had 74% gall bladder emptying by 30 minutes; hence, this was taken as the optimum cutoff time to assess gall bladder contraction. A GBEF > 60% at 30 min post fatty meal was considered normal, and a GBEF < 60% was considered indicative of gall bladder dysfunction. In patients, various causes of dyspepsia were identified: gall bladder dysfunction (63.93%), peptic ulcer (8.19%), gastroesophageal reflux disease (8.19%), and gastritis (4.91%). In 18.03% of cases, gall bladder dysfunction coexisted with other gastrointestinal conditions. A diagnosis of functional dyspepsia was made in 14.75% of cases. Conclusions: Gall bladder dysfunction contributes significantly to the causation of dyspepsia and can coexist with various other gastrointestinal diseases. The fatty meal was well tolerated and devoid of any side effects. Many patients who are labeled as functional dyspeptics could actually have gall bladder dysfunction. Hence, as an adjunct to ultrasound and endoscopy, fatty meal cholescintigraphy can be used as a screening modality in the characterization of dyspepsia.
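
A minimal sketch of the GBEF computation and the ROC analysis described above (an assumed formula and synthetic counts, not the study data):

```python
# Gall bladder ejection fraction and ROC analysis of the 30-min cutoff.
import numpy as np
from sklearn.metrics import roc_auc_score

def gbef(baseline_counts, counts_at_t):
    """Percentage of gall bladder emptying relative to baseline activity."""
    return (baseline_counts - counts_at_t) / baseline_counts * 100

print(gbef(10_000, 2_600))   # ~74% emptying at 30 min, as seen in volunteers

# ROC of GBEF at 30 min: label 1 = volunteer (normal), 0 = patient (illustrative)
labels = np.array([1] * 5 + [0] * 5)
gbef30 = np.array([74, 71, 80, 68, 76, 30, 45, 55, 22, 40])
print("AUC:", roc_auc_score(labels, gbef30))
```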

Keywords: in-house fatty meal, cholescintigraphy, dyspepsia, gall bladder ejection fraction, functional dyspepsia

Procedia PDF Downloads 498
765 Quantum Inspired Security on a Mobile Phone

Authors: Yu Qin, Wanjiaman Li

Abstract:

The widespread use of mobile electronic devices increases the complexities of mobile security. This work aims to provide a secure communication environment for smartphone users. Research has shown that the one-time pad is one of the most secure encryption methods and that the key distribution problem can be solved by using QKD (quantum key distribution). The objective of this project is to design an Android app to exchange several random keys between mobile phones. Inspired by QKD, the developed app uses the quick response (QR) code as a carrier to dispatch large amounts of one-time keys. Performance evaluation shows that the app allows a mobile phone to capture and decode 1800 bytes of random data in 600 ms. The continuous scanning mode of the app is designed to improve overall transmission performance and user experience, and the maximum transmission rate of this mode is around 2200 bytes/s. The omnidirectional readability and error correction capability of the QR code suit it to real-life application, and its adequate storage capacity and quick response optimize overall transmission efficiency. The security of this app is guaranteed since the QR code is exchanged face-to-face, eliminating the risk of eavesdropping; moreover, the ID of the QR code is the only message transmitted throughout the whole communication. The experimental results show that this project achieves superior transmission performance, and the correlation between the transmission rate of the system and several parameters, such as the QR code size, has been analyzed. In addition, some existing technologies and the main findings in the context of the project are summarized and critically compared in detail.
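
The core idea, one-time-pad encryption with key material delivered out-of-band via QR codes, can be sketched as follows (a minimal illustration, not the app's source code):

```python
# One-time-pad XOR encryption with a key delivered out-of-band (e.g. one QR code).
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "one-time pad must not be shorter than the message"
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(1800)        # e.g. one QR code worth of random key material
message = b"meet at the usual place"
ciphertext = otp(message, key)
print(otp(ciphertext, key))            # XOR with the same key decrypts
```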

Keywords: one-time pad, QKD (quantum key distribution), QR code, application

Procedia PDF Downloads 140
764 Software Transactional Memory in a Dynamic Programming Language at Virtual Machine Level

Authors: Szu-Kai Hsu, Po-Ching Lin

Abstract:

As more and more multi-core processors emerge, the traditional sequential programming paradigm no longer suffices, yet only a few modern dynamic programming languages can leverage this advantage. Ruby, for example, despite its wide adoption, only includes threads as a simple parallel primitive, and the global virtual machine lock of the official Ruby runtime makes it impossible to exploit full parallelism. Though various alternative Ruby implementations do eliminate the global virtual machine lock, they only provide developers with dated locking mechanisms for data synchronization, and traditional locking mechanisms are error-prone by nature. Software transactional memory (STM) is one of the promising alternatives. This paper introduces a new virtual machine, GobiesVM, that provides a native software-transactional-memory-based solution for dynamic programming languages to exploit parallelism. We also propose a simplified variation of the Transactional Locking II algorithm. The empirical results of our experiments show that support for STM at the virtual machine level enables developers to write straightforward code without compromising parallelism or sacrificing thread safety. Existing source code requires only minimal or even no modification, which allows developers to easily switch their legacy codebase to a parallel environment. The performance evaluations of GobiesVM also indicate that the difference between sequential and parallel execution is significant.
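
A highly simplified, Python-flavored sketch of a TL2-style transaction follows (GobiesVM implements this inside the VM for Ruby; all names here are illustrative, and many details of TL2 are reduced to the bare minimum):

```python
# Minimal TL2-style STM: versioned variables, a global version clock,
# buffered writes, and commit-time validation of the read set.
import threading

_global_clock = 0
_clock_lock = threading.Lock()

class TVar:
    """A transactional variable: a value plus the version that last wrote it."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

class _Retry(Exception):
    pass

def atomically(tx_fn):
    """Run tx_fn(read, write) as a transaction, retrying on conflicts."""
    global _global_clock
    while True:
        read_version = _global_clock            # snapshot the global version clock
        reads, writes = {}, {}

        def read(tv):
            if tv in writes:
                return writes[tv]               # read-your-own-writes
            if tv.version > read_version:
                raise _Retry()                  # written after our snapshot
            reads[tv] = tv.version
            return tv.value

        def write(tv, val):
            writes[tv] = val

        try:
            result = tx_fn(read, write)
            ordered = sorted(writes, key=id)    # fixed lock order avoids deadlock
            for tv in ordered:
                tv.lock.acquire()
            try:
                if any(tv.version != v for tv, v in reads.items()):
                    raise _Retry()              # read set changed: conflict
                with _clock_lock:
                    _global_clock += 1
                    commit_version = _global_clock
                for tv, val in writes.items():
                    tv.value, tv.version = val, commit_version
                return result
            finally:
                for tv in ordered:
                    tv.lock.release()
        except _Retry:
            continue                            # conflict: re-run the transaction

acct = TVar(100)

def deposit(read, write):
    write(acct, read(acct) + 10)
    return "committed"

print(atomically(deposit), acct.value)          # committed 110
```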

Keywords: global interpreter lock, ruby, software transactional memory, virtual machine

Procedia PDF Downloads 273
763 Medication Errors in a Juvenile Justice Youth Development Center

Authors: Tanja Salary

Abstract:

This paper discusses a study of medication errors conducted in a juvenile justice facility. It introduces data about medication errors collected in the facility from 2011 to 2019 and explores contributing factors that relate to those errors. The data were obtained from electronic incident records of medication errors documented from 2011 through 2019. In addition, the paper reviews both current and historical empirical research on patient safety standards and quality of care, comparing traditional health care facilities to juvenile justice residential facilities, and acknowledges a gap in the research. The theoretical/conceptual framework for the study was Bandura and Adams’s self-efficacy theory of behavioral change and Mark Friedman’s results-based accountability theory. Despite the lack of evidence from previous studies addressing medication errors in juvenile justice facilities, this study shares information that adds to the body of knowledge, including the potential relationship between medication errors and the contributing factors of race and age. Implications for future research include the effect that education and training will have on communication among juvenile justice staff, including nurses, who administer medications to juveniles, to ensure adherence to patient safety standards. There are several opportunities for future research concerning other characteristics or factors that may affect medication administration errors within residential juvenile justice facilities.

Keywords: juvenile justice, medication errors, juveniles, error reduction strategies

Procedia PDF Downloads 56
762 Merits and Demerits of the Participation of Fellow Examinees as Subjects in the Observed Structured Practical Examination in Physiology

Authors: Mohammad U. A. Khan, Md. D. Hossain

Abstract:

Background: Departments of Physiology find it difficult to arrange ‘subjects’ for practical procedures. To avoid this difficulty, fellow examinees from another group may be used as subjects. Objective: To find out the merits and demerits of using fellow examinees as subjects in the practical procedure. Method: This cross-sectional descriptive study was conducted in the Department of Physiology, Noakhali Medical College, Bangladesh, during May-June 2014. Forty-two first-year undergraduate medical students from a selected public medical college of Bangladesh were enrolled purposively. Consent was obtained from the students and the authority. Eighteen of them were selected as subjects and designated as subject-examinees. The other fellow examinees (non-subjects) examined their blood pressure and pulse as part of an observed structured practical examination (OSPE). The opinions of all examinees regarding the merits and demerits of using fellow examinees as subjects in the practical procedure were recorded. Result: Examinees stated that they could perform their practical procedure without nervousness (24/42, 57.14%) and accurately and comfortably (14/42, 33.33%), and that subjects were made available without wasting time (2/42, 4.76%). Nineteen students (45.24%) found no disadvantage, and 2 (4.76%) felt embarrassed when the subject was of the opposite sex. The subject-examinees stated that they could learn from the errors made by their fellow examinees (11/18, 61.1%). 75% of non-subject examinees expressed their willingness to be subjects so that they could learn from their fellows’ errors. Conclusion: Using fellow examinees as subjects is beneficial for both the non-subject and subject examinees. Funding sources: Navana, Beximco, Unihealth, Square & Acme Pharma, Bangladesh Ltd.

Keywords: physiology, teaching, practical, OSPE

Procedia PDF Downloads 143
761 Coordinated Interference Canceling Algorithm for Uplink Massive Multiple Input Multiple Output Systems

Authors: Messaoud Eljamai, Sami Hidouri

Abstract:

Massive multiple-input multiple-output (MIMO) is an emerging technology for new cellular networks such as 5G systems. Its principle is to use many antennas per cell in order to maximize the network's spectral efficiency. Inter-cellular interference remains a fundamental problem, and massive MIMO is no exception: it improves performance only when the number of antennas is significantly greater than the number of users, which considerably limits the network's spectral efficiency. In this paper, a coordinated detector for an uplink massive MIMO system is proposed in order to mitigate inter-cellular interference. The proposed scheme combines the coordinated multipoint technique with an interference-cancelling algorithm. It requires the serving cell to send its received symbols, after processing, decision, and error detection, to the interfered cells via a backhaul link. Each interfered cell is then capable of eliminating inter-cellular interference by regenerating the user’s contribution and subtracting it from the received signal. The resulting signal is more reliable than the original received signal, which allows the uplink massive MIMO system to improve its performance dramatically. Simulation results show that the proposed detector improves system spectral efficiency compared to classical linear detectors.
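
A minimal numpy sketch of the cancellation step (illustrative dimensions and symbols, not the paper's simulator): the interfered cell regenerates the neighboring user's contribution from the decided symbols received over the backhaul and subtracts it before detection:

```python
# Interference cancellation at one interfered massive-MIMO base station.
import numpy as np

rng = np.random.default_rng(3)
M = 64                                    # base-station antennas (massive MIMO)
h_own = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
h_int = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

s_own, s_int = 1 + 0j, -1 + 0j            # transmitted symbols (e.g. BPSK)
noise = 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))
y = h_own * s_own + h_int * s_int + noise

# s_int arrives from the serving cell over the backhaul link; the interfered
# cell regenerates its contribution and subtracts it from the received signal.
y_clean = y - h_int * s_int

# Maximum-ratio combining on the cleaned signal
s_hat = (h_own.conj() @ y_clean) / (np.linalg.norm(h_own) ** 2)
print(s_hat)                               # close to 1+0j
```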

Keywords: massive MIMO, COMP, interference canceling algorithm, spectral efficiency

Procedia PDF Downloads 137
760 Evaluation of Vehicle Classification Categories: Florida Case Study

Authors: Ren Moses, Jaqueline Masaki

Abstract:

This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories, identifying errors arising from the existing system and proposing modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to assign vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified. Modifications were made to the classification table to improve classification accuracy. The results of this study include an updated vehicle classification table with a 5.1% reduction in total error, a step-by-step procedure for evaluating vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations for the FHWA 13-category rule set indicate the need for the vehicle classification definitions in this scheme to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments and consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.
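
A minimal sketch of the evaluation step (synthetic labels, not the Florida data): AVC output is compared against video ground truth with a confusion matrix to quantify total error:

```python
# Compare AVC classifications against video ground truth (FHWA classes).
import numpy as np
from sklearn.metrics import confusion_matrix

ground_truth = np.array([2, 2, 3, 5, 9, 9, 5, 2, 3, 9])  # class from video
avc_output   = np.array([2, 3, 3, 5, 9, 8, 5, 2, 2, 9])  # class from the AVC

cm = confusion_matrix(ground_truth, avc_output)
total_error = 1 - np.trace(cm) / cm.sum()
print(cm)
print("total error: %.1f%%" % (100 * total_error))
```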

Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic

Procedia PDF Downloads 173
759 Local Differential Privacy-Based Data-Sharing Scheme for Smart Utilities

Authors: Veniamin Boiarkin, Bruno Bogaz Zarpelão, Muttukrishnan Rajarajan

Abstract:

The manufacturing sector is a vital component of most economies, which makes it the target of a large number of cyberattacks on organisations, as disruption of operations may lead to significant economic consequences. Adversaries aim to disrupt the production processes of manufacturing companies, gain financial advantages, and steal intellectual property by getting unauthorised access to sensitive data. Access to sensitive data helps organisations to enhance their production and management processes. However, the majority of existing data-sharing mechanisms are either susceptible to different cyber attacks or heavy in terms of computation overhead. In this paper, a privacy-preserving data-sharing scheme for smart utilities is proposed. First, a customer’s privacy adjustment mechanism is proposed to make sure that end-users have control over their privacy, which is required by the latest government regulations, such as the General Data Protection Regulation. Secondly, a local differential privacy-based mechanism is proposed to ensure the privacy of the end-users by hiding real data based on the end-user preferences. The proposed scheme may be applied to different industrial control systems, whereas in this study, it is validated for energy utility use cases consisting of smart, intelligent devices. The results show that the proposed scheme may guarantee the required level of privacy with an expected relative error in utility.
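
A minimal sketch of a local-differential-privacy-style perturbation (a Laplace mechanism applied on the device; the epsilon, sensitivity, and readings are illustrative assumptions, not the paper's exact mechanism):

```python
# Each end-user perturbs readings locally before sharing; the aggregate
# remains approximately correct (the "expected relative error in utility").
import numpy as np

rng = np.random.default_rng(9)

def perturb(reading, epsilon, sensitivity=1.0):
    """Add Laplace noise on the device; smaller epsilon = stronger privacy."""
    scale = sensitivity / epsilon
    return reading + rng.laplace(0.0, scale)

true_readings = np.array([2.1, 3.4, 1.8, 2.9, 3.1])      # e.g. smart-meter kWh
shared = np.array([perturb(r, epsilon=1.0) for r in true_readings])

# Utility: relative error of the aggregate computed from the shared data
rel_err = abs(shared.mean() - true_readings.mean()) / true_readings.mean()
print(shared.round(2), rel_err)
```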

Keywords: data-sharing, local differential privacy, manufacturing, privacy-preserving mechanism, smart utility

Procedia PDF Downloads 66
758 Molecular Detection of Leishmania from the Phlebotomus Genus: Tendency towards Leishmaniasis Regression in Constantine, North-East of Algeria

Authors: K. Frahtia, I. Mihoubi, S. Picot

Abstract:

Leishmaniasis is a group of parasitic diseases with varied clinical expression caused by flagellate protozoa of the genus Leishmania. These diseases are transmitted to humans and animals by the bite of a vector insect, the female sandfly. Among the groups of dipteran disease vectors, the Phlebotominae occupy a prime position and play a significant role in human pathology, for example in leishmaniasis, which affects nearly 350 million people worldwide. The vector control operation launched by health services throughout the country appears to be effective: although the prevalence of the disease remains high, especially in rural areas, leishmaniasis appears to be declining in Algeria. In this context, this study mainly concerns the molecular detection of Leishmania from the vector. Furthermore, a molecular diagnosis has also been made on skin samples taken from patients in the region of Constantine, located in the north-east of Algeria. Concerning the vector, 5858 sandflies were captured, including 4360 males and 1498 females. Male specimens were identified based on their morphological characters. The morphological identification highlighted the presence of the genus Phlebotomus with a prevalence of 93%, against 7% represented by the genus Sergentomyia. Among the identified species, P. perniciosus was the most abundant, accounting for 59.4% of the identified male population, followed by P. longicuspis with 24.7% of the specimens. P. perfiliewi was poorly represented, at 6.7% of specimens, followed by P. papatasi with 2.2% and S. dreyfussi with 1.5%. Concerning the skin samples, 45/79 (56.96%) of the collected samples were found positive by real-time PCR. This rate appears to be in sharp decline compared to previous years (alert peak of 30,227 cases in 2005). Concerning the detection of Leishmania from sandflies by RT-PCR, the results show that 3 of 60 genus-level PCRs were positive, with melting temperatures corresponding to that of the reference strain (84.1 ± 0.4 °C for L. infantum), which proves that the vectors were parasitized. On the other hand, species-level identification by RT-PCR did not give any results. This could be explained by an insufficient amount of Leishmania DNA in the vector, and it therefore supports the hypothesis of the regression of leishmaniasis in Constantine.

Keywords: Algeria, molecular diagnostic, phlebotomus, real time PCR

Procedia PDF Downloads 262
757 Integration of Virtual Learning of Induction Machines for Undergraduates

Authors: Rajesh Kumar, Puneet Aggarwal

Abstract:

In the context of understanding the problems faced by undergraduate students while carrying out laboratory experiments dealing with high voltages, it was found that most students are hesitant to work directly on the machine. The reason is that an error in the circuitry might lead to deterioration of the machine and laboratory instruments. It has therefore become necessary to include modern pedagogic techniques for undergraduate students that help them first carry out the experiment in a virtual system and then work on the live circuit. A further advantage is that students can try out their intuitive ideas in the virtual environment, leading to new research and innovations. In this paper, the virtual environment used is MATLAB/Simulink for three-phase induction machines. The performance analysis of the three-phase induction machine is carried out in the virtual environment and includes the direct current (DC) test, the no-load test, and the block rotor test, along with speed-torque characteristics for different rotor resistances and input voltages. Further, this paper carries out computer-aided teaching of basic voltage source inverter (VSI) drive circuitry. Hence, this paper gives undergraduates a clearer view of the experiments performed on the virtual machine (no-load test, block rotor test, and DC test). After successful implementation of the basic tests, the VSI circuitry is implemented, and the total harmonic distortion (THD) and fast Fourier transform (FFT) of the current and voltage waveforms are studied.
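
A minimal sketch of the FFT-based THD computation mentioned above (written in Python rather than the MATLAB/Simulink environment used in the paper; the waveform is illustrative):

```python
# THD of a VSI-like current waveform from its FFT spectrum.
import numpy as np

fs, f0 = 10_000, 50                        # sampling rate (Hz), fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)
# Illustrative waveform: fundamental plus 5th and 7th harmonics
i = np.sin(2*np.pi*f0*t) + 0.2*np.sin(2*np.pi*5*f0*t) + 0.14*np.sin(2*np.pi*7*f0*t)

spectrum = np.abs(np.fft.rfft(i)) / len(i) * 2
freqs = np.fft.rfftfreq(len(i), 1 / fs)
fund = spectrum[np.argmin(abs(freqs - f0))]
harmonics = [spectrum[np.argmin(abs(freqs - k * f0))] for k in range(2, 40)]
thd = np.sqrt(sum(h**2 for h in harmonics)) / fund
print("THD: %.1f%%" % (100 * thd))         # ~sqrt(0.2^2 + 0.14^2) = 24.4%
```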

Keywords: block rotor test, DC test, no load test, virtual environment, voltage source inverter

Procedia PDF Downloads 342
756 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively

Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus

Abstract:

Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to objectively measure ball-handling skills (BHS-test) using digital instruments. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% females, 45% males) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four of the tests (one throwing and three juggling tasks). In the other four tests (one balancing and three rolling tasks), assessment was technologically enhanced using a ball machine, a Kinect camera, and balls with motion sensors. 3D printing technology was used to construct equipment, and all results were administered digitally with smartphones/tablets and computers, using a specially constructed application to send data to a server. The instrument was deemed reliable (α = .77), and principal component analysis was used on a random subset (53 of the participants). Furthermore, latent variable modeling was employed to confirm the structure with the remaining subset (160 of the participants). The analysis showed good factorial validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
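
For reference, a minimal sketch of the reliability coefficient reported above (Cronbach's alpha, computed here on synthetic item scores rather than the BHS-test data):

```python
# Cronbach's alpha for an 8-item instrument.
import numpy as np

def cronbach_alpha(items):
    """items: (n_participants, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(4)
ability = rng.normal(size=(213, 1))                      # latent ball-handling skill
scores = ability + rng.normal(scale=1.0, size=(213, 8))  # 8 correlated items
print(cronbach_alpha(scores))
```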

Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment

Procedia PDF Downloads 81
755 Anomaly Detection in Financial Markets Using Tucker Decomposition

Authors: Salma Krafessi

Abstract:

Financial markets are a multifaceted, intricate environment, and enormous volumes of data are produced every day. Accurate anomaly identification in these data is essential for finding investment opportunities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding window structure. The tensor is then broken down using Tucker decomposition into a core tensor and corresponding factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker decomposition is a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
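
A minimal sketch of the pipeline above using the TensorLy library (window size, ranks, threshold, and data are illustrative assumptions, not the paper's settings):

```python
# Tucker-decomposition anomaly detection on windowed price features.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(5)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100       # stand-in for S&P 500 closes

# Build a 3-way tensor: (windows, time steps per window, features)
win = 20
returns = np.diff(np.log(prices))
feats = np.stack([returns, np.abs(returns)], axis=1)   # simple feature set
windows = np.stack([feats[i:i + win] for i in range(len(feats) - win)])
X = tl.tensor(windows)

core, factors = tucker(X, rank=[10, 5, 2])             # core tensor + factor matrices
X_hat = tl.tucker_to_tensor((core, factors))

# Per-window reconstruction error; large deviations flag anomalies
err = np.linalg.norm(tl.to_numpy(X - X_hat), axis=(1, 2))
threshold = err.mean() + 3 * err.std()
print("anomalous windows:", np.where(err > threshold)[0])
```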

Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models

Procedia PDF Downloads 50
754 Computational Modeling of Heat Transfer from a Horizontal Array of Cylinders at Low Reynolds Numbers

Authors: Ovais U. Khan, G. M. Arshed, S. A. Raza, H. Ali

Abstract:

A numerical model based on the computational fluid dynamics (CFD) approach is developed to investigate heat transfer across a longitudinal row of six circular cylinders. The momentum and energy equations are solved using the finite volume discretization technique. The convective terms are discretized using a second-order upwind methodology, whereas the diffusion terms are discretized using a central differencing scheme. A second-order implicit technique is utilized for time integration. Numerical simulations have been carried out for three different values of the free stream Reynolds number (ReD = 100, 200, 300) and two different values of the dimensionless longitudinal pitch ratio (SL/D = 1.5, 2.5) to demonstrate the fluid flow and heat transfer behavior. The numerical results are validated against analytical findings reported in the literature and have been found to be in good agreement: the maximum percentage error in the average Nusselt number obtained from the numerical and analytical solutions is within 10% for free stream Reynolds numbers up to 300. It is demonstrated that the average Nusselt number for the array of cylinders increases with increasing free stream Reynolds number and dimensionless longitudinal pitch ratio. The information generated would be useful in the design of more efficient heat exchangers or other fluid systems involving arrays of cylinders.
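
To make the discretization idea concrete, the sketch below reduces it to a steady one-dimensional convection-diffusion equation (first-order upwind convection and central diffusion on a uniform grid; the paper's solver is two-dimensional, transient, and second-order upwind):

```python
# Finite-volume discretization of steady 1-D convection-diffusion (u > 0).
import numpy as np

N, L = 50, 1.0
dx = L / N
rho, u, gamma = 1.0, 1.0, 0.05          # density, velocity, diffusivity
F, D = rho * u, gamma / dx              # convective and diffusive face coefficients
phi0, phiL = 0.0, 1.0                   # Dirichlet boundary values

A = np.zeros((N, N))
b = np.zeros(N)
aW, aE = D + F, D                       # upwind: the west face carries the flow

for i in range(N):
    if i == 0:                          # inlet cell: boundary at the west face
        A[i, i] = aE + 2 * D + F
        A[i, i + 1] = -aE
        b[i] = (2 * D + F) * phi0
    elif i == N - 1:                    # outlet cell: boundary at the east face
        A[i, i - 1] = -aW
        A[i, i] = aW + 2 * D
        b[i] = 2 * D * phiL
    else:                               # interior cells
        A[i, i - 1], A[i, i], A[i, i + 1] = -aW, aW + aE, -aE

phi = np.linalg.solve(A, b)
print(phi[::10].round(3))               # boundary-layer profile rising toward x = L
```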

Keywords: computational fluid dynamics, array of cylinders, longitudinal pitch ratio, finite volume method, incompressible Navier-Stokes equations

Procedia PDF Downloads 73
753 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings

Authors: A. W. J. Wong, I. H. Ibrahim

Abstract:

Increasing urbanisation has driven building designers to incorporate natural ventilation in the designs of sustainable buildings. This project utilises computational fluid dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian zones on the first and fourth levels of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model. The validated simulation model was then used for coupled simulations of SIT@SP and neighbouring geometries under two wind speeds. Both steady and transient simulations were used to identify differences in results; they were in agreement, with the transient simulation identifying peak velocities during flow development. At the lower wind speed, the first level was sufficiently ventilated while the fourth level was not. At the higher wind speed, the first level had excessive wind velocities and the fourth level was adequately ventilated. The flow velocity on the fourth level was consistently lower than that on the first level, which is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and an insufficiently ventilated fourth level. Future work for this project extends to modifying the urban geometry, improving the simulation model, evaluation using other assessment metrics, and extending the area of interest to the entire building.

Keywords: buildings, CFD simulations, natural ventilation, urban airflow

Procedia PDF Downloads 212
752 Prediction of Distillation Curve and Reid Vapor Pressure of Dual-Alcohol Gasoline Blends Using Artificial Neural Network for the Determination of Fuel Performance

Authors: Leonard D. Agana, Wendell Ace Dela Cruz, Arjan C. Lingaya, Bonifacio T. Doma Jr.

Abstract:

The purpose of this paper is to predict the fuel performance parameters, which include the drivability index (DI), vapor lock index (VLI), and vapor lock potential, using the distillation curve and Reid vapor pressure (RVP) of dual alcohol-gasoline fuel blends. The distillation curve and Reid vapor pressure were predicted using artificial neural networks (ANNs) with macroscopic properties such as boiling points, RVP, and molecular weights as the input layers. The ANN consists of 5 hidden layers and was trained using Bayesian regularization. The training mean square error (MSE) and R-value for the ANN of RVP are 91.4113 and 0.9151, respectively, while the training MSE and R-value for the distillation curve are 33.4867 and 0.9927. Fuel performance analysis of the dual alcohol-gasoline blends indicated that highly volatile gasoline blended with dual alcohols results in fuel blends that are non-compliant with the D4814 standard. Mixtures of low-volatility gasoline and 10% methanol or 10% ethanol can still be blended with up to 10% C3 and C4 alcohols. Intermediate-volatility gasoline containing 10% methanol or 10% ethanol can still be blended with C3 and C4 alcohols that have low RVPs, such as 1-propanol, 1-butanol, 2-butanol, and i-butanol.
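
For reference, a commonly cited base form of the drivability index is sketched below (assumptions: temperatures in degrees Fahrenheit and no oxygenate correction term, which blend standards may add):

```python
# Drivability index from distillation points predicted by the ANN.
def drivability_index(t10, t50, t90):
    """DI = 1.5*T10 + 3.0*T50 + T90 (base form; lower DI = better cold-start)."""
    return 1.5 * t10 + 3.0 * t50 + t90

# Illustrative distillation points (deg F) for a gasoline-alcohol blend
print(drivability_index(t10=130, t50=215, t90=330))   # 1170
```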

Keywords: dual alcohol-gasoline blends, distillation curve, machine learning, reid vapor pressure

Procedia PDF Downloads 91
751 Mitigation of Interference in Satellite Communications Systems via a Cross-Layer Coding Technique

Authors: Mario A. Blanco, Nicholas Burkhardt

Abstract:

An important problem in satellite communication systems operating in the Ka and EHF frequency bands is the overall degradation in link performance of mobile terminals due to various types of degradations in the link/channel, such as fading, blockage of the link to the satellite (especially in urban environments), and intentional as well as other types of interference. In this paper, we focus primarily on the interference problem, and we develop a very efficient and cost-effective solution based on the use of fountain codes. We first introduce a satellite communications (SATCOM) terminal uplink interference channel model that is classically used against communication systems that use spread-spectrum waveforms. We then consider the use of fountain codes, with a focus on Raptor codes, as our main mitigation technique to combat the degradation in link/receiver performance due to the interference signal. The performance of the receiver is obtained in terms of the average bit and message error probabilities as functions of the bit energy-to-noise density ratio, Eb/N0, and other parameters of interest, via a combination of analysis and computer simulations, and we show that the use of fountain codes is extremely effective in overcoming the effects of intentional interference on the performance of the receiver and the associated communication links. We then show that this technique can be extended to mitigate other types of SATCOM channel degradations, such as those caused by channel fading, shadowing, and hard blockage of the uplink signal.
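
The fountain-coding idea can be sketched as an LT-style encoder (Raptor codes add a precoding stage on top of this); the degree distribution and block sizes below are illustrative, not the paper's design:

```python
# LT-style fountain encoding: rateless symbols, each the XOR of random source blocks.
import random

def lt_encode(blocks, n_symbols, seed=42):
    """Each output symbol is the XOR of a random subset of source blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.choice([1, 2, 2, 3, 3, 3, 4])     # toy degree distribution
        idx = rng.sample(range(k), min(degree, k))
        sym = 0
        for i in idx:
            sym ^= blocks[i]                           # XOR-combine the chosen blocks
        symbols.append((tuple(idx), sym))              # receiver needs the index set
    return symbols

source = [0b1011, 0b0110, 0b1110, 0b0001]              # 4 source blocks
encoded = lt_encode(source, n_symbols=8)               # rateless: make as many as needed
print(encoded[:3])
```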

Keywords: SATCOM, interference mitigation, fountain codes, turbo codes, cross-layer

Procedia PDF Downloads 351
750 Concrete Mix Design Using Neural Network

Authors: Rama Shanker, Anil Kumar Sachan

Abstract:

The basic ingredients of concrete are cement, fine aggregate, coarse aggregate, and water. To produce a concrete with certain specified properties, optimum proportions of these ingredients are mixed. The important factors that govern the mix design are the grade of concrete, the type of cement, and the size, shape, and grading of the aggregates. The conventional concrete mix design method is based on experimentally evolved empirical relationships between these factors. The basic drawbacks of this method are that it does not reliably produce the desired strength, the calculations are cumbersome, and a number of tables have to be consulted to arrive at a trial mix proportion; moreover, attainment of the desired strength is uncertain, may fall below the target, and may even fail. To solve this problem, a large number of cubes of standard grades were prepared, and their attained 28-day strength was determined for different combinations of cement, fine aggregate, coarse aggregate, and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape, and grading of the aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated; it was seen that it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of the materials can be quickly evaluated using the proposed ANN.
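
A minimal sketch of the inverse mapping described above (synthetic data and a toy relation, not the authors' cube-test results), with design factors as inputs and ingredient proportions as outputs:

```python
# Feed-forward network: design factors in, ingredient proportions out.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
# Inputs: [grade (MPa), cement type code, max aggregate size (mm), grading code]
X = rng.uniform([20, 0, 10, 0], [50, 2, 40, 3], size=(200, 4))
# Outputs: [cement, fine agg., coarse agg., water] in kg per m^3 (toy relation)
Y = np.column_stack([
    250 + 5 * X[:, 0],                 # more cement for higher grade
    700 - 4 * X[:, 2],
    1000 + 6 * X[:, 2],
    190 - 1.2 * X[:, 0],
]) + rng.normal(0, 5, (200, 4))

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, Y)
print(net.predict([[35, 1, 20, 2]]))   # proportions for an M35-style mix
```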

Keywords: aggregate proportions, artificial neural network, concrete grade, concrete mix design

Procedia PDF Downloads 380
749 The Evaluation of Complete Blood Cell Count-Based Inflammatory Markers in Pediatric Obesity and Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is defined as a severe chronic disease characterized by a low-grade inflammatory state. Therefore, inflammatory markers have gained utmost importance in the evaluation of obesity and metabolic syndrome (MetS), a condition characterized by central obesity, elevated blood pressure, increased fasting blood glucose, and elevated triglycerides or reduced high-density lipoprotein cholesterol (HDL-C) values. Several inflammatory markers based upon the complete blood cell count (CBC) are available. In this study, it was asked which inflammatory marker is best for evaluating the differences between various obesity groups. 514 pediatric individuals were recruited: 132 children with MetS, 155 morbidly obese (MO), 90 obese (OB), 38 overweight (OW), and 99 children with normal BMI (N-BMI) were included in the scope of this study. Obesity groups were constituted using the age- and sex-dependent body mass index (BMI) percentiles tabulated by the World Health Organization. MetS components were determined in order to identify children with MetS. CBCs were determined using an automated hematology analyzer, and HDL-C analysis was performed. Using CBC parameters and HDL-C values, ratio markers of inflammation were calculated, covering the neutrophil-to-lymphocyte ratio (NLR), derived neutrophil-to-lymphocyte ratio (dNLR), platelet-to-lymphocyte ratio (PLR), lymphocyte-to-monocyte ratio (LMR), and monocyte-to-HDL-C ratio (MHR). Statistical analyses were performed, with p < 0.05 considered statistically significant. There was no statistically significant difference among the groups in terms of platelet count, neutrophil count, lymphocyte count, monocyte count, or NLR. PLR differed significantly between OW and N-BMI as well as MetS. MHR differed significantly between the MetS group and the N-BMI, OB, and MO groups. HDL-C values differed between MetS and the N-BMI, OW, OB, and MO groups. MHR was the ratio with the best performance among the CBC-based inflammatory markers. On the other hand, when MHR was compared to HDL-C alone, HDL-C gave much more valuable information; therefore, this parameter still keeps its value from the diagnostic point of view. Our results suggest that MHR can serve as an inflammatory marker in the evaluation of pediatric MetS, but its predictive value was not superior to that of HDL-C in the evaluation of obesity.
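
For reference, a minimal sketch of the ratio markers defined above (one illustrative CBC, not patient data; the dNLR here uses the common definition neutrophils / (leukocytes - neutrophils), stated as an assumption):

```python
# CBC-based inflammatory ratio markers.
def cbc_ratios(wbc, neut, lymph, mono, platelets, hdl_c):
    """Counts in 10^3/uL, HDL-C in mg/dL."""
    return {
        "NLR":  neut / lymph,
        "dNLR": neut / (wbc - neut),
        "PLR":  platelets / lymph,
        "LMR":  lymph / mono,
        "MHR":  mono / hdl_c,
    }

print(cbc_ratios(wbc=7.5, neut=4.2, lymph=2.4, mono=0.6, platelets=280, hdl_c=45))
```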

Keywords: children, complete blood cell count, high density lipoprotein cholesterol, metabolic syndrome, obesity

Procedia PDF Downloads 119
748 The Relationship Between Hourly Compensation and the Unemployment Rate Using Panel Data Regression Analysis

Authors: S. K. Ashiquer Rahman

Abstract:

The paper concentrates on the relationship between hourly compensation and the unemployment rate. These are two of the most important indicators for a nation: they are not merely statistics but have profound effects on individuals, families, and the economy, and they are inversely related to one another. The unemployment rate will probably decline as hourly compensation in manufacturing rises, since higher compensation can result in lower unemployment rates and increased job prospects; increased hourly compensation in the manufacturing sector could thus have a favorable effect on job mobility. Nevertheless, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, we use panel data regression models to evaluate the expected link between hourly compensation and the unemployment rate and to determine the effect of hourly compensation on the unemployment rate. We estimate the fixed effects model, evaluate the error components model, and determine which model (the FEM or ECM) is better by pooling all 60 observations. We then analyze and review the data by comparing three countries (the United States, Canada, and the United Kingdom) using panel data regression models. Finally, we provide results, analysis, and a summary of the extensive research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful information to help the government and the academic community use an econometric and social approach to lessen the effect of hourly compensation on the unemployment rate.
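
A minimal sketch of the estimation described above using the linearmodels package (a toy balanced panel of 3 countries x 20 years = 60 observations, not the paper's dataset):

```python
# Fixed-effects and random-effects panel regressions on a toy panel.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(7)
countries = ["US", "CA", "UK"]
years = range(2000, 2020)                        # 3 countries x 20 years = 60 obs
idx = pd.MultiIndex.from_product([countries, years], names=["country", "year"])
df = pd.DataFrame(index=idx)
df["hourly_comp"] = rng.uniform(20, 45, len(df))
df["unemployment"] = 9 - 0.1 * df["hourly_comp"] + rng.normal(0, 0.5, len(df))

fe = PanelOLS.from_formula("unemployment ~ hourly_comp + EntityEffects", df).fit()
re = RandomEffects.from_formula("unemployment ~ 1 + hourly_comp", df).fit()
print(fe.params, re.params)
```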

Keywords: hourly compensation, unemployment rate, panel data regression models, dummy variables, random effects model, fixed effects model, linear regression model

Procedia PDF Downloads 67
747 Computed Tomography Myocardial Perfusion on a Patient with Hypertrophic Cardiomyopathy

Authors: Jitendra Pratap, Daphne Prybyszcuk, Luke Elliott, Arnold Ng

Abstract:

Introduction: Coronary CT angiography is a non-invasive imaging technique for the assessment of coronary artery disease and has high sensitivity and a high negative predictive value. However, the correlation between the degree of stenosis on CT and the hemodynamic significance of the obstruction is poor. The assessment of myocardial perfusion has mostly been undertaken by nuclear medicine (SPECT), but it is now possible to perform stress myocardial CT perfusion (CTP) scans quickly and effectively using CT scanners with high temporal resolution. Myocardial CTP is in many ways similar to the neuro perfusion imaging technique: radiopaque iodinated contrast is injected intravenously, transits the pulmonary and cardiac structures, and then perfuses through the coronary arteries into the myocardium. On the Siemens Force CT scanner, a myocardial perfusion scan is performed using a dynamic axial acquisition, in which the scanner shuttles in and out every 1-3 seconds (heart rate dependent) to cover the heart in the z plane; this is usually performed over 38 seconds. Report: A CT myocardial perfusion scan can be utilised to complement the findings of a CT coronary angiogram. Implementing a CT myocardial perfusion study as part of a routine CT coronary angiogram procedure provides a ‘one stop shop’ for the diagnosis of coronary artery disease. This case study demonstrates that although the CT coronary angiogram was within normal limits, the perfusion scan provided additional, clinically significant information about the haemodynamics within the myocardium of a patient with hypertrophic obstructive cardiomyopathy (HOCM). This negated the need for further diagnostic studies such as cardiac echo or nuclear medicine stress tests. Conclusion: CT coronary angiography with adenosine stress myocardial CTP was utilised in this case to specifically exclude coronary artery disease in conjunction with assessing perfusion within the hypertrophic myocardium. Adenosine stress myocardial CTP demonstrated the reduced myocardial blood flow within the hypertrophic myocardium, while the coronary arteries did not show any obstructive disease. A CT coronary angiogram protocol that incorporates myocardial perfusion can provide diagnostic information on the haemodynamic significance of any coronary artery stenosis and has the potential to be a ‘one stop shop’ for cardiac imaging.

Keywords: CT, cardiac, myocardium, perfusion

Procedia PDF Downloads 116
746 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine-learning-based computer-aided diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires large, high-quality, labeled training image datasets. The training images may come from different sources and be acquired from different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on the original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs, while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the seam carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified, off-the-shelf CXR dataset composed of the radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in the lung fields, the local structures, and the global visual effect. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in any medical imaging application.
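
A minimal sketch of the normalization half of the pipeline with scikit-image (stand-in images, not CXR data; seam-based resizing is only noted, since seam carving implementations vary across scikit-image versions):

```python
# Histogram matching: normalize a digital CXR to the intensity distribution
# of a digitized reference radiograph.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(8)
digital_cxr = rng.normal(0.6, 0.1, (256, 256)).clip(0, 1)    # stand-in images
digitized_ref = rng.normal(0.4, 0.2, (256, 256)).clip(0, 1)

normalized = match_histograms(digital_cxr, digitized_ref)
print(normalized.mean(), digitized_ref.mean())                # distributions now agree
```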

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 206