Search results for: Medical imaging & processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2296

1606 Design of Compliant Mechanism Based Microgripper with Three Finger Using Topology Optimization

Authors: R. Bharanidaran, B. T. Ramesh

Abstract:

High precision of motion is required to manipulate micro objects in precision industries, for example in micro assembly and cell manipulation. Precise manipulation depends on an appropriate mechanism design for micro devices such as microgrippers, and a compliant mechanism is the better option for achieving highly precise and controlled motion. This article presents a method for designing a compliant three-fingered microgripper suitable for holding asymmetric objects. Topology optimization, a systematic design technique, is implemented in this work to arrive at a topologically optimized mechanism that performs the required micro motion of the gripper. Topology optimization, however, tends to generate meaningless regions such as node-to-node connectivity and a staircase effect at the boundaries, so the design must be post-processed to make it manufacturable. To reduce the effort of the post-processing stage and to preserve the edges of the image, a cubic spline interpolation technique is introduced in the MATLAB program. The structural performance of the resulting mechanism design is verified with finite element method (FEM) software, and the microgripper structure is further examined for its fatigue life and vibration characteristics.
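For illustration, the boundary-smoothing step can be sketched as follows; the abstract's implementation is in MATLAB, so this Python snippet with made-up boundary points is only a hedged approximation of the idea, not the authors' code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical jagged boundary extracted from a topology-optimized density map,
# given as (x, y) points that exhibit a staircase effect.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 3.0])

# Parameterize the boundary by cumulative arc length so the spline can follow
# a curve that is not a function of x alone.
t = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))

# Fit one cubic spline per coordinate and resample densely to obtain a smooth,
# manufacturable edge that preserves the overall shape of the optimized design.
sx, sy = CubicSpline(t, x), CubicSpline(t, y)
t_fine = np.linspace(t[0], t[-1], 200)
smooth_boundary = np.column_stack((sx(t_fine), sy(t_fine)))
print(smooth_boundary[:5])
```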

Keywords: Compliant mechanism, Cubic spline interpolation, FEM, Topology optimization.

1605 CT Reconstruction from a Limited Number of X-Ray Projections

Authors: Tao Quang Bang, Insu Jeon

Abstract:

X-ray computed tomography (CT) is a well-established visualization technique in medicine and nondestructive testing. However, since CT scanning requires sampling radiographic projections from many viewing angles, common CT systems with mechanically moving parts are too slow for dynamic imaging, for instance of multiphase flows or live animals. A large number of X-ray projections is needed to reconstruct CT images, so collecting and processing the projection data is time consuming and increases the burden on the patient. To address this problem, we propose a method for tomographic reconstruction of a sample from a limited number of X-ray projections using linear interpolation. In simulation, reconstruction from an experimental X-ray CT scan of an aluminum phantom follows two steps: the X-ray projections are first interpolated using the linear interpolation method, and the interpolated data are then used for CT reconstruction based on the Ordered Subsets Expectation Maximization (OSEM) method.
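A minimal sketch of the projection-interpolation step on a synthetic sinogram (the angle counts are illustrative and the OSEM reconstruction itself is omitted):

```python
import numpy as np

# Hypothetical sinogram: rows = detector bins, columns = measured view angles.
n_bins, n_measured = 128, 30
measured_angles = np.linspace(0.0, 180.0, n_measured, endpoint=False)
sinogram = np.random.rand(n_bins, n_measured)  # stands in for real projection data

# Target: a denser set of view angles for the iterative reconstruction.
target_angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Linearly interpolate each detector bin across angle to synthesize the
# missing projections before running a method such as OSEM.
interp_sinogram = np.vstack([
    np.interp(target_angles, measured_angles, sinogram[b, :])
    for b in range(n_bins)
])
print(interp_sinogram.shape)  # (128, 180)
```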

Keywords: CT reconstruction, X-ray projections, Interpolation technique, OSEM

1604 AI-Driven Cloud Security: Proactive Defense Against Evolving Cyber Threats

Authors: Ashly Joseph

Abstract:

Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud offers a multitude of advantages, including scalability, flexibility, and cost-effectiveness, making it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the data, reputation, and financial stability of enterprises. Conventional security methods can become inadequate when confronted with increasingly intricate and dynamic threats. Artificial Intelligence (AI) technologies have the capacity to significantly transform cloud security through their ability to promptly identify and thwart attacks, adjust to emerging risks, and offer intelligent insights for proactive security actions. The objective of this research is to investigate the utilization of AI technologies in augmenting the security measures within cloud computing systems. This paper aims to offer insights and recommendations for businesses seeking to protect their cloud-based assets by analyzing the present state of cloud security, the capabilities of AI, and the possible advantages and obstacles associated with integrating AI into cloud security policies.

Keywords: Machine Learning, Natural Language Processing, Denial-of-Service attacks, Sentiment Analysis, Cloud computing.

1603 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system avoids sampling a large amount of redundant information and reduces the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparsifying transforms and measurement matrices is carried out. An FECG compression and transmission mode based on the rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction with this transmission mode increases by nearly 10%, while the runtime is reduced by about 30%.
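A hedged sketch of the sensing side, assuming a stand-in FECG trace; the rbio5.5 wavelet and Bernoulli matrix follow the abstract, but the signal, sizes, and compression ratio are illustrative, and the JBMOLS reconstruction is not reproduced here:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)

# Stand-in FECG segment (a real signal would come from a WBAN sensor node).
n = 512
fecg = np.sin(2 * np.pi * 2.5 * np.arange(n) / 250.0) + 0.05 * rng.standard_normal(n)

# Sparsifying transform used at the reconstruction side: rbio5.5 wavelet coefficients.
coeffs = pywt.wavedec(fecg, 'rbio5.5', level=4)
sparse_vec, slices = pywt.coeffs_to_array(coeffs)

# Bernoulli (+/-1) measurement matrix with an illustrative compression ratio m/n.
m = n // 4
phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# Compressed measurements transmitted by the sensor node.
y = phi @ fecg
print(sparse_vec.shape, y.shape)
```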

Keywords: telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal

1602 Investigation of Water Transport Dynamics in Polymer Electrolyte Membrane Fuel Cells Based on a Gas Diffusion Media Layers

Authors: Saad S. Alrwashdeh, Henning Markötter, Handri Ammari, Jan Haußmann, Tobias Arlt, Joachim Scholta, Ingo Manke

Abstract:

In this investigation, synchrotron X-ray imaging is used to study water transport inside polymer electrolyte membrane fuel cells. Two measurement techniques, in-situ radiography and quasi-in-situ tomography, are combined in order to reveal the relationship between the structures of the microporous layers (MPLs) and gas diffusion layers (GDLs), the operating temperature, and the water flow. The developed cell is equipped with a thick GDL and a high-back-pressure MPL. These modifications are found to strongly influence the overall water transport in the whole adjacent gas diffusion medium (GDM).

Keywords: Polymer electrolyte membrane fuel cell, microporous layer, water transport, radiography, tomography.

1601 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences

Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng

Abstract:

Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events. In summary, we have explored a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting).
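The speed-based trajectory test can be illustrated roughly as below; the trajectories, thresholds, and decision rule are assumptions for demonstration, not the paper's actual classifier:

```python
import numpy as np

def classify_interaction(traj_a, traj_b, fps=30.0,
                         dist_thresh=50.0, speed_thresh=5.0):
    """Toy interaction test on two (N, 2) pixel trajectories.

    Thresholds are illustrative, not the values used in the paper.
    """
    # Finite-difference speed (pixels per second) for each object.
    speed_a = np.linalg.norm(np.diff(traj_a, axis=0), axis=1) * fps
    speed_b = np.linalg.norm(np.diff(traj_b, axis=0), axis=1) * fps
    # Inter-object distance over time.
    dist = np.linalg.norm(traj_a - traj_b, axis=1)
    # Declare an interaction if the objects come close while at least one is moving.
    close = dist[1:] < dist_thresh
    moving = np.maximum(speed_a, speed_b) > speed_thresh
    return bool(np.any(close & moving))

# Synthetic example: two people walking toward each other.
t = np.arange(60)[:, None]
a = np.hstack([t * 4.0, np.full((60, 1), 100.0)])
b = np.hstack([400.0 - t * 4.0, np.full((60, 1), 100.0)])
print(classify_interaction(a, b))  # True
```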

Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.

1600 A System for Analyzing and Eliciting Public Grievances Using Cache Enabled Big Data

Authors: P. Kaladevi, N. Giridharan

Abstract:

The system for analyzing and eliciting public grievances receives and processes all sorts of complaints from the public and responds to users. As the number of complaints grows, the complaint data become big data that are difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process them. A cache is added to the system to provide immediate responses and timely action using big data analytics; the cache-enabled big data improves the response time of the system. The unstructured data provided by users are efficiently handled through the MapReduce algorithm, and complaints are processed in the order of the hierarchy of authority. The drawbacks of the traditional database system used in the existing system are addressed by using a cache-enabled Hadoop Distributed File System. Because MapReduce computations can leak sensitive data, we propose adding noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If a complaint is not processed within an ample time, it is automatically forwarded to a higher authority, which guarantees that processing takes place. A copy of the filed complaint is sent as a digitally signed PDF document to the user's e-mail address and serves as proof. The system reports serve as essential data when making important decisions based on legislation.
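A toy sketch of the map/reduce counting step with noise added to the reduce output; the records, the Laplace noise, and its scale are illustrative assumptions rather than the system's actual implementation:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)

# Toy complaint records; a real deployment would read these from HDFS.
complaints = [
    {"dept": "water", "text": "no supply"},
    {"dept": "roads", "text": "pothole"},
    {"dept": "water", "text": "leak"},
]

# Map phase: emit (department, 1) for every complaint.
mapped = [(c["dept"], 1) for c in complaints]

# Shuffle + reduce phase: sum the counts per department.
counts = defaultdict(int)
for dept, one in mapped:
    counts[dept] += one

# Perturb the reduce output with Laplace noise so exact sensitive counts are
# not exposed; the noise scale here is an illustrative assumption.
noisy_counts = {dept: n + rng.laplace(scale=1.0) for dept, n in counts.items()}
print(noisy_counts)
```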

Keywords: Big Data, Hadoop, HDFS, Caching, MapReduce, web personalization, e-governance.

1599 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by rank reduction, in which the rank of multi-dimensional data is reduced from higher to lower. For two-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, outer product expansion, which extends SVD, was proposed and implemented for multi-dimensional data and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. We therefore proposed an alternative method, the third-order orthogonal tensor product expansion (3-OTPE), which uses the power method instead of a nonlinear optimization method to reduce computing time. Around the same time, the group of B. D. Lathauwer proposed the higher-order SVD (HOSVD), which also extends SVD to multi-dimensional data. 3-OTPE and HOSVD are thus similar approaches to the rank reduction of multi-dimensional data; some of their results coincide while others differ slightly. In this paper, we compare 3-OTPE with HOSVD in terms of calculation accuracy and computing time, and clarify the difference between the two methods.
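A compact sketch of HOSVD-style rank reduction via SVDs of the mode unfoldings (illustrative sizes and truncation rank; not the authors' 3-OTPE code):

```python
import numpy as np

def hosvd3(tensor):
    """Higher-Order SVD of a 3rd-order tensor via SVDs of its mode unfoldings."""
    factors = []
    for mode in range(3):
        # Unfold along `mode`: move that axis first, then flatten the rest.
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u)
    # Core tensor: project onto the factor matrices along every mode.
    core = np.einsum('ijk,ia,jb,kc->abc', tensor,
                     factors[0], factors[1], factors[2])
    return core, factors

# Rank-reduce a random 3-way array by truncating the factor matrices.
x = np.random.rand(8, 9, 10)
core, (u1, u2, u3) = hosvd3(x)
r = 3  # truncation rank per mode (illustrative)
approx = np.einsum('abc,ia,jb,kc->ijk', core[:r, :r, :r],
                   u1[:, :r], u2[:, :r], u3[:, :r])
print(np.linalg.norm(x - approx) / np.linalg.norm(x))  # relative approximation error
```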

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

1598 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures

Authors: C. Lodato, S. Lopes

Abstract:

The image segmentation method described in this paper has been developed as a pre-processing stage for methodologies and tools for content-based video/image indexing and retrieval. It solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features needed for both indexing and retrieval. The segmentation algorithm is based on the cooperation among an optical flow evaluation method, edge detection, and region growing procedures. The optical flow estimator belongs to the class of differential methods; it can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieves good results in the presence of noise without a filtering pre-processing stage, and includes a specialised model for moving object detection. The first task of the presented method exploits cues from motion analysis to detect moving areas. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All the tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear on variously textured backgrounds.
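The seeded region growing step can be sketched as follows on a synthetic frame; the 4-connectivity and intensity tolerance are illustrative choices, not the paper's parameters:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` by adding 4-connected pixels whose intensity
    stays within `tol` of the running region mean (illustrative tolerance)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - region_sum / region_n) <= tol:
                    mask[ny, nx] = True
                    region_sum += float(image[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return mask

# Synthetic frame: a bright square object on a dark background.
frame = np.zeros((64, 64)) + 20.0
frame[20:40, 20:40] = 200.0
print(region_grow(frame, seed=(30, 30)).sum())  # ~400 pixels in the grown region
```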

Keywords: Image Segmentation, Motion Detection, Object Extraction, Optical Flow

1597 An Experimentally Validated Thermo- Mechanical Finite Element Model for Friction Stir Welding in Carbon Steels

Authors: A. H. Kheireddine, A. A. Khalil, A. H. Ammouri, G. T. Kridli, R. F. Hamade

Abstract:

Solidification cracking and hydrogen cracking are defects generated in the fusion welding of ultrahigh-carbon steels. Friction stir welding (FSW) of such steels, being a solid-state technique, has been demonstrated to alleviate the problems encountered in traditional welding. FSW involves several process parameters that must be carefully defined before processing, including but not restricted to tool feed, tool RPM, tool geometry, and tool tilt angle. These parameters are key to avoiding wormholes and voids behind the tool and to achieving a defect-free weld; more importantly, they directly affect the microstructure of the weld and hence its final mechanical properties. To this end, a 3D thermo-mechanical finite element (FE) model was developed in DEFORM 3D to simulate FSW of carbon steel. At points of interest in the joint, the history of critical state variables such as temperature, stress, and strain rate is tracked. Typical results include the ability to simulate the different weld zones. Simulation predictions were successfully compared with experimental FSW tests. It is believed that such a numerical model can be used to optimize FSW processing parameters in favor of a defect-free weld with better mechanical properties.

Keywords: Carbon Steels, DEFORM 3D, FEM, Friction stir welding.

1596 Challenges and Professional Perspectives for Pedagogy Undergraduates with Specific Learning Disability: A Greek Case Study

Authors: Tatiani D. Mousoura

Abstract:

Specific learning disability (SLD) in higher education has been only partially explored in Greece so far, and opinions on professional perspectives for university students with SLD are scarcely encountered in Greek research. The present research examines perceptions of the hidden character of SLD, the university policy towards it, and the professional perspectives that result from this policy. The study uses the case of a Greek tertiary Pedagogical Education Department (Early Childhood Education). Via mixed methods, data were collected from different groups of people in the Pedagogical Department (students with and without SLD, academic staff, and administration staff), which allows triangulation of the findings. Qualitative methods include ten interviews with students with SLD, 15 interviews with academic staff, and 60 hours of observation of the students with SLD. Quantitative methods include 165 questionnaires completed by third- and fourth-year students and five questionnaires completed by the administration staff. Thematic analysis of the interview data and descriptive statistics on the questionnaire data were applied to process the results. The use of medical terms to define and understand SLD was common in the student cohort, regardless of whether the students had an SLD diagnosis. However, this medical-model approach is far more dominant in the group of students without SLD, the majority of whom hold misconceptions at the definitional level. The academic staff seem to lean towards a social approach to SLD; according to them, diagnoses may lead to social exclusion. The Pedagogical Department generally endorses the principles of inclusion and complies with the provision of oral exams for students with SLD. Nevertheless, in practice there seems to be a lack of regular academic support for these students, and when such support does exist, it is only through individual initiatives. With regard to their prospective profession, students with SLD can draw on their personal experience as well as their empathy, which appear to be unique assets, in comparison with other educators, when it comes to teaching students in the future. In the Department of Pedagogy, provision for SLD remains sporadic, although the vision of an inclusive department does exist. Based on their studies and their experience, pedagogy students with SLD claim that they have an experiential, internalized advantage for their future career as educators.

Keywords: Specific learning disability, dyslexia, pedagogy department, inclusion, professional role of SLDed educators, higher education, university policy.

1595 EMOES: Eye Motion and Ocular Expression Simulator

Authors: Nicoletta Adamo-Villani, Gerardo Beni, Jeremy White

Abstract:

We introduce a new interactive 3D simulation system of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (research and education on ophthalmic, neurological, and muscular pathologies); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). The system comprises: (1) a physiologically accurate parameterized 3D model of the eyes, eyelids, and eyebrow regions; and (2) a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for the applications in (1), (2), and (3) above. The 3D eye simulation system, created using state-of-the-art computer animation technology and 'optimized' for an interactive and web-deliverable platform, is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.

Keywords: 3D animation, HCI, medical simulation, ocular motion and expression.

1594 3D Simulator of Ocular Motion and Expression

Authors: Nicoletta Adamo-Villani, Gerardo Beni, Jeremy White

Abstract:

We introduce a new interactive 3D simulator of ocular motion and expressions suitable for: (1) character animation applications in game design, film production, HCI (Human Computer Interface), conversational animated agents, and virtual reality; (2) medical applications (research and education on ophthalmic, neurological, and muscular pathologies); and (3) real-time simulation of unconscious cognitive and emotional responses (for use, e.g., in psychological research). Using state-of-the-art computer animation technology, we have modeled and rigged a physiologically accurate 3D model of the eyes, eyelids, and eyebrow regions and 'optimized' it for an interactive and web-deliverable platform. In addition, we have realized a prototype device for real-time control of eye motions and expressions, including unconsciously produced expressions, for the applications in (1), (2), and (3) above. The 3D simulator of eye motion and ocular expression is, to our knowledge, the most advanced and realistic available so far for applications in character animation and medical pedagogy.

Keywords: 3D animation, HCI, medical simulation, ocular motion and expression.

1593 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available worldwide has increased rapidly, accompanied by the emergence of concepts such as big data and the Internet of Things, which have furnished suitable solutions for making data available all over the world. However, managing this massive amount of data remains a challenge due to the large variety of data types and their distribution. Consequently, locating a required file, particularly on the first trial, has become a difficult task because of the great similarity of names among different files distributed on the web, which negatively affects both the accuracy and the speed of search. This work presents a method that uses electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyzes the mind-wave signals of different people, extracts their most appropriate features using a multi-objective metaheuristic algorithm, and then classifies them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as the first choice for the user.

Keywords: Artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization.

1592 Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross correlation operation. This idea has previously been applied successfully to faster detection of specific data (faces, objects, patterns, and codes) using neural algorithms. Here, instead of performing cross correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross correlation is performed between the colored masks and the background. Furthermore, the approach is developed to reduce the computation steps required by the painting operation. The principle of the divide-and-conquer strategy is applied through background decomposition: each background is divided into small sub-backgrounds, and each sub-background is processed separately by a single faster painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of faster painting algorithms. In contrast to using only the faster painting algorithm, the speed-up ratio increases with the size of the background when the faster painting algorithm is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.
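A minimal sketch of frequency-domain cross correlation between a mask and a background, assuming synthetic arrays; the sizes and padding scheme are illustrative:

```python
import numpy as np

def cross_correlate_fft(background, mask):
    """2D cross-correlation computed in the frequency domain.

    Multiplying the FFT of the background by the conjugate FFT of the
    (zero-padded) mask corresponds to sliding the mask over the image,
    but costs O(N log N) instead of O(N^2) multiply-accumulates.
    """
    h, w = background.shape
    mh, mw = mask.shape
    # Zero-pad both arrays to the full linear-correlation size.
    fb = np.fft.rfft2(background, s=(h + mh - 1, w + mw - 1))
    fm = np.fft.rfft2(mask, s=(h + mh - 1, w + mw - 1))
    return np.fft.irfft2(fb * np.conj(fm), s=(h + mh - 1, w + mw - 1))

# Illustrative "painting": correlate a small colored mask with a background tile.
background = np.random.rand(256, 256)
mask = np.ones((8, 8)) * 0.5
result = cross_correlate_fft(background, mask)
print(result.shape)  # (263, 263)
```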

Keywords: Fast Painting, Cross Correlation, Frequency Domain, Parallel Processing

1591 An Empirical Study of Taiwan's Hospital Foundation Investment in Corporate Social Responsibility and Financial Performance

Authors: Hsiu-Pi Lin, Wen-Chen Huang, Hui-Fang Chen, Yan-Pin Ke

Abstract:

Corporate Social Responsibility (CSR) has become a new trend in business governance. Few research studies on CSR have been published in Taiwanese academia, especially for medical settings, so we were interested in probing the relationship between CSR and financial performance in medical settings in Taiwan. The results illustrate that: (1) there is a time-delay effect, with a lag between CSR effort and its performance in the hospital foundation; (2) investment in the internal domains of CSR helps to improve employee productivity in the hospital foundation; and (3) investment in the external domains of CSR helps to improve the financial performance of the hospital foundation. This study reviews CSR in the medical industry in Taiwan and the relationship between CSR and financial performance, and discusses possible implications of the results so that the CSR concept can be translated into a business strategy for organization managers.

Keywords: Corporate Social Responsibility (CSR), financial performance, hospital foundation.

1590 Device for 3D Analysis of Basic Movements of the Lower Extremity

Authors: Jiménez Villanueva Mayra Alejandra, Ortíz Casallas Diana Carolina, Luengas Contreras Lely Adriana

Abstract:

This document details the process of developing a wireless device that captures the basic movements of the foot (plantar flexion, dorsal flexion, abduction, and adduction) and the knee (flexion). It implements a motion capture system using hardware based on optical fiber sensors, chosen for their advantages in terms of range, noise immunity, and speed of data transmission and reception. The operating principle of this system is the detection and transmission of joint movement by mechanical elements and its measurement by optical (in this case infrared) ones. Visual Basic software is used for the reception, analysis, and processing of the signals acquired by the device, generating a 3D graphical representation of each movement in real time. The result is a boot in charge of capturing the movement, a transmission module (implementing XBee technology), and a receiver module that forwards the information to the PC for processing. The main aim of this device is to support fields such as bioengineering and medicine by helping to improve quality of life and movement analysis.

Keywords: Abduction, adduction, A/D converter, Autodesk 3DMax, infrared diode, driver, extension, flexion, infrared LEDs, interface, modeling, OpenGL, optical fiber, USB CDC (Communications Device Class), virtual reality.

1589 Fault Detection and Diagnosis of the Broken Bar Problem in Induction Motors Based on Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran

Authors: M. Ahmadi, M. Kafil, H. Ebrahimi

Abstract:

Nowadays, induction motors play a significant role in industry, and condition monitoring (CM) of this equipment has gained remarkable importance in recent years due to huge production losses, substantial imposed costs, and increases in vulnerability, risk, and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important CM techniques and can be used for detecting broken rotor bars. Signal processing methods such as the fast Fourier transform (FFT), wavelet transform, and empirical mode decomposition (EMD) are used for analyzing the MCSA output data. In this study, these signal processing methods are used to detect the broken bar problem in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform, an index for fault detection, CF, is introduced as the variation of the maximum relative to the mean of the wavelet transform coefficients; we find that in the broken-bar condition the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and we find that when motor bars become broken, the energy of the IMFs increases.
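A rough sketch of the wavelet-based CF index on a synthetic stator current; the wavelet family, decomposition level, and test signals are assumptions, and the EMD branch is omitted:

```python
import numpy as np
import pywt  # PyWavelets

def cf_index(current, wavelet='db4', level=4):
    """Illustrative fault index from the stator-current signal: the maximum of
    the detail wavelet coefficients relative to their mean absolute value.
    The wavelet family and level are assumptions, not the paper's settings."""
    coeffs = pywt.wavedec(current, wavelet, level=level)
    detail = np.concatenate(coeffs[1:])          # all detail coefficients
    return np.max(np.abs(detail)) / np.mean(np.abs(detail))

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)
# Broken-bar currents show sidebands around the supply frequency.
faulty = healthy + 0.05 * np.sin(2 * np.pi * 46 * t) + 0.05 * np.sin(2 * np.pi * 54 * t)
print(cf_index(healthy), cf_index(faulty))
```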

Keywords: Broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform.

1588 Exploring the Application of Knowledge Management Factors in Esfahan University's Medical College

Authors: Alireza Shirvani, Shadi Ebrahimi Mehrabani

Abstract:

In this competitive age, one of the key tools of most successful organizations is knowledge management. Today some organizations measure their current knowledge and use it as an indicator for rating the organization in their reports. Since universities and colleges of medical science play a great role in the public health of societies, their access to the newest scientific research and the establishment of organizational knowledge management systems are very important. In order to explore the application of knowledge management factors, a national study was undertaken. The main purpose of this study was to determine the extent to which knowledge management factors are applied and to identify ways to establish a more complete knowledge management system in Esfahan University's Medical College (EUMC). Esfahan is the second largest city after Tehran, the capital of Iran, and the EUMC is the biggest medical college in Esfahan. To rate the application of knowledge management, this study uses a quantitative research methodology based on the Probst, Raub, and Romhardt model of knowledge management. A group of 267 faculty members and staff of the EUMC were surveyed via questionnaire. The findings showed that the application of knowledge management factors in EUMC is lower than average. As a result, interviews with ten faculty members were conducted to find guidelines for establishing a more complete knowledge management system in EUMC.

Keywords: Knowledge, knowledge management, knowledge management factors.

1587 Implementing a Visual Servoing System for Robot Controlling

Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari

Abstract:

Nowadays, with the emergence of new applications such as robot control through image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling robots. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing aimed at robot control. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization, and back-propagation neural networks. We tested the proposed algorithm with the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent; on noisy images the algorithm achieved a correct recognition rate of 97.92 percent.
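One of the tested classifiers, k-nearest neighbor, can be sketched on stand-in feature vectors as follows (the features and class layout are synthetic, not gesture data from the paper):

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbour vote over gesture feature vectors."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, votes = np.unique(nearest, return_counts=True)
    return labels[np.argmax(votes)]

# Toy feature vectors standing in for spatio-temporal volume descriptors.
rng = np.random.default_rng(1)
train_x = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(5, 1, (20, 16))])
train_y = np.array([0] * 20 + [1] * 20)   # two hand-gesture classes
query = rng.normal(5, 1, 16)              # unseen gesture
print(knn_predict(train_x, train_y, query))  # expected: 1
```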

Keywords: Back propagation neural network, Feature vector, Hand gesture recognition, k-Nearest Neighbor, Learning vector quantization neural network, Robot control, Spatio-temporal volume, Visual servoing

1586 Intelligent Heart Disease Prediction System Using CANFIS and Genetic Algorithm

Authors: Latha Parthiban, R. Subramanian

Abstract:

Heart disease (HD) is a major cause of morbidity and mortality in modern society. Medical diagnosis is an important but complicated task that should be performed accurately and efficiently, and its automation would be very useful. Unfortunately, not all doctors are equally skilled in every subspecialty, and in many places they are a scarce resource. A system for automated medical diagnosis would enhance medical care and reduce costs. In this paper, a new approach based on a coactive neuro-fuzzy inference system (CANFIS) is presented for the prediction of heart disease. The proposed CANFIS model combines the adaptive capabilities of neural networks and the qualitative approach of fuzzy logic, and is then integrated with a genetic algorithm to diagnose the presence of the disease. The performance of the CANFIS model was evaluated in terms of training performance and classification accuracy, and the results showed that the proposed model has great potential in predicting heart disease.

Keywords: CANFIS, genetic algorithms, heart disease, membership function.

1585 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
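A hedged sketch of the sliding-window feature extraction idea on a stand-in EEG trace; the window length, step, and feature set are illustrative, not the Training Builder's configuration:

```python
import numpy as np

def sliding_window_features(eeg, fs, win_sec=2.0, step_sec=1.0):
    """Extract a few simple per-window features (mean, std, line length,
    theta-band power ratio) from a single-channel EEG trace.
    Window length, step, and feature set are illustrative choices."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    feats = []
    for start in range(0, len(eeg) - win + 1, step):
        seg = eeg[start:start + win]
        spectrum = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(win, d=1 / fs)
        theta = spectrum[(freqs >= 4) & (freqs < 8)].sum()
        total = spectrum[freqs < 40].sum() + 1e-12
        feats.append([seg.mean(), seg.std(),
                      np.abs(np.diff(seg)).sum(),   # line length
                      theta / total])
    return np.asarray(feats)

fs = 256
eeg = np.random.randn(fs * 10)                 # stand-in for a 10 s EEG record
X = sliding_window_features(eeg, fs)
print(X.shape)                                 # (9, 4) feature matrix for a classifier
```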

Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.

1584 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases that affect humankind, exposing millions of people to serious health risks every year. The diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling by implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system that has an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision; this imprecise knowledge is managed by using interval Type-II fuzzy logic. The results show that interval Type-II fuzzy logic can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of Type-II fuzzy systems for modeling uncertainty.

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

1583 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script

Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim

Abstract:

A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants; in particular, large-diameter butterfly valves find immense application in hydro power plants to control the fluid flow. Given the cost and size constraints of running a laboratory setup, large-diameter valves are mostly studied by computational methods, which offer an effective and inexpensive solution. For the fluid and structural analyses, CFD and FEM software are used, respectively, to perform the large-scale valve analyses. To carry out such analyses of a butterfly valve, the CAD model must be recreated and meshed in conventional software for each set of valve dimensions, which is a time-consuming process. To overcome this issue, a Python script was created to generate the complete pre-processing setup automatically in the Salome software. Specifying the dimensions of the model directly in the Python code makes the running time comparatively lower and provides an easier way to perform the analysis of the valve. Hence, in this paper, an attempt is made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using a Python script in the pre-processing software, and the results are presented.

Keywords: Butterfly valve, fluid-structure interaction, automatic CFD analysis, flow coefficient.

1582 Brain MRI Segmentation and Lesions Detection by EM Algorithm

Authors: Mounira Rouaïnia, Mohamed Salah Medjram, Noureddine Doghmane

Abstract:

In multiple sclerosis, pathological changes in the brain result in deviations in signal intensity on magnetic resonance images (MRI). Quantitative analysis of these changes and their correlation with clinical findings provides important information for diagnosis; this constitutes the objective of our work, for which a new approach is developed. After enhancing the image contrast and extracting the brain with a mathematical morphology algorithm, we proceed to brain segmentation. Our approach is based on building a statistical model from the data themselves for normal brain MRI, including clustering of tissue types. We then detect signal abnormalities (MS lesions) as a rejection class containing the voxels that are not explained by the built model. We validate the method on MR images of multiple sclerosis patients by comparing its results with those of human expert segmentation.
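A minimal sketch of the mixture-model-with-rejection-class idea, using scikit-learn's EM-fitted GaussianMixture on synthetic intensities; the number of components and the rejection threshold are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in brain-only intensities: three normal tissue classes (CSF/GM/WM)
# plus a few bright outliers playing the role of MS lesions.
normal = np.concatenate([rng.normal(m, 8, 5000) for m in (40, 110, 160)])
lesions = rng.normal(230, 5, 50)
intensities = np.concatenate([normal, lesions]).reshape(-1, 1)

# EM fit of a 3-component Gaussian mixture models the normal tissue only.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)

# Voxels poorly explained by the model (low log-likelihood) go to the
# rejection class; the percentile threshold here is an assumption.
log_lik = gmm.score_samples(intensities)
threshold = np.percentile(log_lik, 1.0)
lesion_mask = log_lik < threshold
print(lesion_mask.sum(), "voxels flagged as potential lesions")
```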

Keywords: EM algorithm, Magnetic Resonance Imaging, Mathematical morphology, Markov random model.

1581 Nanoparticles-Protein Hybrid Based Magnetic Liposome

Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek

Abstract:

Liposomes play an important role in medical and pharmaceutical science, e.g. as nanoscale drug carriers. Liposomes are vesicles of varying size, generated in vitro, that consist of a spherical lipid bilayer and an aqueous inner compartment. Magnet-driven liposomes, which contain encapsulated drug components and finely dispersed magnetic particles, are used for the targeted delivery of drugs to organs and tissues. Liposomes are attractive in terms of biocompatibility, biodegradability, and low toxicity, and their biodistribution can be controlled by changing their size, lipid composition, and physical characteristics. Furthermore, liposomes can entrap both hydrophobic and hydrophilic drugs and are able to continuously release the entrapped substrate, making them useful drug carriers. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles; they are applied as contrast agents for magnetic resonance imaging (MRI). The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology. A green-synthesized magnetite nanoparticles-protein hybrid has been produced by treating iron(III)/iron(II) chloride with the leaf extract of Datura inoxia. The phytochemicals present in the leaf extract, which include flavonoids, phenolic compounds, cardiac glycosides, proteins, and sugars, act as reducing as well as stabilizing agents and prevent agglomeration. The magnetite nanoparticles-protein hybrid has been trapped inside the aqueous core of liposomes prepared by the reversed-phase evaporation (REV) method using oleic and linoleic acid, and the vesicles have been shown to be driven under a magnetic field, confirming the formation of magnetic liposomes. Chemical characterization of the stealth magnetic liposome has been performed by breaking the liposome and releasing the magnetic nanoparticles; the presence of iron has been confirmed by colour complex formation with KSCN and by UV-Vis study using a Cary 60 spectrophotometer (Agilent). This magnet-driven liposome based on a nanoparticles-protein hybrid can serve as a smart vesicle for targeted drug delivery.

Keywords: Nanoparticles-Protein Hybrid, Magnetic Liposome.

1580 An Interactive Web-based Simulation Tool for Surgical Thread

Authors: A. Ruimi, S. Goyal, B. M. Nour

Abstract:

Interactive web-based computer simulations are needed by the medical community to replicate the experience of surgical procedures as closely and realistically as possible without the need to practice on corpses, animals, and/or plastic models. In this paper, we review the current state of research on simulations of surgical thread, identify future needs, and present our proposed plans to meet them. Our goal is to create a physics-based simulator that predicts the behavior of surgical thread when subjected to conditions commonly encountered during surgery. To that end, we will (i) develop three-dimensional finite element models based on the Cosserat theory of elasticity, (ii) test the models and gather feedback from the medical community, and (iii) develop a web-based user interface to run our simulator and visualize the results. The impacts of our research are that (i) it will contribute to the development of a new generation of training for medical school students and (ii) the simulator will be useful to expert surgeons in developing new, better, and less risky procedures.

Keywords: Cosserat rod-theory, FEM simulations, Modeling, Surgical thread.

1579 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash.A.M, K.S.Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a low-complexity hardware architecture of the discrete cosine transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation, and vector processing is used for the DCT implementation. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is verified using MATLAB code, and the VLSI design of the architecture is implemented in Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells; simulation is performed using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed power and area analysis was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
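The row-column decomposition of the 2D DCT can be sketched in software as follows (an algorithmic illustration only, not the paper's Verilog architecture):

```python
import numpy as np
from scipy.fft import dct, idct

def dct2_via_1d(block):
    """8x8 2D DCT computed as row-wise 1D DCTs, a transpose, and a second
    1D pass, mirroring the two-1D-blocks-plus-transposition-memory scheme."""
    return dct(dct(block, type=2, norm='ortho', axis=1).T,
               type=2, norm='ortho', axis=1).T

def idct2_via_1d(coeffs):
    """Inverse transform, used to reconstruct the image block."""
    return idct(idct(coeffs, type=2, norm='ortho', axis=1).T,
                type=2, norm='ortho', axis=1).T

block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in image block
coeffs = dct2_via_1d(block)
recon = idct2_via_1d(coeffs)
print(np.allclose(block, recon))                   # True: lossless round trip
```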

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

1578 Improving Quality of Business Networks for Information Systems

Authors: Hazem M. El-Bakry, Ahmed Atwan

Abstract:

Computer networks are an essential part of computer-based information systems, and their performance has a great influence on the whole information system. Measuring the usability criteria and customer satisfaction on a small computer network is very important. In this article, an effective approach for measuring the usability of a business network in an information system is introduced. The usability process for networking provides a flexible and cost-effective way to assess the usability of a network and its products. In addition, the proposed approach can be used to certify network product usability late in the development cycle, to help develop usable interfaces very early in the cycle, and to measure, track, and improve usability. Moreover, a new approach for fast information processing over computer networks is presented: the entire data are collected together in a long vector and then tested as a single input pattern. The proposed fast time delay neural networks (FTDNNs) use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.

Keywords: Usability Criteria, Computer Networks, Fast Information Processing, Cross Correlation, Frequency Domain.

1577 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medicine. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSVs), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, and its best configuration yields an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
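A small sketch of the feature-representation step, concatenating a word2vec-style vector with a Lexical Semantic Vector per token; the vocabularies, dimensions, and padding are hypothetical stand-ins, not the paper's configuration:

```python
import numpy as np

# Hypothetical lookups: a word2vec-style embedding table and a much smaller
# table of Lexical Semantic Vectors (LSV) derived from medical terms.
rng = np.random.default_rng(0)
vocab = ["fever", "cough", "pneumonia", "child"]
word2vec = {w: rng.normal(size=100) for w in vocab}   # stand-in for trained vectors
lsv = {w: rng.normal(size=10) for w in vocab}         # stand-in semantic vectors

def embed_record(tokens, max_len=8):
    """Build the CNN input matrix: each row is [word2vec ; LSV] for one token,
    zero-padded to a fixed length. Dimensions here are illustrative."""
    rows = [np.concatenate([word2vec[t], lsv[t]]) for t in tokens if t in word2vec]
    rows = rows[:max_len] + [np.zeros(110)] * (max_len - len(rows[:max_len]))
    return np.vstack(rows)

x = embed_record(["child", "fever", "cough"])
print(x.shape)   # (8, 110) -> fed to the convolutional layers
```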

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record
