Search results for: contour crafting
183 Aerodynamic Design of Axisymmetric Supersonic Nozzle Used by an Optimization Algorithm
Authors: Mohammad Mojtahedpoor
Abstract:
This paper studies a method for the optimal design of supersonic nozzles that can produce viscous axisymmetric nozzles with the desired outlet flow quality. In this method, the divergent section is optimized first. The initial divergent contour is designed through the method of characteristics, and a suitable boundary layer correction is added to the inviscid contour. A proper grid is then generated, and the flow is simulated numerically with the AUSM+ scheme under the operating boundary conditions. Finally, the solution outputs are examined and optimized. The numerical method has been validated against experimental results. In addition, to evaluate the effectiveness of the present method, the resulting nozzles are compared with those of previous studies. The comparisons show that the nozzles obtained through this method are better in several respects, such as flow uniformity, boundary layer size, and the axial length obtained for the nozzle. The design of the convergent section affects flow uniformity through its axial length and inlet diameter; the results show that increasing the length of the convergent part improves the uniformity of the output flow.
Keywords: nozzle, supersonic, optimization, characteristic method, CFD
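As background for the quasi-one-dimensional sizing that typically precedes a method-of-characteristics contour design, the isentropic area-Mach relation links the local area ratio to the Mach number. The form below is the standard textbook relation, included only as a reference sketch (γ is the ratio of specific heats, A* the throat area); it is not the paper's optimization formulation.

```latex
\frac{A}{A^{*}} = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1+\frac{\gamma-1}{2}M^{2}\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}}
```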
Procedia PDF Downloads 200
182 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images
Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu
Abstract:
The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images with image processing algorithms is an effective way to track these changes; however, producing water surface contours that lie close to the true boundaries is still a challenging task. This paper compares the performance of three level set models for extracting lake contours: the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model. The experiments indicate that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF and yields desirable contour lines when there are “holes” in the water regions, such as islands in a lake. The RSF model is therefore applied to extract lake contours from Landsat satellite images. Four temporal Landsat images, from 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. Firstly, the near infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information updating, and linear stretching is applied in order to distinguish water from other land cover types. For the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as initial values, the lake contours are updated for the current temporal image by means of the RSF model, and the changed and unchanged lakes are detected. The results show that great changes have taken place in two lakes, i.e. Dalong Lake and Panan Lake, and that the RSF model can effectively extract and update lake contours using multi-temporal satellite images.
Keywords: level set model, multi-temporal image, lake contour extraction, contour update
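For readers unfamiliar with region-based level set segmentation, the sketch below is a deliberately simplified, two-phase Chan-Vese-style evolution in plain NumPy, not the RSF functional used in the paper: region means drive the level set toward a water/land partition, and a crude neighbour-averaging step stands in for the curvature (length) term. The toy image and all parameter values are assumptions for illustration.

```python
import numpy as np

def chan_vese_like(img, n_iter=200, dt=0.5, mu=0.2):
    """Simplified two-phase Chan-Vese-style level set evolution (illustrative only)."""
    x, y = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
    phi = np.sin(np.pi / 10 * x) * np.sin(np.pi / 10 * y)   # checkerboard initialization
    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[outside].mean() if outside.any() else 0.0
        # data force: push each pixel toward the region whose mean it is closer to
        force = -(img - c1) ** 2 + (img - c2) ** 2
        phi += dt * force / (np.abs(force).max() + 1e-8)
        # neighbour averaging as a cheap substitute for the length/curvature term
        phi += mu * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                     + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    return phi > 0   # one of the two phases

# toy "NIR band": bright land, dark lake with an island inside it
img = np.full((80, 80), 0.8)
img[20:60, 20:60] = 0.2          # lake
img[35:45, 35:45] = 0.8          # island ("hole" in the water region)
seg = chan_vese_like(img)
water = seg if img[seg].mean() < img[~seg].mean() else ~seg
print(water.sum(), "water pixels")
```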
Procedia PDF Downloads 366
181 Exploring the Relationship between Organisational Identity and Value Systems: Reflecting on the Values-Crafting Process in a Multi-National Organisation within the Entertainment Industry
Authors: Dieter Veldsman, Theo Heyns Veldsman
Abstract:
The knowledge economy demands an organisation that is flexible, adaptable and able to navigate an ever-changing environment. This fast-paced environment has, however, resulted in an organisational landscape that struggles to engage employees, retain top talent and create meaningful work for its members. In the knowledge economy, the concept of organisational identity has become an important consideration as organisations aim to create a compelling and inviting narrative for all stakeholders across the business value chain. Values are often seen as the behavioural framework that informs organisational culture, yet values are frequently perceived to be inauthentic and misaligned with the true character or identity of the organisation and with how it is perceived by different role players. This paper explores the relationship between organisational identity and value systems through a case study of a multi-national organisation in South Africa. The paper evaluates the implementation of a mixed-methods OD approach that gathered collaborative inputs from more than 4500 employees who participated in crafting the newly established values system following a retrenchment process. The paper evaluates the relationship between the newly crafted value system and the identity of the organisation as described by various internal and external stakeholders, in order to explore potential alignment, dissonance and key insights into the relationship between organisational identity and values. The case study is reported from the perspective of an OD consultant who supported the transformation process over a period of 8 months and aims to provide key insights into values and identity alignment within knowledge economy organisations. From a practical perspective, the paper provides insights into how values are created, perceived and lived within organisations and into their impact on employee engagement and culture.
Keywords: culture, organisational development, organisational identity, values
Procedia PDF Downloads 310
180 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning
Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin
Abstract:
This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing
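Hyperdimensional (HD) computing, as referenced above, typically encodes feature vectors into very high-dimensional vectors and compares them with a simple similarity measure. The sketch below shows one generic encoding scheme (random projection to bipolar hypervectors, then cosine similarity between the two facial halves); it illustrates the idea only and is not the authors' quantification system. The dimensionality, feature count, and surrogate features are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (assumed)
N_FEATURES = 64     # e.g. contour/landmark measurements per facial half (assumed)

projection = rng.standard_normal((N_FEATURES, D))   # fixed random projection

def encode(features):
    """Map a real-valued feature vector to a bipolar (+1/-1) hypervector."""
    return np.sign(features @ projection)

def symmetry_score(left_features, right_features):
    """Cosine similarity between the encoded facial halves (1.0 = perfectly symmetric)."""
    a, b = encode(left_features), encode(right_features)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy usage with random surrogate features for a nearly symmetric face
left = rng.standard_normal(N_FEATURES)
right = left + 0.1 * rng.standard_normal(N_FEATURES)
print(symmetry_score(left, right))
```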
Procedia PDF Downloads 25
179 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition
Authors: Latha Subbiah, Dhanalakshmi Samiappan
Abstract:
In this paper, we propose denoising common carotid artery (CCA) B-mode ultrasound images by curvelet decomposition and thresholding, followed by automatic segmentation of the intima-media thickness and the adventitia boundary. The decomposition preserves the local geometry of the image and the direction of its gradients; the components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to suppress the speckle noise inherent in the image. The denoised image is segmented by an active contour without specifying seed points. Combined with level set theory, the active contours provide sub-regions with continuous boundaries, and the deformable contours match the shapes and motion of objects in the images. A curve or surface is evolved under constraints so that it is pulled toward the required features of the image, and region-based and boundary-based information are integrated to obtain the contour. The method accounts for multiplicative speckle noise in both objective and subjective quality measurements and thus leads to better segmentation results. The proposed denoising method gives better performance metrics than other state-of-the-art denoising algorithms.
Keywords: curvelet, decomposition, levelset, ultrasound
Procedia PDF Downloads 340
178 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Coral image database, and the results show that their performance is better than previously reported results.
Keywords: active contour, Bayesian, echocardiographic image, feature vector
Procedia PDF Downloads 445
177 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Coral image database, and the results show that their performance is better than previously reported results.
Keywords: active contour, bayesian, echocardiographic image, feature vector
Procedia PDF Downloads 420
176 Visual, Zoological Metaphors and 'Urtiin Duu' (Long Song) in Alshaa, Inner Mongolia
Authors: Oyuna Weina
Abstract:
This study examines how musicians use visual and zoological metaphors for singing technique and voice quality in a genre of traditional music called urtiin duu (‘long song’) in Alshaa, Inner Mongolia, China. Previous studies have discussed melodic contour in Mongol music, but little work has yet examined the intersection of singing technique with visual and zoological metaphors. The purpose of this study is to address this gap by analysing urtiin duu itself, together with traditional pedagogy and performances, all of which are inspired by, and assessed with reference to, nature and mobile pastoral herding practices. This study investigates the visual and zoological metaphors related to urtiin duu, especially colour, the shape of the circle, and animals in the Mongol community. Urtiin duu singing is associated with certain colours in song texts, in the selection of repertoire and in the status of singers; musicians also use colour to describe timbre. These colours in turn reference the worship of nature, religions, and the daily practices of most Mongols in Alshaa. Moreover, voice quality and singing technique are often related to animals, not only in the song text but also in the approach to breathing and to melodic contour. Additionally, the concept of boronhoi (‘the shape of a circle’) is applied not only to the melodic contour but also to voice quality and singing technique. These three factors illustrate the connections among nature, the spiritual world and the everyday herding life of Mongols, and these connections carry multi-layered meanings. In contemporary Alshaa, urtiin duu singers receive Western musical training in the city and return to their homelands to perform urtiin duu; in doing so, they are also trying to reconnect with history, nature and the spiritual world in order to achieve their ideal sound. Within a multicultural society, singers negotiate amongst themselves, and with ethnic groups, audiences and government officials. The power of the metaphor therefore helps sustain and reconnect regional identity and ethnic identity in Alshaa.
Keywords: Alshaa, urtiin duu, visual, zoological metaphors
Procedia PDF Downloads 363
175 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky
Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio
Abstract:
This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background); monitoring the sky and classifying meteors supports future scientific applications. The image database was collected from different websites. We worked with RGB images with dimensions of 220x220 pixels stored in the bitmap (BMP) format. Window scanning and processing were then carried out for each image: the scan window from which the characteristics were extracted had a size of 20x20 pixels and was moved with a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC worked with two classes, classifying images as 1) containing meteors or 2) not containing meteors. Different tests were carried out by varying the number of training cycles and the number of images used for training and recognition, and the error rate of the neural classifier was calculated. The results show a good RSC classifier response, with 89% correct recognition. The results of these experiments are presented and discussed.
Keywords: contour orientation histogram, meteors, night sky, RSC neural classifier, stars
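A minimal sketch of the window-scanning step described above (20x20 windows, 10-pixel step), assuming a single grayscale channel. It extracts only brightness (mean) and contrast (standard deviation) per window and leaves out the contour orientation histograms and the classifier itself.

```python
import numpy as np

def window_features(gray, win=20, step=10):
    """Brightness and contrast of each 20x20 window scanned with a 10-pixel step."""
    feats = []
    h, w = gray.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = gray[y:y + win, x:x + win].astype(float)
            feats.append((patch.mean(), patch.std()))   # brightness, contrast
    return np.array(feats)

# toy 220x220 "night sky" image
img = np.random.randint(0, 30, (220, 220))
print(window_features(img).shape)   # (441, 2): 21x21 windows, 2 features each
```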
Procedia PDF Downloads 138
174 Captives on the Frontier: An Exploration of National Identity in Argentine Literature and Art
Authors: Carlos Riobo
Abstract:
This paper analyzes literature and art in Argentina from the nineteenth to the twenty-first centuries as these media used the figure of the white female captive to define a developing national identity. This identity excluded the Indians whose lands the whites were taking and who appeared as the aggressors and captors in writing and paintings. The paper identifies the complicit relationship between art and history in crafting national memory. It also identifies a movement toward purity (as defined by separation of entities) and away from mestizaje (racial and cultural mixtures).
Keywords: Argentina, borders, captives, literature, painting
Procedia PDF Downloads 163
173 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium that are visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis by accelerating cell body and contour identification and the production of graphical representations of changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in fluorescence due to neuronal activity. The pipeline consisted of three Python scripts that can each be easily accessed through a Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared the automated pipeline outputs with manually labeled data for neuronal cell locations and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that the automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides graphical representations of neural network metrics that accurately reflect changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to better distinguish between cells and non-cells. Its results were comparable to manually analyzed results, but with result acquisition times reduced to 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection precisely, in order to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performs automated computer vision-based analysis of calcium imaging recordings of neuronal cell bodies in culture. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
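The core of such a pipeline can be sketched in a few lines of OpenCV. The version below is a simplified illustration (Otsu thresholding of a max-projection, external contours, per-contour mean intensity over time) and is not the authors' exact scripts; the toy image stack, the synthetic bright region, and the area threshold of 20 pixels are assumptions for the example.

```python
import cv2
import numpy as np

def mean_fluorescence_traces(frames):
    """frames: (T, H, W) uint8 stack. Returns one mean-fluorescence trace per detected contour."""
    ref = frames.max(axis=0)                          # max projection highlights active cell bodies
    _, binary = cv2.threshold(ref, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    traces = []
    for c in contours:
        if cv2.contourArea(c) < 20:                   # drop tiny non-cell specks
            continue
        mask = np.zeros(ref.shape, np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        traces.append([cv2.mean(f, mask=mask)[0] for f in frames])
    return np.array(traces)

# toy stack: 100 frames of dim noise with one bright "cell body"
stack = (np.random.rand(100, 128, 128) * 30).astype(np.uint8)
stack[:, 40:70, 40:70] = 180
print(mean_fluorescence_traces(stack).shape)          # (n_cells, n_frames)
```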
Procedia PDF Downloads 82
172 Cost and Benefits of Collocation in the Use of Biogas to Reduce Vulnerabilities and Risks
Authors: Janaina Camile Pasqual Lofhagen, David Savarese, Veronika Vazhnik
Abstract:
The urgency of the climate crisis requires both innovation and practicality. The energy transition framework allows industry to deliver resilient cities, enhance adaptability to change, pursue energy objectives such as growth or efficiency, and increase renewable energy. This paper investigates a real-world application perspective on the use of biogas in Brazil and the U.S. It examines interventions that provide a foundation of infrastructure, as well as the tangible benefits for policy-makers crafting law and providing incentives.
Keywords: resilience, vulnerability, risks, biogas, sustainability
Procedia PDF Downloads 105
171 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior
Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang
Abstract:
Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a method for segmenting the liver from a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching denotes a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from crossing the liver’s boundaries in regions where they are weak. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results, and comparison with the original DRLS model shows that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity and specificity.
Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method
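For reference, the Bhattacharyya similarity between two discrete photometric distributions p and q (for example, normalized intensity histograms of the evolving region and the model region) and the associated distance take the standard form below; the exact weighting used inside the authors' energy functional is not reproduced here.

```latex
BC(p, q) = \sum_{i} \sqrt{p_i\, q_i}, \qquad d_B(p, q) = -\ln BC(p, q)
```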
Procedia PDF Downloads 313
170 Research of Strong-Column-Weak-Beam Criteria of Reinforced Concrete Frames Subjected to Biaxial Seismic Excitation
Authors: Chong Zhang, Mu-Xuan Tao
Abstract:
In several earthquakes, numerous reinforced concrete (RC) frames subjected to seismic excitation exhibited a collapse pattern characterized by column hinges, even though they were designed according to the Strong-Column-Weak-Beam (S-C-W-B) criteria. The effect of biaxial seismic excitation on this disparity between design and actual performance is carefully investigated in this article. First, a modified load contour method is proposed to derive a closed-form equation for the biaxial bending moment strength, which is verified by numerical and experimental tests. Afterwards, a group of time history analyses of a simple frame, modeled with fiber beam-column elements and subjected to biaxial seismic excitation, is conducted to verify that the current S-C-W-B criteria are not adequate to prevent the occurrence of column hinges. A biaxial over-strength factor is developed based on the proposed equation, and the column reinforcement is amplified with this factor to prevent the occurrence of column hinges under biaxial excitation, which is shown to be effective by another group of time history analyses.
Keywords: biaxial bending moment capacity, biaxial seismic excitation, fiber beam model, load contour method, strong-column-weak-beam
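The classical load contour interaction equation that such modified methods build on relates the two bending moment demands to the uniaxial capacities; the exponent α (often taken between roughly 1 and 2 depending on axial load level and section properties) is effectively what the paper's modification recalibrates. The form below is the textbook relation, not the paper's closed-form result.

```latex
\left(\frac{M_x}{M_{ux}}\right)^{\alpha} + \left(\frac{M_y}{M_{uy}}\right)^{\alpha} = 1
```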
Procedia PDF Downloads 99
169 Challenges and Opportunities in Computing Logistics Cost in E-Commerce Supply Chain
Authors: Pramod Ghadge, Swadesh Srivastava
Abstract:
The revenue generation of a logistics company depends on how the logistics cost of a shipment is calculated. The logistics cost of a shipment is a function of the distance and speed of the shipment's travel in a particular network, its volumetric size, and its dead weight. Logistics billing is based mainly on the consumption of the scarce resource (the space or weight-carrying capacity of a carrier), and a shipment’s size or dead weight is a function of product and packaging weight, dimensions, and flexibility. Hence, to arrive at a standard methodology for computing an accurate cost to bill the customer, the interplay among these physical attributes, along with their measurement, plays a key role. This becomes even more complex for an e-commerce company, such as Flipkart, which caters to shipments from both warehouses and the marketplace in an unorganized, non-standard market like India. In this paper, we explore methodologies for defining a standard way of billing non-standard shipments across a wide range of sizes, shapes and dead weights. These include the use of historical volumetric and dead weight data to arrive at a factor that can be used to compute the logistics cost of a shipment, and the calculation of the real (contour) volume of a shipment to address irregular shipment shapes that cannot be handled by conventional bounding-box volume measurements. We also discuss key business practices and operational quality considerations needed to bring standardization and drive appropriate ownership in the ecosystem.
Keywords: contour volume, logistics, real volume, volumetric weight
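A common courier convention, assumed here rather than taken from the paper, is to bill on the larger of dead weight and volumetric weight, where volumetric weight divides the bounding-box volume by a carrier-specific divisor; the 5000 cm³/kg divisor below is only an illustrative default.

```python
def chargeable_weight(length_cm, width_cm, height_cm, dead_weight_kg, divisor=5000):
    """Volumetric weight = L*W*H / divisor; the shipment is billed on whichever is
    larger, volumetric weight or dead weight (industry convention, values assumed)."""
    volumetric_kg = (length_cm * width_cm * height_cm) / divisor
    return max(volumetric_kg, dead_weight_kg)

print(chargeable_weight(40, 30, 20, 2.5))   # 4.8 kg volumetric beats 2.5 kg dead weight
```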
Procedia PDF Downloads 269
168 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach
Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid
Abstract:
The segmentation phase is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting regions of interest (ROIs) from CT images is usually a difficult task: their gray levels are similar to those of other organs, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and on mathematical morphology tools used in the image processing field. At first, we remove the surrounding and connected organs and tissues by applying morphological filters; this first step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce its deficiencies by applying spatial filters followed by morphological filters. Thereafter we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we have tested it on several images, and our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contour and the contour traced manually by radiological experts.
Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm
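A compact illustration of marker-controlled watershed segmentation on a single slice, using scikit-image rather than the authors' toolchain. The intensity thresholds used to seed the background and organ markers, and the toy image, are assumptions for the sketch; the morphological pre-filtering described above is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_organ_mask(ct_slice, low, high):
    """Marker-controlled watershed: gradient magnitude as the relief surface,
    intensity thresholds (assumed values) to seed background and organ markers."""
    gradient = sobel(ct_slice)
    markers = np.zeros_like(ct_slice, dtype=int)
    markers[ct_slice < low] = 1            # background seed
    markers[ct_slice > high] = 2           # organ seed
    labels = watershed(gradient, markers)
    return ndi.binary_fill_holes(labels == 2)

# toy slice: a bright disc ("organ") on a darker background
yy, xx = np.mgrid[:128, :128]
toy = 0.2 + 0.6 * ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2) + 0.05 * np.random.rand(128, 128)
mask = watershed_organ_mask(toy, low=0.3, high=0.7)
print(mask.sum(), "organ pixels")
```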
Procedia PDF Downloads 495
167 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is “Alphabet Recognition using pixel probability distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts, and OCR systems are also used in radar systems for reading the license plates of speeding vehicles, among many other uses. The implementation of our project has been done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: this module performs database generation. The database was generated using two methods: (a) run-time generation, in which the database is generated at compilation time using the built-in fonts of the OpenCV library; human intervention is not necessary for generating this database; (b) contour detection, in which a ‘jpeg’ template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to store (119 kB precisely). (2) Preprocessing: the input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing and dilating, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: the extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
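A minimal sketch of the preprocessing and letter-segmentation steps (adaptive thresholding, contour detection, bounding boxes) in Python/OpenCV; the synthetic page, the area filter, and the 20x20 normalisation size are assumptions for illustration, and the neural network classification stage is not shown.

```python
import cv2
import numpy as np

# synthetic "scanned" page with a few letters (stand-in for a real input image)
page = np.full((100, 300), 255, np.uint8)
cv2.putText(page, "A B C", (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 5)

binary = cv2.adaptiveThreshold(page, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

letters = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h < 100:                                          # skip specks / noise
        continue
    glyph = cv2.resize(binary[y:y + h, x:x + w], (20, 20))   # normalised letter image
    letters.append((x, glyph))                               # keep x for reading order
letters.sort(key=lambda item: item[0])
print(len(letters), "letters segmented")                     # expected: 3, one per letter
```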
Procedia PDF Downloads 389
166 Analysis of Threats in Interoperability of Medical Devices
Authors: M. Sandhya, R. M. Madhumitha, Sharmila Sankar
Abstract:
Interoperable medical devices (IMDs) face threats due to the increased attack surface created by interoperability and the corresponding infrastructure. Introducing networking and coordination functionalities fundamentally modifies the security properties of medical systems. Understanding the threats is a vital first step toward crafting security solutions for such systems. The key to this problem is identifying common types of threats and attacks, together with their security and privacy implications, and providing this information as a roadmap. This paper analyses the security issues in the interoperability of devices and presents the main types of threats that have to be considered to build a secure system.
Keywords: interoperability, threats, attacks, medical devices
Procedia PDF Downloads 333
165 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution
Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques
Abstract:
The creation of vector contour slices (ROIs) on the body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. Radiotherapy planning is performed by complex software focused on the analysis of tumor regions, the protection of organs at risk (OARs) and the calculation of radiation doses for anomalies (tumors). This software is supplied by a few manufacturers and runs on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the financial means to acquire such radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated and interoperable software components that are able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique that is able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices and extends to three dimensions with a greedy algorithm, creating an irregular polyhedron with an adapted Canny edge algorithm and without the use of preprocessing filters such as contrast and brightness adjustment. In addition, a comparison of SIPRAD-Body with existing solutions reaches a contour similarity of at least 78%. Four criteria are used for this comparison: contour area, contour length, the difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP); the exams came from patients with different ethnicities, ages, tumor severities and body regions. Even in services that already have workstations, it is possible to run SIPRAD alongside the PCs, because communication between both systems through the DICOM protocol provides interoperability and an increase in workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible both for new radiotherapy planning services and for existing ones.
Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)
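One of the four comparison criteria above, the Jaccard index, can be computed directly on two binary contour masks; the sketch below uses toy rectangular masks purely for illustration.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard similarity between two boolean masks: |A intersect B| / |A union B|."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

auto = np.zeros((100, 100), bool); auto[20:80, 20:80] = True      # automatic contour
manual = np.zeros((100, 100), bool); manual[25:85, 22:82] = True  # expert contour
print(round(jaccard(auto, manual), 2))                            # about 0.8
```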
Procedia PDF Downloads 296
164 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method
Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah
Abstract:
Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. The work deals with color images, which are decomposed into red, green and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel pixels as an image scrambling process. All the channels are then encrypted separately using a key image of the same size as the original, generated using private keys and modulo operations. X-OR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. The contours extracted from the recovered color images can be obtained with an accepted level of distortion using the single step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images, which are then completely reconstructed without any distortion. It is also shown that the analyzed algorithm offers very high security against attacks such as salt-and-pepper noise and JPEG compression, proving that color images can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression and salt and peppers attacks, bitplanes decomposition, Arnold transform, color image, wavelet transform, lossless image encryption
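The Arnold transform referred to above is the classical cat map on a square image; a straightforward (unoptimized) sketch of the scrambling step is shown below. The map is periodic, so applying it repeatedly eventually restores the original channel, which is how descrambling can be performed. Iteration counts and the toy channel are assumptions for the example.

```python
import numpy as np

def arnold_scramble(channel, iterations=1):
    """Arnold cat map on a square image channel: (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
    out = channel
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

red = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)   # toy red channel
scrambled = arnold_scramble(red, iterations=3)
print(scrambled.shape)
```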
Procedia PDF Downloads 518
163 Estimation of Natural Pozzolan Reserves in the Volcanic Province of the Moroccan Middle Atlas Using a Geographic Information System in Order to Valorize Them
Authors: Brahim Balizi, Ayoub Aziz, Abdelilah Bellil, Abdellali El Khadiri, Jamal Mabrouki
Abstract:
Mio-Plio-Quaternary volcanism of the Tabular Middle Atlas, which corresponds to prospective levels of exploitable raw minerals, is a feature of Morocco's Middle Atlas, especially the Azrou-Timahdite region. Given its importance to national policy in terms of human development, supporting both the sociological and the economic component, this area has been the focus of various research and prospecting campaigns aimed at developing these reserves. The outcome of this work is a massive amount of data that needs to be managed appropriately, because it comes from multiple sources and formats, including side points, contour lines, geology, hydrogeology, hydrology, geological and topographical maps, satellite photos, and more. In this regard, putting in place a Geographic Information System (GIS) is essential in order to offer a site plan that makes it possible to view the most recent topography of the exploited area, to compute the volume extracted every day, and to make decisions with the fewest possible constraints so that the reserves can be used for the production of ecological lightweight mortars. Mining of the three sites will follow the contour lines in five descending benches six meters high. Each quarry is anticipated to produce about 90,000 m3/year, which translates to a daily production of about 450 m3 per quarry (200 working days/year). The possible net exploitable volume in place is about 3,540,240 m3 for a single quarry and 10,620,720 m3 for the three exploitable zones.
Keywords: GIS, topography, exploitation, quarrying, lightweight mortar
Procedia PDF Downloads 26
162 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm
Authors: Belgherbi Aicha, Bessaid Abdelhafid
Abstract:
In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological filters to extract the liver and spleen. The second step consists of improving the quality of the image gradient; in this step, we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a semi-automatic method for liver and spleen segmentation based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert, and the experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contour and the contour traced manually by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieved similarly promising results, with a sensitivity of 95% and a specificity of 99%.
Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm
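The sensitivity and specificity figures quoted above can be computed pixel-wise from the automatic and expert masks; a minimal sketch with toy masks (assumed shapes, not the paper's data) is given below.

```python
import numpy as np

def sens_spec(auto_mask, manual_mask):
    """Pixel-wise sensitivity and specificity of an automatic mask vs. an expert mask."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.logical_and(a, m).sum()
    tn = np.logical_and(~a, ~m).sum()
    fn = np.logical_and(~a, m).sum()
    fp = np.logical_and(a, ~m).sum()
    return tp / (tp + fn), tn / (tn + fp)

auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True      # automatic segmentation
manual = np.zeros((10, 10), bool); manual[3:9, 2:8] = True  # expert segmentation
print(sens_spec(auto, manual))
```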
Procedia PDF Downloads 325
161 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers, so using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems, one of which is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding the underlying speech by processing the movement of the lips. A lip reading system is therefore one of several supportive technologies for hearing-impaired or elderly people, and it is an active research area; the need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching system for hearing-impaired persons in Myanmar that shows how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and the two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital input; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class labels in the training and testing sets. Experiments will then be carried out on the recognition accuracy for Myanmar consonants using only the visual information from lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language learning application, and it can also be useful for normal-hearing persons in noisy environments or in conditions where they need to find out what other people said without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
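As an illustration of the 2D-DCT feature extraction step, the sketch below keeps only the low-frequency (top-left) block of the transform of a cropped mouth region as the feature vector. The 8x8 block size and the random stand-in ROI are assumptions, and the ACM localization, LDA reduction and SVM classification stages are not shown.

```python
import numpy as np
from scipy.fft import dctn

def dct_lip_features(lip_roi, k=8):
    """Low-frequency 2D-DCT coefficients (top-left k x k block) as a lip-shape descriptor."""
    coeffs = dctn(lip_roi.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

roi = np.random.rand(64, 64)        # stand-in for a cropped mouth region
print(dct_lip_features(roi).shape)  # (64,) feature vector
```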
Procedia PDF Downloads 286
160 Biophotovoltaics in 3D: Simplifying Concepts
Authors: Mary Booth
Abstract:
Biophotovoltaics is a method of green energy generation derived from exposing plants to light. Its vast potential is hampered by the public’s relative unawareness of its existence. This work aims to formalize the principles of the physical processes of biophotovoltaics in a comprehensible visual software model, thereby supporting human understanding. The method involves first crafting a scale model of a working biophotovoltaic system from household materials, inspired by the work of Paolo Bombelli. The scale model is then programmed into a system-level simulation, in which a 3D animation dissects the system and its general energy generation process. The completed 3D system-level simulation ultimately creates a simplified visual understanding of the complex principles of the biophotovoltaic system.
Keywords: 3D, biophotovoltaics, render
Procedia PDF Downloads 81
159 Design and Development of Bar Graph Data Visualization in 2D and 3D Space Using Front-End Technologies
Authors: Sourabh Yaduvanshi, Varsha Namdeo, Namrata Yaduvanshi
Abstract:
This study delves into the design and development intricacies of crafting detailed 2D bar charts via d3.js, recognizing its limitations in generating 3D visuals within the Document Object Model (DOM). The study combines three.js with d3.js, facilitating a smooth evolution from 2D to immersive 3D representations. This fusion epitomizes the synergy between front-end technologies, expanding horizons in data visualization. Beyond technical expertise, it symbolizes a creative convergence, pushing boundaries in visual representation. The abstract illuminates methodologies, unraveling the intricate integration of this fusion and guiding enthusiasts. It narrates a compelling story of transcending 2D constraints, propelling data visualization into captivating three-dimensional realms, and igniting creativity in front-end visualization endeavors.
Keywords: design, development, front-end technologies, visualization
Procedia PDF Downloads 34
158 Between Kenzo Tange and Fernando Távora: An ‘Affinitarian’ Architectural Regard
Authors: João Cepeda
Abstract:
In crafting their way between theory and practice, authors and artists seem always to be immersed in a never-ending process of relating epochs, objects, and images; endless ‘affinities’ emerge from a somewhat unexplainable (and intimate) magnetic relation. It is through this ‘warburgian’ assessment that two of the most prominent twentieth-century modern architects from Japan and Portugal are put into perspective, focusing on their paths and thinking-practice, and on research in their personal and professional archives. Moreover, this research essays specifically on the possible ‘affinities’ between two of their most renowned architectural projects: Kenzo Tange’s (demolished) Villa Seijo in Tokyo (Japan) and Fernando Távora’s Tennis Pavilion in Matosinhos (Portugal), examined side by side through in-depth fieldwork at the sites, bibliographical and archival research, (unprecedented) material analysis, and a final critical consideration.
Keywords: Tange, Távora, architecture, affinities
Procedia PDF Downloads 66
157 Prosodic Transfer in Foreign Language Learning: A Phonetic Crosscheck of Intonation and F₀ Range between Italian and German Native and Non-Native Speakers
Authors: Violetta Cataldo, Renata Savy, Simona Sbranna
Abstract:
Background: Foreign Language Learning (FLL) is characterised by prosodic transfer phenomena regarding pitch accent placement, intonation patterns, and pitch range excursion from the learners’ mother tongue to their Foreign Language (FL), which suggests that the gradual development of general linguistic competence in the FL does not imply an equally corresponding improvement of prosodic competence. Topic: The present study aims to monitor the development of the prosodic competence of learners of Italian and German throughout the FLL process. The primary object of this study is to investigate the intonational features and the f₀ range excursion of Italian and German from a cross-linguistic perspective; analyses of native speakers’ productions point out the differences between this pair of languages and provide models for the Target Language (TL). A following crosscheck compares the L2 productions in Italian and German by non-native speakers to the Target Language models, in order to verify the occurrence of prosodic interference phenomena, i.e., their type, degree, and modalities. Methodology: The subjects of the research are university students belonging to two groups: Italian native speakers learning German as a FL and German native speakers learning Italian as a FL. Both groups have been divided into three subgroups according to FL proficiency level (beginner, intermediate, advanced). The dataset consists of wh-questions placed in situational contexts and uttered in both the speakers’ L1 and FL. Using a phonetic approach, the analyses considered three domains of the intonational contours (Initial Profile, Nuclear Accent, and Terminal Contour) and two dimensions of the f₀ range parameter (span and level), which provide a basis for comparison between L1 and L2 productions. Findings: The results highlight a strong presence of prosodic transfer phenomena affecting the L2 productions of the majority of both Italian and German learners, irrespective of their FL proficiency level; the transfer concerns all three domains of the contour taken into account, although with different modalities and characteristics. L2 productions of German learners currently show a pitch span compression in the domain of the Terminal Contour, relative to their L1 and towards the TL; furthermore, German learners tend to use lower pitch range values, deviating from their L1, as their proficiency in Italian improves. Results regarding pitch range span and level in the L2 productions of Italian learners are still in progress; at present, they show a similar tendency to expand the pitch span and to raise the pitch level, which also reveals a deviation from the L1, possibly in the direction of the German TL. Conclusion: Intonational features seem to be 'resistant' parameters to which learners appear not to be particularly sensitive. By contrast, learners show a certain sensitivity to FL pitch range dimensions. Making clear which parameters are the most resistant and which are the most sensitive when learning FL prosody could lay the groundwork for the development of prosodic training through which learners could finally acquire a clear and natural pronunciation and intonation.
Keywords: foreign language learning, German, Italian, L2 prosody, pitch range, transfer
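Pitch span comparisons of the kind described above are usually made on a logarithmic (semitone) scale; a minimal sketch of the conversion is shown below, with Hz values invented purely for illustration.

```python
import math

def f0_span_semitones(f0_min_hz, f0_max_hz):
    """Pitch span expressed in semitones: 12 * log2(f0_max / f0_min)."""
    return 12 * math.log2(f0_max_hz / f0_min_hz)

print(round(f0_span_semitones(180, 320), 1))   # span of an illustrative wh-question contour
```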
Procedia PDF Downloads 286
156 Mechanisms and Process of an Effective Public Policy Formulation in Islamic Economic System
Authors: Md Abu Saieed
Abstract:
Crafting and implementing public policy is one of the indispensable tasks of any form of state and government. However, policy objectives, methods of formulation and tools of implementation may differ based on ideological nature, historical legacy, the structure and capacity of administration and management, and other push and pull factors. Public policy in an Islamic economic system needs to be based on the key guidelines of the divine scriptures, along with the other sources of sharia’h. As a representative of Allah (SWT), the governor and the other apparatus of the state formulate and implement public policies that enable the establishment of a true welfare state based on justice, equity and equality. The whole life of Prophet Muhammad (pbuh), and his policy in running the affairs of the state in Madina, provides practical guidance for policy actors and professionals in the Islamic system of economics. Moreover, policy makers need to be meticulous in formulating Islamic public policy that also meets the needs and demands of the contemporary world.
Keywords: formulation, Islam, public policy, policy factors, Sharia’h
Procedia PDF Downloads 351
155 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to their wide availability, mammograms can be used to measure breast density and to predict cancer development: women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Studies have therefore sought to quantify mammographic breast density accurately. In clinical routine, radiologists evaluate images through BIRADS (Breast Imaging Reporting and Data System) assessment; however, this method suffers from inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologists' workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density; a pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images; it uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique is applied to remove non-biological information from the image. The Hough transform is then applied to find the boundary of the pectoral muscle, followed by an active contour method whose seed is placed on the boundary found by the Hough transform. An experienced radiologist also performed the pectoral muscle segmentation manually. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, supporting the agreement between the automatic and manual methods. Thus, the method proved to be accurate and robust, segmenting rapidly and free of intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for subsequent quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle
Procedia PDF Downloads 350
154 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories
Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan
Abstract:
In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, this approach is prone to missing important predictive features, as recent work in deep learning has shown. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we first represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use “fading.” We find that this approach is superior to a traditional feed-forward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Finally, we find that a combined FFN+CNN is the best-performing network, with an error rate of 39%.
Keywords: basketball, computer vision, image processing, convolutional neural network
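A minimal sketch of the “fading” trajectory-to-image encoding described above: each group of agents is rasterised into its own channel, and older positions are dimmed geometrically so a CNN can read the direction of motion from a single multichannel frame. The court dimensions, image size, decay factor, and toy trajectories are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def trajectories_to_image(trajs, court=(94, 50), size=(94, 50), fade=0.9):
    """Rasterise (x, y) trajectories into one channel per agent group, with fading."""
    channels = np.zeros((len(trajs), *size), dtype=np.float32)
    for ch, traj in enumerate(trajs):                        # e.g. offense, defense, ball
        for t, (x, y) in enumerate(traj):
            px = int(x / court[0] * (size[0] - 1))
            py = int(y / court[1] * (size[1] - 1))
            channels[ch, px, py] = fade ** (len(traj) - 1 - t)   # newest position = brightest
    return channels

offense = [(10 + t, 25.0) for t in range(20)]                # toy trajectories
defense = [(20 + t, 30.0) for t in range(20)]
ball = [(15 + t, 25.0) for t in range(20)]
image = trajectories_to_image([offense, defense, ball])
print(image.shape)                                           # (3, 94, 50)
```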
Procedia PDF Downloads 153