Search results for: CT artifacts
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 84

24 Compressive Properties of a Synthetic Bone Substitute for Vertebral Cancellous Bone

Authors: H. N. Mehmanparast, J. M. Mac-Thiong, Y. Petit

Abstract:

Transpedicular screw fixation in spinal fractures, degenerative changes, or deformities is a well-established procedure. However, high rates of fixation failure due to screw bending, loosening, or pullout are still reported, particularly in the weak bone stock of osteoporotic patients. To overcome this problem, the mechanism of failure has to be fully investigated in vitro. Post-mortem human subjects are not readily accessible, and animal cadavers have limitations due to their different geometry and mechanical properties. Therefore, the development of a synthetic model mimicking the real human vertebra is in high demand. A bone surrogate composed of polyurethane (PU) foam, analogous to the porous structure of cancellous bone, was tested at three different densities in this study. The mechanical properties were investigated under uniaxial compression testing while minimizing end artifacts on the specimens. The results indicated that PU foam with a density of 0.32 g/cm3 has mechanical properties comparable to human cancellous bone in terms of Young's modulus and yield strength. Therefore, the obtained information can be considered a primary step toward developing a realistic cancellous bone model of the human vertebral body. Further evaluations are also recommended for the other density groups.
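
As a rough illustration of how the reported quantities can be derived from uniaxial compression data, the sketch below uses hypothetical force-displacement readings and assumed specimen dimensions to estimate Young's modulus from the linear region of the stress-strain curve and yield strength by the 0.2% offset method; it is not the authors' test protocol.

```python
import numpy as np

# Hypothetical uniaxial compression data for a PU-foam specimen (assumed values).
force_N = np.array([0, 20, 40, 60, 80, 100, 115, 125, 130])            # load readings
displacement_mm = np.array([0, 0.02, 0.04, 0.06, 0.08, 0.10, 0.13, 0.17, 0.22])
area_mm2 = 100.0       # assumed specimen cross-section
length_mm = 20.0       # assumed gauge length

stress = force_N / area_mm2             # MPa (N/mm^2)
strain = displacement_mm / length_mm    # dimensionless

# Young's modulus: slope of a linear fit over the (assumed) elastic region.
elastic = strain <= 0.004
E = np.polyfit(strain[elastic], stress[elastic], 1)[0]

# Yield strength: 0.2%-offset method, first point where the measured stress
# falls below the offset line stress.
offset_stress = E * (strain - 0.002)
yield_idx = np.argmax(stress < offset_stress)
print(f"E = {E:.1f} MPa, yield strength = {stress[yield_idx]:.2f} MPa")
```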

Keywords: Cancellous bone, Pedicle screw, Polyurethane foam, Synthetic bone

23 Proposing Robotics Challenge Centered on Material Transportation in Smart Manufacturing

Authors: Brehme D’napoli Reis de Mesquita, Marcus Vinícius de Souza Almeida, Caio Vinícius Silva do Carmo

Abstract:

Educational robotics has emerged as a pedagogical tool, utilizing technological artifacts to engage students’ curiosity and interest. It fosters active learning of STEM education competencies while also cultivating essential behavioral skills. Robotic competitions provide students with platforms to collaboratively devise diverse solutions to shared problems, fostering experience exchange, collaboration, and personal growth. Current robotic competitions, especially in Brazil, tend to simulate real-world challenges such as natural disasters, but industry-related tasks are notably absent. This article presents an educational robotics initiative centered on material transportation within smart manufacturing using automated guided vehicles. The proposed robotics challenge was executed in a competition held in Açailândia, Maranhão, Brazil, yielding satisfactory results and inspiring teams to develop time-limited solution strategies.

Keywords: Educational robotics, STEM education, robotic competitions, material transportation, smart manufacturing.

22 Ontology-based Domain Modelling for Consistent Content Change Management

Authors: Muhammad Javed, Yalemisew M. Abgaz, Claus Pahl

Abstract:

Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and in a continuous process of change, content change management is important. The management of content in this context requires targeted access and manipulation methods. We present a novel approach to deal with model-driven content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and systems evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies - from application domain ontologies to software ontologies - capture and model the different properties of and perspectives on a software content unit. Interdependencies between the domain ontologies, the artifacts, and the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for their representation.

Keywords: Consistent Content Management, Impact Categorisation, Trace Model, Ontology Evolution

21 Shape Restoration of the Left Ventricle

Authors: May-Ling Tan, Yi Su, Chi-Wan Lim, Liang Zhong, Ru-San Tan

Abstract:

This paper describes an automatic algorithm to restore the shape of three-dimensional (3D) left ventricle (LV) models created from magnetic resonance imaging (MRI) data using a geometry-driven optimization approach. Our basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration. A geometrical measure known as the minimum principal curvature (κ2) is used to assess the smoothness of the LV. This measure is used to construct the objective function of a two-step optimization process. The objective of the optimization is to achieve a smooth epicardial shape by iterative in-plane translation of the MRI slices. Quantitatively, this corresponds to minimizing the summed magnitude of κ2 over the regions where κ2 is negative. A limited-memory quasi-Newton algorithm, L-BFGS-B, is used to solve the optimization problem. We tested our algorithm on a theoretical in vitro LV model and 10 in vivo patient-specific models containing significant motion artifacts. The results show that our method is able to automatically restore the LV models to smoothness without altering the general shape of the model. The magnitudes of the in-plane translations are also consistent with existing registration techniques and experimental findings.
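
A minimal sketch of the optimization step described above, assuming a hypothetical routine kappa2_per_vertex(offsets) that rebuilds the epicardial surface for given in-plane slice translations and returns κ2 at each vertex; the objective penalizes the magnitude of negative κ2, and SciPy's L-BFGS-B solver is used as named in the abstract. The placeholder curvature function is trivial and only illustrates the plumbing.

```python
import numpy as np
from scipy.optimize import minimize

N_SLICES = 12  # assumed number of MRI short-axis slices

def kappa2_per_vertex(offsets):
    """Hypothetical placeholder: rebuild the LV epicardial mesh with each slice
    translated in-plane by (dx_i, dy_i) and return the minimum principal
    curvature kappa2 at every surface vertex."""
    dx, dy = offsets.reshape(N_SLICES, 2).T
    # Stand-in surface: smoother (less negative kappa2) when translations are small.
    return -0.05 - 0.01 * np.repeat(np.abs(dx) + np.abs(dy), 50)

def objective(offsets):
    k2 = kappa2_per_vertex(offsets)
    neg = k2[k2 < 0]
    return np.sum(np.abs(neg))    # summed magnitude of negative kappa2

x0 = np.zeros(2 * N_SLICES)                   # start from no translation
bounds = [(-10.0, 10.0)] * (2 * N_SLICES)     # mm, assumed plausible range
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
print(res.x.reshape(N_SLICES, 2))             # per-slice (dx, dy) translations
```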

Keywords: Magnetic Resonance Imaging, Left Ventricle, Shape Restoration, Principal Curvature, Optimization

20 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, the method then constructs an “Interaction Map,” a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps combined with statistical data from accidents and the HAZOP classifications of events can be converted into a Bayesian network. Bayesian networks allow designers to explore “what if” scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
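
As an illustration of how accident records labeled with HAZOP guide words might feed such a network, the sketch below (with invented event names and counts) estimates a conditional probability table for an interaction-level hazard from simple tallies; the authors' actual network structure and data are not reproduced here.

```python
from collections import Counter
from itertools import product

# Invented accident records: (HAZOP guide word on the interaction, hazard occurred?)
records = [
    ("NO_MESSAGE", True), ("NO_MESSAGE", True), ("NO_MESSAGE", False),
    ("LATE_MESSAGE", True), ("LATE_MESSAGE", False), ("LATE_MESSAGE", False),
    ("OTHER_THAN", False), ("OTHER_THAN", False), ("OTHER_THAN", True),
]

# Tally joint and marginal counts, then form P(hazard | guide word) with
# Laplace smoothing so unseen combinations get a non-zero probability.
joint = Counter(records)
marginal = Counter(gw for gw, _ in records)
guide_words = sorted(marginal)

cpt = {}
for gw, hazard in product(guide_words, (True, False)):
    cpt[(gw, hazard)] = (joint[(gw, hazard)] + 1) / (marginal[gw] + 2)

for gw in guide_words:
    print(f"P(hazard | {gw}) = {cpt[(gw, True)]:.2f}")
```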

Keywords: Multi-agent system, safety analysis, safety model.

19 RUPSec: An Extension on RUP for Developing Secure Systems - Requirements Discipline

Authors: Mohammad Reza Ayatollahzadeh Shirazi, Pooya Jaferian, Golnaz Elahi, Hamid Baghi, Babak Sadeghian

Abstract:

The world is moving rapidly toward the deployment of information and communication systems. Nowadays, fast-growing computing systems are found everywhere, and one of the main challenges for these systems is the increasing number of attacks and security threats against them. Thus, capturing, analyzing, and verifying security requirements becomes a very important activity in the development process of computing systems, especially in developing systems such as banking, military, and e-business systems. For developing any system, a process model that includes a process, methods, and tools is chosen. The Rational Unified Process (RUP) is one of the most popular and complete process models and has been widely used by developers in recent years. This process model should be extended to be used in developing secure software systems. In this paper, the Requirements Discipline of RUP is extended to improve RUP for developing secure software systems. The proposed extensions add and integrate a number of Activities, Roles, and Artifacts into RUP in order to capture, document, and model the threats and security requirements of the system. These extensions introduce a group of clear and stepwise activities to developers. By following these activities, developers ensure that security requirements are captured and modeled. These models are then used in design, implementation, and test activities.

Keywords:

18 Electro-Thermal Imaging of Breast Phantom: An Experimental Study

Authors: H. Feza Carlak, N. G. Gencer

Abstract:

To increase the temperature contrast in thermal images, the characteristics of the electrical conductivity and thermal imaging modalities can be combined. In this experimental study, the objective is to observe whether the temperature contrast created by tumor tissue can be improved purely by current application within medical safety limits. Various thermal breast phantoms are developed to simulate female breast tissue. In vitro experiments are implemented using a thermal infrared camera in a controlled manner. Since the experiments are implemented in vitro, there is no metabolic heat generation or blood perfusion; only the effects and results of the electrical stimulation are investigated. The experimental study is implemented with two-dimensional models. Temperature contrasts due to the tumor tissues are obtained. Cancerous tissue is determined using the difference and ratio of the healthy and tumor images. A single tumor tissue of 1 cm diameter causes a temperature contrast of almost 40 m°C on the thermal breast phantom. Electrode artifacts are reduced by taking the difference and ratio of the background (healthy) and tumor images. The ratio of the healthy and tumor images shows that the temperature contrast is increased by the current application.
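
The difference- and ratio-image step is simple to express; the sketch below, with synthetic arrays standing in for the recorded thermal frames, shows one way an electrode artifact common to both acquisitions is suppressed. The array sizes and temperature values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the recorded thermal frames (degrees Celsius).
background = 30.0 + 0.005 * rng.standard_normal((128, 128))   # healthy phantom
tumor = background.copy()
tumor[60:70, 60:70] += 0.04                                    # ~40 m°C local contrast

# Electrode artifact present identically in both frames.
artifact = np.zeros_like(background)
artifact[10:15, :] = 0.5
healthy_img, tumor_img = background + artifact, tumor + artifact

difference = tumor_img - healthy_img      # additive artifacts cancel
ratio = tumor_img / healthy_img           # multiplicative trends are normalized
print(f"max contrast in difference image: {difference.max() * 1000:.1f} m°C")
```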

Keywords: Medical diagnostic imaging, breast phantom, active thermography, breast cancer detection.

17 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering, and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing, and inversion of electrical resistivity data based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
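
The linearized least-squares update mentioned above can be written compactly. The sketch below, with a toy linear forward model and invented data, shows a damped Gauss-Newton iteration of the form m <- m + (JᵀJ + λI)⁻¹ Jᵀ(d − f(m)); it conveys the general idea rather than any specific inversion code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: apparent resistivities as a linear map of layer resistivities.
J = rng.uniform(0.1, 1.0, size=(8, 3))        # sensitivity (Jacobian), assumed constant
m_true = np.array([100.0, 20.0, 300.0])       # "true" layer resistivities (ohm·m)
d_obs = J @ m_true + rng.normal(0, 0.5, 8)    # noisy observed data

def forward(m):
    return J @ m

m = np.full(3, 50.0)      # starting model
lam = 0.1                 # damping factor
for _ in range(10):
    r = d_obs - forward(m)                                 # data residual
    dm = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
    m = m + dm                                             # damped Gauss-Newton update
print(np.round(m, 1))
```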

Keywords: Resistivity, inversion, optimization.

16 A Watermarking Scheme for MP3 Audio Files

Authors: Dimitrios Koukopoulos, Yiannis Stamatiou

Abstract:

In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented taking special care for the efficient usage of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in a time directly comparable to the real playing time of the original audio file, depending on the MPEG compression, while the end user/audience does not perceive any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, as it places the watermark in the scale factor domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
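
For intuition only, the sketch below embeds a bit pattern into an integer array standing in for MP3 scale factors by nudging values to even/odd parity. This is a generic illustration of scale-factor-domain embedding, not the authors' hard-instance-based construction, and the scale-factor values are invented.

```python
import numpy as np

def embed_bits(scale_factors, bits):
    """Force the parity of each chosen scale factor to encode one watermark bit."""
    sf = scale_factors.copy()
    for i, b in enumerate(bits):
        if sf[i] % 2 != b:      # adjust by one step so parity matches the bit
            sf[i] += 1
    return sf

def extract_bits(scale_factors, n):
    return [int(scale_factors[i] % 2) for i in range(n)]

# Invented scale-factor values (integers, as stored per subband in the bitstream).
scale_factors = np.array([7, 12, 9, 4, 11, 6, 3, 10])
watermark = [1, 0, 1, 1, 0, 1]

marked = embed_bits(scale_factors, watermark)
assert extract_bits(marked, len(watermark)) == watermark
print(marked)
```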

Keywords: Audio watermarking, mpeg audio layer 3, hard instance generation, NP-completeness.

15 An Efficient Watermarking Method for MP3 Audio Files

Authors: Dimitrios Koukopoulos, Yiannis Stamatiou

Abstract:

In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented taking special care for the efficient usage of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in a time directly comparable to the real playing time of the original audio file, depending on the MPEG compression, while the end user/audience does not perceive any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, as it places the watermark in the scale factor domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.

Keywords: Audio watermarking, mpeg audio layer 3, hard instance generation, NP-completeness.

14 A Semantic Registry to Support Brazilian Aeronautical Web Services Operations

Authors: Luís Antonio de Almeida Rodriguez, José Maria Parente de Oliveira, Ednelson Oliveira

Abstract:

In the last two decades, the world’s aviation authorities have made several attempts to build consensus on a globally accepted approach for applying semantics to web service registry descriptions. This problem has left the community with a bloated and disorganized infrastructure for describing aeronautical web services. It is usual for developers to implement ad-hoc connections among consumers and providers and to manually create non-standardized service compositions, which need a particular approach to compose and semantically discover a desired web service. Current practices are not precise and tend to focus on lightweight specifications of some parts of OWL-S, embedding them into syntactic descriptions (SOAP artifacts and the OWL language). It is necessary to be able to manage the use of both technologies. This paper presents an implementation of the OWL-S ontology that describes a Brazilian Aeronautical Web Service Registry, which makes it possible to publish, advertise, perform multi-criteria semantic discovery aligned with the ideas of the System Wide Information Management (SWIM) Program, and invoke web services within the Air Traffic Management context. The proposal’s main finding is a generic approach to describing semantic web services. The paper also presents a set of functional requirements used to guide the ontology development and compared against the results to validate the implementation of the OWL-S ontology.

Keywords: Aeronautical Web Services, OWL-S, Semantic Web Services Discovery, Ontologies.

13 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans such as lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cuts algorithm is defined using a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, unlike the standard method.
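
A minimal sketch of the boundary-weight idea, assuming grayscale image patches and a Gaussian kernel over the patch distance; the exact weighting and patch size used by the authors are not specified here, so the values below are illustrative.

```python
import numpy as np

def patch(img, y, x, r=2):
    """Extract a (2r+1)x(2r+1) patch around (y, x); pixel assumed to be interior."""
    return img[y - r:y + r + 1, x - r:x + r + 1]

def boundary_weight(img, p, q, sigma=10.0, r=2):
    """n-link weight between neighboring pixels p and q from patch dissimilarity:
    w = exp(-||P_p - P_q||^2 / (2 sigma^2)), instead of a single-intensity difference."""
    d2 = np.sum((patch(img, *p, r) - patch(img, *q, r)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
img = rng.normal(100.0, 1.0, (32, 32))
img[:, 16:] += 60.0                       # synthetic high-intensity region
print(boundary_weight(img, (10, 10), (10, 11)))   # homogeneous region -> high weight
print(boundary_weight(img, (10, 15), (10, 16)))   # across the boundary -> near 0
```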

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

12 The Traditional Malay Textile (TMT) Knowledge Model: Transformation towards Automated Mapping

Authors: Syerina Azlin Md Nasir, Nor Laila Md Noor, Suriyati Razali

Abstract:

The growing interest in national heritage preservation has led to intensive efforts in the digital documentation of cultural heritage knowledge. Encapsulated within this effort is a focus on ontology development to help facilitate the organization and retrieval of the knowledge. Ontologies surrounding the cultural heritage domain relate to archive, museum, and library information such as archaeology, artifacts, paintings, etc. The growth in the number and size of ontologies indicates the wide acceptance of semantic enrichment in many emerging applications. Nowadays, many heritage information systems are available for access; among them is the community-based e-museum designed to support digital cultural heritage preservation. This work extends the previous effort of developing the Traditional Malay Textile (TMT) Knowledge Model, which was designed with the intention of auxiliary mapping to CIDOC CRM. Due to its internal constraints, the model needs to be transformed in advance. This paper addresses the issue by reviewing previous harmonization works with CIDOC CRM as exemplars for refining the facets of the model, particularly the TMT-Artifact class. The result is an extensible model that could lead to a common view for automated mapping with CIDOC CRM. Hence, it promotes the integration and exchange of textile information, especially batik-related information, between communities in e-museum applications.

Keywords: automated mapping, cultural heritage, knowledge model, textile practice

11 GC and GCxGC-MS Composition of Volatile Compounds from Carum carvi by Using Techniques Assisted by Microwaves

Authors: F. Benkaci-Ali, R. Mékaoui, G. Scholl, G. Eppe

Abstract:

The new method, accelerated steam distillation assisted by microwave (ASDAM), is a combination of microwave heating and steam distillation performed at atmospheric pressure with a very short extraction time. Isolation and concentration of the volatile compounds are performed in a single stage. ASDAM has been compared with ASDAM combined with cryogrinding of the seeds (ASDAM-CG) and with conventional techniques, microwave-assisted hydrodistillation (HDAM) and hydrodistillation (HD), for the extraction of essential oil from aromatic herbs such as caraway and cumin seeds. The essential oils extracted by ASDAM for 1 min were not quantitatively (yield) or qualitatively (aromatic profile) similar to those obtained by ASDAM-CG (1 min) and HD (3 h). The accelerated microwave extraction with cryogrinding inhibits numerous enzymatic reactions such as the hydrolysis of oils. Microwave radiation constitutes an adequate means for extraction operations from the point of view of yield and high content of major components, and allows a considerable reduction in energy consumption and, especially, in heating time, which is one of the essential parameters in artifact formation. ASDAM and ASDAM-CG are green techniques that yield an essential oil with higher amounts of the more valuable oxygenated compounds, comparable to the biosynthesized compounds, and allow substantial cost savings in terms of time, energy, and plant material.

Keywords: Microwave, steam distillation, caraway, cumin, cryogrinding, GC-MS, GCxGC-MS.

10 A Domain Specific Modeling Language Semantic Model for Artefact Orientation

Authors: Bunakiye R. Japheth, Ogude U. Cyril

Abstract:

Since the process of transforming user requirements into modeling constructs is not very well supported by domain-specific frameworks, it became necessary to integrate domain requirements with specific architectures to achieve an integrated, customizable solution space via artifact orientation. Domain-specific modeling language specifications in model-driven engineering focus on requirements within a particular domain and can be tailored to aid the domain expert in expressing domain concepts effectively. Modeling processes based on domain-specific language formalisms are highly volatile due to dependencies on domain concepts or the process models used. A capable solution is given by artifact orientation, which stresses the results rather than a strict dependence on complicated platforms for model creation and development. Based on this premise, domain-specific methods for producing artifacts, without having to take into account the complexity and variability of platforms for model definitions, can be integrated to support customizable development. In this paper, we discuss methods for the integration capabilities and necessities within a common structure and semantics that contribute a metamodel for artifact orientation, which leads to a reusable software layer with a concrete syntax capable of capturing design intent from the domain expert. The concepts forming the language formalism are established from models drawn from the oil and gas pipeline industry.

Keywords: Control process, metrics of engineering, structured abstraction, semantic model.

9 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

Authors: T. P. Athira, Gibin Chacko George

Abstract:

This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean-square-error (LMMSE) estimation based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily reused to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
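
The LMMSE step has a simple closed form for a scalar sample. The sketch below, with synthetic subband coefficients and an assumed known noise variance, shows the shrinkage x_hat = mu + sigma_x^2 / (sigma_x^2 + sigma_n^2) * (y - mu) that underlies this family of estimators; it is not the paper's directional, fused version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" high-frequency subband coefficients and their noisy observation.
x = rng.laplace(0.0, 2.0, 10_000)           # wavelet coefficients are heavy-tailed
sigma_n = 1.0                               # assumed (known) noise standard deviation
y = x + rng.normal(0.0, sigma_n, x.shape)

# LMMSE estimate using moments estimated from the noisy data itself.
mu = y.mean()
var_x = max(y.var() - sigma_n**2, 1e-12)    # signal variance estimate
x_hat = mu + var_x / (var_x + sigma_n**2) * (y - mu)

print(f"noisy MSE:  {np.mean((y - x) ** 2):.3f}")
print(f"LMMSE MSE:  {np.mean((x_hat - x) ** 2):.3f}")   # lower than the noisy MSE
```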

Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.

8 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG Acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the Preprocessing module, to remove noise and artifacts using the Common Average Reference method; (iii) the Feature Extraction module, using the Wavelet Packet Transform (WPT); and (iv) the Classification module, based on a one-hidden-layer artificial neural network. The present study consists of comparing the recognition accuracy for 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction in dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction appears particularly important for improving the design of a low-cost, simple-to-use BCI trained for several words.
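
A sketch of the feature-extraction idea, assuming PyWavelets for the wavelet packet transform and log-energy per terminal subband as the feature. The wavelet family, decomposition level, epoch length, class offsets, and classifier settings here are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpt_features(signal, wavelet="db4", level=4):
    """Log-energy of each terminal wavelet-packet subband of a single-channel EEG epoch."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

# Synthetic stand-ins for labelled EEG epochs (256 samples each), 5 word classes.
rng = np.random.default_rng(0)
X = np.array([wpt_features(rng.standard_normal(256) + c * 0.5)   # class-dependent offset,
              for c in range(5) for _ in range(20)])             # purely to make toy classes separable
y = np.repeat(np.arange(5), 20)

# One-hidden-layer network, as in the described classification module.
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```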

Keywords: Brain-computer interface, speech recognition, electroencephalography EEG, Wernicke area, artificial neural network.

7 Igbo Art: A Reflection of the Igbo’s Visual Culture

Authors: David Osa-Egonwa

Abstract:

Visual culture is the expression of the norms and social behavior of a society in visual images. A reflection simply shows how you look when you stand before a mirror, clear water, or a stream. The mirror does not alter, improve, or distort your original appearance, nor does it show a caricature of what stands before it; the same is true of the visual images created by a tribe or society. The ‘uli’ is a hand-drawn body design done on Igbo women and speaks of a culture of body adornment, a practice appreciated by that tribe. The use of the pattern of the gliding python, ‘ije eke’ or ‘ijeagwo’, for wall painting speaks of Igbo culture as one that appreciates wall paintings based on these patterns. Modern life came and brought a lot of change to the Igbo-speaking people of Nigeria. Change, cloaked in the garment of Westernization, has influenced the culture of the Igbos. This has resulted in a break in cultural practice that has also affected the art produced by the Igbos. Before the colonial masters arrived and changed the established culture practiced by the Igbos, visual images were created that retained the culture of this people. To bring this point to the limelight, this paper adopts a historical method. A large number of works produced during the pre- and post-colonial eras, ranging from sculptural pieces and paintings to other artifacts, were studied carefully, and it was discovered that the visual images hold the culture, or aspects of the culture, of the Igbos in their renditions and can rightly serve as a mirror of Igbo visual culture.

Keywords: Artistic renditions, historical method, Igbo visual culture, changes.

6 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel

Authors: J. Daba, J. Dubois

Abstract:

In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic-mean-to-geometric-mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional-order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic segmentation approach that allows an accurate quantification of the probability of false detection. Non-visual quantification of misclassification in speckled medical ultrasound images is relatively new and is of interest to clinicians.
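
For illustration, the sketch below computes the arithmetic-to-geometric mean ratio field over a circular neighborhood with SciPy's generic_filter on a synthetic speckled image; the threshold choice and the likelihood model itself are not reproduced, and the percentile cut at the end is an arbitrary stand-in.

```python
import numpy as np
from scipy.ndimage import generic_filter

def am_gm_ratio(values):
    """Arithmetic mean over geometric mean of the (positive) neighborhood values."""
    am = values.mean()
    gm = np.exp(np.log(values + 1e-12).mean())
    return am / gm

# Circular neighborhood of radius 3 pixels.
r = 3
yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
footprint = (yy**2 + xx**2) <= r**2

# Synthetic speckled image: multiplicative exponential speckle on a two-region scene.
rng = np.random.default_rng(0)
scene = np.full((96, 96), 50.0)
scene[:, 48:] = 150.0
speckled = scene * rng.exponential(1.0, scene.shape)

ratio_field = generic_filter(speckled, am_gm_ratio, footprint=footprint, mode="reflect")
edges = ratio_field > np.percentile(ratio_field, 95)   # crude threshold for illustration
print(edges.sum(), "candidate edge pixels")
```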

Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.

5 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel

Authors: F. M. Pisano, M. Ciminello

Abstract:

Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called “barely visible impact damage” (BVID), due to low/medium-energy impacts, which can progressively compromise structural integrity. The occurrence of any local change in material properties that can degrade the structure’s performance is to be monitored using so-called Structural Health Monitoring (SHM) systems, which compare the structure’s states before and after damage occurs. SHM seeks any "anomalous" response in data collected by means of sensor networks and analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be processed for extracting information about the current health condition of the structure under investigation. Visual Analytics can support the processing of the monitored measurements by offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis (PCA) based algorithm.
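
As a sketch of the kind of PCA-based analysis feeding such dashboards, the snippet below (with synthetic strain-sensor readings standing in for optical-fiber data) scores new measurements by their reconstruction error against principal components fitted on a healthy baseline; it illustrates the general technique rather than the authors' algorithm, and the sensor counts and threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_sensors = 20

# Baseline (healthy) strain readings and a new batch with a local anomaly.
baseline = rng.normal(0.0, 1.0, (500, n_sensors))
new_batch = rng.normal(0.0, 1.0, (50, n_sensors))
new_batch[25:, 7] += 8.0                   # simulated damage near sensor 7

pca = PCA(n_components=5).fit(baseline)

def reconstruction_error(X):
    """Squared residual after projecting onto the healthy-state principal subspace."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(baseline), 99)
flags = reconstruction_error(new_batch) > threshold
print(f"{flags.sum()} of {len(flags)} new measurements flagged as anomalous")
```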

Keywords: Interactive dashboards, optical fibers, structural health monitoring, visual analytics.

4 Combination of Different Classifiers for Cardiac Arrhythmia Recognition

Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari

Abstract:

This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS-complex geometrical feature extraction as well as a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each of them is divided into eight polar sectors. Then, the curve length of each excerpted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts, and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network, and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis), and the discriminative power of the classifier in isolating the different beat types of each record was assessed; as a result, an average accuracy of Acc = 98.51% was obtained. Also, the proposed method was applied to six arrhythmia types (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average value of Acc = 95.6% was achieved. To evaluate the performance of the proposed hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
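
A small sketch of the curve-length feature, assuming an already delineated QRS excerpt treated as a 2-D curve (sample index vs. amplitude) centered on the R peak and split into eight polar sectors; the delineation, DWT, and classifier fusion stages are not included, and the synthetic beat and scaling are assumptions.

```python
import numpy as np

def sector_curve_lengths(qrs, n_sectors=8):
    """Split a QRS excerpt into polar sectors around its R peak and return the
    curve length (sum of segment lengths) of the samples falling in each sector."""
    t = np.arange(len(qrs)) - np.argmax(np.abs(qrs))     # sample index relative to R peak
    angles = np.mod(np.arctan2(qrs, t), 2 * np.pi)       # polar angle of each sample
    seg_len = np.hypot(np.diff(t), np.diff(qrs))         # length of each curve segment
    sector = (angles[:-1] / (2 * np.pi / n_sectors)).astype(int)
    return np.array([seg_len[sector == s].sum() for s in range(n_sectors)])

# Synthetic QRS-like excerpt (arbitrary units) standing in for a delineated beat.
time_s = np.linspace(-0.06, 0.06, 60)
qrs = 1.2 * np.exp(-(time_s / 0.01) ** 2) - 0.3 * np.exp(-((time_s - 0.025) / 0.008) ** 2)
print(sector_curve_lengths(qrs * 100))      # eight curve-length features
```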

Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.

3 Analysis of a Faience Enema Found in the Assasif Tomb No. -28- of the Vizier Amenhotep Huy: Contributions to the Study of the Mummification Ritual Practiced in the Theban Necropolis

Authors: Alberto Abello Moreno-Cid

Abstract:

Mummification was the process through which immortality was granted to the deceased, so it was of extreme importance to the Egyptians. The techniques of embalming evolved over the centuries, and specialists created increasingly sophisticated tools. However, due to its eminently religious nature, knowledge about everything related to this practice was jealously preserved, and the testimonies that have survived to our time are scarce. For this reason, embalming instruments found in archaeological excavations are uncommon. The tomb of the Vizier Amenhotep Huy (AT No. -28-), located in the el-Assasif necropolis and excavated since 2009 by the team of the Institute of Ancient Egyptian Studies, has been the scene of several discoveries of this type, which evidence the existence of mummification practices in this place after the New Kingdom. Clysters, or enemas, are the fundamental tools in the second type of mummification described by the historian Herodotus, used to introduce caustic solutions into the body of the deceased. Nevertheless, such objects have only been found in three locations: the tomb of Ankh-Hor in Luxor, where a copper enema that belonged to the prophet of Ammon Uah-ib-Ra came to light; the excavation of the tomb of Menekh-ib-Nekau in Abusir, where another made of copper was also found; and the excavations in the Bucheum, where two more artifacts were discovered, also made of copper but of different shapes and sizes. Both of these were used for the mummification of sacred animals, which is the reason they vary significantly. Therefore, the object found in tomb No. -28- is the first of these peculiar tools known to be made of faience and the oldest known until now, dated to the Third Intermediate Period (circa 1070-650 B.C.). This paper bases its investigation on the study of those parallels, the material, the current archaeological context, and a full analysis and reconstruction of the object in question. The key point is the use of faience in the production of this item: creating a device intended for constant use in this material seems at first illogical compared to the other examples made of copper. Faience around the area of Deir el-Bahari had a strong religious component, associated with solar myths and principles of resurrection, connected to the Osirian character of the mummification procedure. The study makes it possible to refute some premises held as unalterable in Egyptology, verifying the use of this sort of piece, understanding how it was used, and showing that this type of mummification was also applied to the highest social stratum, in which case the tools were conceived with exceptional quality and religious symbolism.

Keywords: Clyster, el-Assasif, embalming, faience enema, mummification, Theban necropolis.

2 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool

Authors: Taru Hakanen, Mervi Murtonen

Abstract:

This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic on which business has primarily been based in the past. The business model canvas is one of the most cited and used tools for defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suited for product-based business. However, the basic differences between goods and services necessitate changes in business model representations when proceeding in servitization. Therefore, new knowledge is needed on how the conception of the business model, and the business model canvas as its representation, should be altered in servitized firms in order to better serve business developers and interfirm co-creation. That is to say, compared to products, services are intangible and are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well the interaction between the actors succeeds. The role of service experience is even stronger in service business compared to product business, as services are co-produced with the customer. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts. This study utilizes qualitative data gathered in workshops with ten companies from various industries. In particular, key differences between goods-dominant logic (GDL) and SDL-based business models are identified when an industrial firm proceeds in servitization. As the result of the study, an updated version of the business model canvas is provided, based on service-dominant logic. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used for the analysis and development of a company’s current service business model or for designing a new business model. It facilitates customer-focused new service design and service development. It aids in the identification of development needs and facilitates the creation of a common view of the business model. Therefore, the service business model canvas can be regarded as a boundary object, which facilitates the creation of a common understanding of the business model between the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development. It also provides research insight into how servitization challenges companies’ business models.

Keywords: Boundary object, business model canvas, managerial tool, service-dominant logic.

1 Migrant Women English Instructors’ Transformative Workplace Learning Experiences in Post-Secondary English Language Programs in Ontario, Canada

Authors: Justine Jun

Abstract:

This study aims to reveal migrant women English instructors' workplace learning experiences in Canadian post-secondary institutions in Ontario. Migrant women English instructors in higher education are an understudied group of teachers. This study employs a qualitative research paradigm. Mezirow’s Transformative Learning Theory is an essential lens for the researcher to explain, analyze, and interpret the research data. It is a collaborative research project. The researcher and participants cooperatively create photographic or other artwork data responding to the research questions. Photovoice and arts-informed data collection are the main methods. Research participants engage in the study as co-researchers and inquire into their own workplace learning experiences, actively utilizing their critical self-reflective and dialogic skills. Co-researchers individually select the forms of artwork they prefer to engage with to represent the transformative workplace learning experiences of Canadian workplace culture that they underwent while working with colleagues and administrators. Once the co-researchers generate their cultural artifacts as research data, they collaboratively interpret their artworks with the researcher and other volunteer co-researchers. Co-researchers jointly investigate the themes emerging from the artworks. They also interpret the meanings of their own and others’ workplace learning experiences embedded in the artworks through interactive one-on-one or group interviews. The following are the research questions that the migrant women English instructor participants examine and answer: (1) What have they learned about their workplace culture, and how do they explain their learning experiences? (2) How transformative have their learning experiences at work been? (3) How have their colleagues and administrators influenced their transformative learning? (4) What kind of support have they received? What supports have been valuable to them, and what changes would they like to see? (5) What have their learning experiences transformed? (6) What has this arts-informed research process transformed? The study findings have implications for the English language instructor support currently practiced in post-secondary English language programs in Ontario, Canada, especially for migrant women English instructors. This research is a doctoral empirical study in progress. It urgently addresses the research problem that few studies have investigated migrant English instructors’ professional learning and support issues in the workplace, specifically those of English instructors working with adult learners in Canada. While appropriate social and professional support for migrant English instructors is required throughout the country, the present workplace realities in Ontario's English language programs need to be heard soon. For that purpose, the conceptualization of this study is crucial: it makes the investigation of these under-represented instructors’ under-researched social phenomena, workplace learning and support, viable and rigorous. This paper demonstrates a robust theorization of English instructors’ workplace experiences using Mezirow’s Transformative Learning Theory in the English language teacher education field.

Keywords: English teacher education, professional learning, transformative learning theory, workplace learning.
