Search results for: Discrete Fourier Transform (DFT)
81 A Business Model Design Process for Social Enterprises: The Critical Role of the Environment
Authors: Hadia Abdel Aziz, Raghda El Ebrashi
Abstract:
Business models are shaped by their design space, or the environment in which they are designed to be implemented. The rapidly changing economic, technological, political, regulatory and market environment severely affects business logic. This is particularly true for social enterprises, whose core mission is to transform their environments and whose whole business logic therefore revolves around the interchange between the enterprise and the environment. The context in which a social business operates imposes different business design constraints while at the same time opening up new design opportunities. It is also affected to a great extent by the impact that successful enterprises generate: a continuous loop of interaction that needs to be managed through a dynamic capability in order to generate a lasting, powerful impact. This conceptual research synthesizes and analyzes the literature on social enterprise, social enterprise business models, business model innovation, business model design, and the open system view theory to propose a new business model design process for social enterprises that takes into account the critical role of environmental factors. This process would help the social enterprise develop a dynamic capability that ensures the alignment of its business model with its environmental context, thus maximizing its probability of success.
Keywords: Social enterprise, business model, business model design.
80 A Mapping Approach of Code Generation for Arinc653-Based Avionics Software
Authors: Lu Zou, Dianfu MA, Ying Wang, Xianqi Zhao
Abstract:
Avionics software architecture has transitioned from a federated avionics architecture to Integrated Modular Avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods to transform abstract avionics application logic into an executable model have been proposed, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform or of the dynamic ordering of inner-task synchronous interactions. In this paper, we propose an AADL-based model-driven design methodology that automatically generates a Cµ executable model on the ARINC 653 platform from the ARINC 653 architecture, defined as AADL653, in order to facilitate the development of avionics software built on an ARINC 653 OS. The paper presents the mapping rules between AADL653 elements and Cµ language elements, defines the code generation rules, and designs an automatic Cµ code generator. A case study illustrates the approach, followed by related work and future research directions.
Keywords: IMA, ARINC653, AADL653, code generation.
79 Pension Plan Member’s Investment Strategies with Transaction Cost and Couple Risky Assets Modelled by the O-U Process
Authors: Udeme O. Ini, Edikan E. Akpanibah
Abstract:
This paper studies the optimal investment strategies for a plan member (PM) in a defined contribution (DC) pension scheme with transaction costs, taxes on invested funds, and a couple of risky assets (stocks) under the Ornstein-Uhlenbeck (O-U) process. The PM’s portfolio is assumed to consist of a risk-free asset and two risky assets, where the two risky assets are driven by the O-U process. The Legendre transformation and dual theory are used to transform the resulting optimal control problem, a nonlinear partial differential equation (PDE), into a linear PDE, which is then solved for explicit solutions of the optimal investment strategies of a PM exhibiting constant absolute risk aversion (CARA) using a change of variable technique. Furthermore, theoretical analysis is used to study the influence of some sensitive parameters on the optimal investment strategies, with the observation that the optimal investment strategies for the two risky assets increase with the dividend and decrease with increases in the tax on invested funds, the risk-aversion coefficient, the initial fund size, and the transaction cost.
Keywords: Ornstein-Uhlenbeck process, portfolio management, Legendre transforms, CARA utility.
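For readers less familiar with the model class, the following is a minimal sketch of the Ornstein-Uhlenbeck dynamics typically assumed for such risky assets; the notation (θ for mean-reversion speed, μ for the long-run level, σ for volatility, W_t for Brownian motion) is standard illustrative notation and not taken from the paper.

```latex
% Illustrative Ornstein-Uhlenbeck dynamics for a risky asset price S_t
% (standard textbook form; the paper's exact specification may differ):
\[
  \mathrm{d}S_t = \theta\,(\mu - S_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t ,
\]
% where $\theta$ is the mean-reversion speed, $\mu$ the long-run mean,
% $\sigma$ the volatility and $W_t$ a standard Brownian motion.
```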
78 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall
Authors: Sanjib Kr Pal, S. Bhattacharyya
Abstract:
Mixed convection of Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A coordinate transformation method is used to transform the computational domain into an orthogonal coordinate system. The governing equations in the computational domain are solved through a pressure-correction based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the wavy thick bottom wall, and wave number (ω) at a fixed Reynolds number. The results show that the heat transfer rate increases remarkably when nanoparticles are added. The heat transfer rate depends on the wavy wall amplitude and wave number and decreases with increasing Richardson number for a fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.
Keywords: Entropy generation, mixed convection, conjugate heat transfer, numerical, nanofluid, wall waviness.
77 Automated Optic Disc Detection in Retinal Images of Patients with Diabetic Retinopathy and Risk of Macular Edema
Authors: Arturo Aquino, Manuel Emilio Gegundez, Diego Marin
Abstract:
In this paper, a new automated methodology to detect the optic disc (OD) in retinal images from patients at risk of being affected by Diabetic Retinopathy (DR) and Macular Edema (ME) is presented. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using image contrast analysis and structure filtering techniques and, on the other hand, a boundary segmentation methodology estimates a circular approximation of the OD boundary by applying mathematical morphology, edge detection techniques and the Circular Hough Transform. The methodologies were tested on a set of 1200 images composed of 229 retinographies from patients affected by DR with risk of ME, 431 with DR and no risk of ME, and 540 images of healthy retinas. The location methodology obtained a 98.83% success rate, whereas the OD boundary segmentation methodology obtained a good circular OD boundary approximation in 94.58% of cases. The average computational time measured over the total set was 1.67 seconds for OD location and 5.78 seconds for OD boundary segmentation.
Keywords: Diabetic retinopathy, macular edema, optic disc, automated detection, automated segmentation.
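As an illustration of the boundary-segmentation stage (a sketch only, not the authors' implementation), the following Python snippet approximates a circular OD boundary with mathematical morphology and the Circular Hough Transform in OpenCV; all parameter values are placeholders.

```python
import cv2
import numpy as np

def approximate_od_boundary(gray_retina):
    """gray_retina: 8-bit grayscale fundus image.
    Sketch: morphological closing + Circular Hough Transform (OpenCV).
    cv2.HoughCircles applies its own Canny edge detector internally;
    all parameter values are illustrative placeholders, not the paper's."""
    # Morphological closing to suppress vessels before circle detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    smoothed = cv2.morphologyEx(gray_retina, cv2.MORPH_CLOSE, kernel)
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=200, param1=120, param2=30,
                               minRadius=30, maxRadius=120)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return (int(x), int(y)), int(r)  # circular approximation of the OD boundary
```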
76 Robust Digital Cinema Watermarking
Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi
Abstract:
With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues. We present a novel watermarking method for video image data based on hardware and digital wavelet transform techniques and name it “traceable watermarking”, because the watermarked data are constructed before the transmission process and traced after they have been received by an authorized user. In our method, we embed the watermark into the lowest part of each image frame of the decoded video using a hardware LSI. Digital cinema is an important application for traceable watermarking, since a digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding and all intermediate processes performed in digital cinema systems. The watermark is embedded into randomly selected movie frames using hash functions, and the embedded watermark information can be extracted from the decoded video data without access to the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems outperforms conventional watermarking techniques in terms of robustness, image quality, speed, simplicity and structural robustness.
Keywords: Decoder, digital content, JPEG2000 frame, System-on-Chip, traceable watermark, hash function, CRC-32.
75 Analysis of Vocal Fold Vibrations from High-Speed Digital Images Based On Dynamic Time Warping
Authors: A. I. A. Rahman, Sh-Hussain Salleh, K. Ahmad, K. Anuar
Abstract:
Analysis of vocal fold vibration is essential for understanding the mechanism of voice production and for improving clinical assessment of voice disorders. This paper presents a Dynamic Time Warping (DTW) based approach to analyze and objectively classify vocal fold vibration patterns. The proposed technique was designed and implemented on a Glottal Area Waveform (GAW) extracted from high-speed laryngeal images by delineating the glottal edges for each image frame. Feature extraction from the GAW was performed using Linear Predictive Coding (LPC). Several types of voice reference templates from simulations of clear, breathy, fry, pressed and hyperfunctional voice productions were used. The patterns of the reference templates were first verified using the analytical signal generated through Hilbert transformation of the GAW. Samples from normal speakers’ voice recordings were then used to evaluate and test the effectiveness of this approach. The classification of the voice patterns using the technique of LPC and DTW gave an accuracy of 81%.
Keywords: Dynamic Time Warping, Glottal Area Waveform, Linear Predictive Coding, High-Speed Laryngeal Images, Hilbert Transform.
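To make the DTW matching step concrete, here is a minimal, self-contained Python sketch of dynamic time warping between an LPC feature sequence and reference templates; it is illustrative only and not the authors' implementation.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) dynamic time warping between two feature sequences.
    seq_a, seq_b: arrays of shape (n, d) and (m, d), e.g. LPC coefficient
    vectors extracted frame-by-frame from a glottal area waveform."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(sample, templates):
    """Assign the label of the nearest reference template under DTW.
    templates: dict mapping a label (e.g. 'breathy') to a feature sequence."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```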
74 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis
Authors: Gaoyong Luo
Abstract:
The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction and the low level of distortion introduced. However, by removing noise, high-frequency components belonging to edges are also removed, which blurs the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of being computationally efficient and minimizing boundary effects. Adaptive thresholding by local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to the boundaries of objects. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise and can be adapted to different classes of images and types of noise. The proposed algorithm provides a potential solution, with parallel computation, for real-time or embedded system applications.
Keywords: Edge strength, fast lifting wavelet, image denoising, local variance.
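A minimal Python sketch of the general idea, block-wise wavelet decomposition with variance-adaptive soft thresholding, is shown below; it relies on PyWavelets and a simple BayesShrink-like threshold rule as assumptions and is not the paper's lifting-based, edge-aware algorithm.

```python
import numpy as np
import pywt

def denoise_block(block, wavelet="db4", level=2):
    """Variance-adaptive soft thresholding of one 32x32 image block.
    The threshold rule below is a simple illustrative choice, not the
    edge-aware rule developed in the paper."""
    coeffs = pywt.wavedec2(block, wavelet, level=level)
    # Noise estimate from the finest diagonal subband (robust median estimator)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        thresholded = []
        for sub in detail:
            local_var = max(sub.var() - sigma**2, 1e-12)
            t = sigma**2 / np.sqrt(local_var)  # smaller threshold where signal variance is high
            thresholded.append(pywt.threshold(sub, t, mode="soft"))
        out.append(tuple(thresholded))
    return pywt.waverec2(out, wavelet)

def denoise_image(img, block=32):
    """Apply block-wise denoising over the whole image."""
    result = img.astype(float).copy()
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            result[r:r+block, c:c+block] = denoise_block(result[r:r+block, c:c+block])
    return result
```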
73 Effect of Modification and Expansion on Emergence of Cooperation in Demographic Multi-Level Donor-Recipient Game
Authors: Tsuneyuki Namekata, Yoko Namekata
Abstract:
It is known that the mean investment evolves from a very low initial value to some high level in the Continuous Prisoner's Dilemma. We examine how the cooperation level evolves from a low initial level to a high level in our Demographic Multi-level Donor-Recipient situation. In the Multi-level Donor-Recipient game, one player is randomly selected as a Donor and the other as a Recipient. The Donor has multiple cooperative moves and one defective move. A cooperative move means the Donor pays some cost for the Recipient to receive some benefit; the more cooperative the move the Donor takes, the higher the cost the Donor pays and the higher the benefit the Recipient receives. The defective move has no effect on either player. Two consecutive Multi-level Donor-Recipient games, one as a Donor and the other as a Recipient, can be viewed as a discrete version of the Continuous Prisoner's Dilemma. In the Demographic Multi-level Donor-Recipient game, players are initially distributed spatially. In each period, players play multiple Multi-level Donor-Recipient games against other players. A player leaves offspring if possible and dies when his accumulated payoff becomes negative or his lifespan ends. Cooperative moves are necessary for the survival of the whole population. Initially, only a low level of cooperative move, besides the defective move, is available in players' strategies. A player may modify and expand his strategy based on his recent experiences or practices. We distinguish several types of players with respect to modification and expansion. We show, by agent-based simulation, that introducing only the modification increases the emergence rate of cooperation, that introducing both the modification and the expansion further increases it, and that a high level of cooperation does emerge in our Demographic Multi-level Donor-Recipient Game.
Keywords: Agent-based simulation, donor-recipient game, emergence of cooperation, spatial structure, TFT, TF2T.
72 Arabic Literature as a Tool for Educational Transformation in Nigeria
Authors: Abdulfatah A Raji
Abstract:
This paper starts with definitions of literature, Arabic literature and transformation, and goes on to highlight the components of educational transformation. The general history of Arabic literature is discussed, with a focus on the transformations it underwent from the pre-Islamic period through the Quranic era and Abbasid literature to the renaissance period, in which the modernization of Arabic literature started in Egypt. The paper also traces the spread of Arabic literature in Nigeria from the pre-colonial era under the Kanuri rulers to the Jihad of Usman Dan Fodio, the development of literature that led to the Teachers’ Colleges and Bayero University in Northern Nigeria, and the establishment of primary and post-primary schools by Muslim organizations in many cities and towns of the western part of Nigeria. Literary criticism is also discussed in relation to Arabic literature. Poetry by eminent poets is cited to show its importance for educational transformation in Nigeria, and the lessons drawn from the cited Arabic poetry are highlighted, including motivation to behave well and to tolerate others, a better spirit of interaction, and love and co-existence among people of different sexes and religions. All of these can help in developing a better educational transformation in Nigeria, which can in turn inform how research is conducted for national development. The paper recommends making Arabic literature compulsory at all levels of the nation’s educational system, as well as publishing Arabic books and journals, to encourage peace in this era of conflicts and to further transform Nigeria’s educational system for the better.
Keywords: Arabic, literature, peace, development, Nigeria.
71 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman
Abstract:
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, which results in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. In order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is first applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands according to their contribution scores, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, the approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
Keywords: Band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation.
70 Introductory Design Optimisation of a Machine Tool using a Virtual Machine Concept
Authors: Johan Wall, Johan Fredin, Anders Jönsson, Göran Broman
Abstract:
Designing modern machine tools is a complex task. A simulation tool to aid the design work, a virtual machine, was therefore developed in earlier work. The virtual machine considers the interaction between the mechanics of the machine (including structural flexibility) and the control system. This paper exemplifies the usefulness of the virtual machine as a tool for product development. An optimisation study is conducted, aiming at improving the existing design of a machine tool with regard to weight and manufacturing accuracy at maintained manufacturing speed. The problem can be categorised as constrained multidisciplinary multiobjective multivariable optimisation. Parameters of the control system and geometric quantities of the machine are used as design variables, which results in a mix of continuous and discrete variables; an optimisation approach using a genetic algorithm is therefore deployed. The accuracy objective is evaluated according to international standards. The complete system model shows non-deterministic behaviour, and a strategy to handle this based on statistical analysis is suggested. The weight of the main moving parts is reduced by more than 30 per cent and the manufacturing accuracy is improved by more than 60 per cent compared to the original design, with no reduction in manufacturing speed. It is also shown that interaction effects exist between the mechanics and the control, i.e. this improvement would most likely not have been possible with a conventional sequential design approach within the same time, cost and general resource frame. This indicates the potential of the virtual machine concept for contributing to improved efficiency of both complex products and the development process for such products. Companies incorporating such advanced simulation tools in their product development could thus improve their own competitiveness as well as contribute to improved resource efficiency of society at large.
Keywords: Machine tools, mechatronics, non-deterministic, optimisation, product development, virtual machine.
69 Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients
Authors: Dhurgham Al-Karawi, Nisreen Polus, Shakir Al-Zaidi, Sabah Jassim
Abstract:
The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic that has gripped the global community. It has already been used to monitor the condition of COVID-19 patients with respiratory problems. There has been an increase in the use of chest imaging for medical triage of patients showing moderate to severe clinical COVID-19 features, due to the fast spread of the pandemic to all continents and communities. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using Chest X-Ray (CXR) images in nearly real time, to distinguish COVID-19 infection with a significantly high level of accuracy. The testing performance has covered a combination of different datasets of CXR images of positive COVID-19 patients, patients with viral and bacterial infections, and people with clear chests. The proposed AI scheme successfully distinguishes CXR scans of COVID-19 infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment of new suspected cases, especially in resource-constrained environments.
Keywords: COVID-19, chest x-ray scan, artificial intelligence, texture analysis, local binary pattern transform, Gabor filter.
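The keywords point to local binary pattern texture features feeding a classifier; a minimal illustrative Python pipeline along those lines is sketched below using scikit-image and scikit-learn. The feature settings and the SVM choice are assumptions, not the paper's exact scheme.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lbp_histogram(gray_cxr, radius=2, n_points=16):
    """Texture descriptor for one chest X-ray: a normalized histogram of
    uniform local binary patterns (illustrative settings)."""
    lbp = local_binary_pattern(gray_cxr, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2))
    return hist / hist.sum()

def train_cxr_classifier(images, labels):
    """images: list of 2-D grayscale arrays; labels: e.g. 'covid', 'pneumonia', 'normal'."""
    features = np.array([lbp_histogram(img) for img in images])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(features, labels)
    return model
```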
68 Principle Components Updates via Matrix Perturbations
Authors: Aiman Elragig, Hanan Dreiwi, Dung Ly, Idriss Elmabrook
Abstract:
This paper highlights a new approach to online principle component analysis (OPCA). Given a data matrix X ∈ R^{m×n}, we characterise the online updates of its covariance as a matrix perturbation problem. Up to the principle components, it turns out that online updates of the batch PCA can be captured by a symmetric matrix perturbation of the batch covariance matrix. We show that as n → n0 >> 1, the batch covariance and its update become almost similar. Finally, we utilize our new setup of online updates to find a bound on the angle distance between the principle components of X and those of its update.
Keywords: Online data updates, covariance matrix, online principle component analysis (OPCA), matrix perturbation.
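As a concrete illustration of treating the covariance update as a symmetric perturbation of the batch covariance, here is a small NumPy sketch; the incremental algebra and the subspace-angle computation are standard linear algebra, not formulas quoted from the paper.

```python
import numpy as np

def batch_covariance(X):
    """X: m x n data matrix with n observations in columns."""
    return np.cov(X, rowvar=True, bias=True)

def perturbed_covariance(X, X_new):
    """Express the covariance of the augmented data [X, X_new] as the batch
    covariance of X plus a symmetric perturbation E (illustrative algebra)."""
    C_old = batch_covariance(X)
    C_all = batch_covariance(np.hstack([X, X_new]))
    E = C_all - C_old  # symmetric perturbation capturing the online update
    return C_old, E

def principal_angle_shift(C_old, E, k=1):
    """Largest principal angle (radians) between the leading k-dimensional
    eigenspaces of C_old and C_old + E."""
    _, V_old = np.linalg.eigh(C_old)
    _, V_new = np.linalg.eigh(C_old + E)
    U, W = V_old[:, -k:], V_new[:, -k:]
    s = np.linalg.svd(U.T @ W, compute_uv=False)  # singular values = cosines
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))
```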
67 Leveraging Hyperledger Iroha for the Issuance and Verification of Higher-Education Certificates
Authors: Vasiliki Vlachou, Christos Kontzinos, Ourania Markaki, Panagiotis Kokkinakos, Vagelis Karakolis, John Psarras
Abstract:
Higher Education is resisting the pull of technology, especially as concerns the issuance and verification of degrees and certificates. It is widely known that education certificates are largely produced in paper form, making them vulnerable to damage, while holders of such certificates are dependent on the universities and other issuing organisations. QualiChain is an EU Horizon 2020 (H2020) research project aiming to transform and revolutionise the domain of public education and its ties with the job market by leveraging blockchain, analytics and decision support to develop a platform for the verification and sharing of education certificates. Blockchain plays an integral part in the QualiChain solution by providing a trustworthy environment to store, share and manage such accreditations. In the context of this paper, three prominent blockchain platforms (Ethereum, Hyperledger Fabric, Hyperledger Iroha) were considered as a means of experimentation for creating a system with the basic functionalities needed for trustworthy degree verification. The methodology and system developed and presented in this paper used Hyperledger Iroha and proved that this specific platform can be used to easily develop decentralized applications. Future papers will experiment further with other blockchain platforms and assess which has the best potential.
Keywords: Blockchain, degree verification, higher education certificates, Hyperledger Iroha.
66 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.
65 Transformability in Post-Earthquake Houses in Iran: with Special Focus on Lar City
Authors: M. Parva, K. Dola, F. Pour Rahimian
Abstract:
Earthquake is considered one of the most catastrophic disasters in Iran, in terms of both short-term and long-term hazards. Due to the particular financial and time constraints in Iran, quickly constructed post-earthquake houses (PEHs) do not fulfill the minimum requirements to be considered comfortable dwellings. Consequently, people often transform PEHs after they start to reside in them. However, a lack of understanding of the process, motivations, and results of housing transformation leads to the construction of houses unsuitable for future transformation, resulting in eventually demolished or abandoned PEHs. This study investigated housing transformations in the natural setting of post-earthquake Lar. This paper reports the results of a survey comparing housing transformation under normal conditions with post-earthquake housing transformation, in order to reveal the factors that affect post-earthquake housing transformation in Iran. The findings propose the use of a combination of ‘Temporary’ and ‘Permanent’ housing reconstruction models in Iran to provide victims with basic but permanent post-disaster dwellings. It is also suggested that needs for future transformation should be predicted and addressed during the early stages of design and development. This study contributes to both research and practice regarding post-earthquake housing reconstruction in Iran by proposing new design approaches and guidelines.
Keywords: Housing transformation, Iran, Lar, post-earthquake housing.
64 Development of a Neural Network based Algorithm for Multi-Scale Roughness Parameters and Soil Moisture Retrieval
Authors: L. Bennaceur Farah, I. R. Farah, R. Bennaceur, Z. Belhadj, M. R. Boussema
Abstract:
The overall objective of this paper is to retrieve soil surface parameters, namely roughness and the soil moisture related to the dielectric constant, by inverting the radar signal backscattered from natural soil surfaces. Because the classical description of roughness using statistical parameters such as the correlation length does not lead to satisfactory predictions of radar backscattering, we use a multi-scale roughness description based on the wavelet transform and the Mallat algorithm. In this description, the surface is considered a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale. A second step of this study consists in adapting a direct model simulating radar backscattering, namely the small perturbation model, to this multi-scale surface description. We investigate the impact of this description on radar backscattering through a sensitivity analysis of the backscattering coefficient to the multi-scale roughness parameters. To invert the multi-scale small perturbation scattering model (MLS SPM) we use a multi-layer neural network architecture trained by the backpropagation learning rule. The inversion leads to satisfactory results with a relative uncertainty of 8%.
Keywords: Remote sensing, rough surfaces, inverse problems, SAR, radar scattering, neural networks, fractals.
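For illustration of the inversion step only (not the authors' MLS SPM), a minimal scikit-learn sketch of a backpropagation-trained network mapping simulated backscattering coefficients to surface parameters could look as follows; array shapes and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# sigma0: simulated backscattering coefficients (n_samples x n_configurations),
# params: corresponding multi-scale roughness + soil moisture parameters
# (n_samples x n_parameters); both would come from the forward SPM model.
def train_inverse_model(sigma0, params):
    """Backpropagation-trained inverse mapping (illustrative hyperparameters)."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                     solver="adam", max_iter=2000, random_state=0),
    )
    model.fit(sigma0, params)
    return model

def invert(model, observed_sigma0):
    """Retrieve surface parameters from observed radar backscatter."""
    return model.predict(np.atleast_2d(observed_sigma0))
```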
63 A Game-Based Product Modelling Environment for Non-Engineer
Authors: Guolong Zhong, Venkatesh Chennam Vijay, Ilias Oraifige
Abstract:
In the last 20 years, Knowledge Based Engineering (KBE) has shown its advantages in product development across different engineering areas such as automation, mechanical, civil and aerospace engineering, in terms of digital design automation and cost reduction achieved by automating repetitive design tasks through capturing, integrating, utilising and reusing the knowledge required in various aspects of product design. However, in the primary design stages, the descriptive information of a product is discrete and unorganized, while knowledge exists in various forms rather than as pure data. Thus, it is crucial to have an integrated product model that can represent the entire product information and its associated knowledge at the beginning of the product design. One of the shortcomings of existing product models is a lack of knowledge representation for the various aspects of product design and of its mapping to an interoperable schema. To overcome the limitations of existing product models and methodologies, two key factors are considered. First, the product model must have well-defined classes that can represent the entire product information and its associated knowledge. Second, the product model needs to be represented in an interoperable schema to ensure steady data exchange between different product modelling platforms and CAD software. This paper introduces a method to provide a general product model as a generative representation of a product, consisting of geometry information and non-geometry information, through a product modelling framework. The proposed method of capturing knowledge from designers through a knowledge file provides a simple and efficient way of collecting and transferring knowledge. Further, the knowledge schema provides a clear view of, and format for, the data that need to be gathered in order to achieve a unified knowledge exchange between different platforms. This study used a game-based platform to make the product modelling environment accessible to non-engineers. The paper then tests a use case based on the proposed game-based product modelling environment to validate its effectiveness among non-engineers.
Keywords: Game-based learning, knowledge based engineering, product modelling, design automation.
62 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities
Authors: Sumathi Poobal, G. Ravindran
Abstract:
Image compression is one of the most important applications of digital image processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression the original image is retrieved without any distortion; in lossy compression the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. In this work, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different modalities of images such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results it can be concluded that DCT gives higher PSNR values and FIC gives higher compression ratios. Hence, in medical image compression, DCT can be used wherever picture quality is preferred and FIC wherever compression of images for storage and transmission is the priority, without losing diagnostic picture quality.
Keywords: DCT, FIC, PIFS, PSNR.
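As a concrete illustration of the DCT-based lossy path and the PSNR metric (not the paper's exact implementation), a minimal NumPy/SciPy sketch is given below; the 8x8 block size and the coefficient-truncation rule stand in for a real quantization scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(img, block=8, keep=8):
    """Block-wise 2-D DCT, keeping only the lowest-frequency coefficients per
    block (a crude illustrative stand-in for quantization)."""
    out = img.astype(float).copy()
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            coeffs = dctn(img[r:r+block, c:c+block], norm="ortho")
            # Keep coefficients whose (row + column) frequency index is small
            mask = np.add.outer(np.arange(block), np.arange(block)) < keep
            out[r:r+block, c:c+block] = idctn(coeffs * mask, norm="ortho")
    return out

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB."""
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```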
61 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control
Authors: Rami N. Khushaba, Adel Al-Jumaily
Abstract:
The myoelectric signal (MES) is one of the biosignals utilized to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices using pattern recognition techniques revealed two problems: first, the classification performance of the system starts degrading as the number of motion classes to be classified increases; second, solving the first problem has required additional complicated methods that increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features using the Wavelet Packet Transform (WPT) applied to the MES from multiple channels, and then employs the Fuzzy c-means (FCM) algorithm to generate a measure that judges the suitability of features for classification. Finally, Principal Component Analysis (PCA) is utilized to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.
Keywords: Biomedical signal processing, data mining and information extraction, machine learning, rehabilitation.
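To make the WPT feature-extraction step concrete, here is an illustrative Python sketch using PyWavelets and scikit-learn; the decomposition depth, wavelet and the simple energy feature are assumptions rather than the paper's configuration, and the FCM feature-selection step is omitted.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wpt_energy_features(channel_signal, wavelet="db4", maxlevel=3):
    """Energy of each terminal node of a wavelet packet tree for one channel."""
    wp = pywt.WaveletPacket(channel_signal, wavelet=wavelet, maxlevel=maxlevel)
    nodes = wp.get_level(maxlevel, order="freq")
    return np.array([np.sum(node.data ** 2) for node in nodes])

def extract_features(mes_window):
    """mes_window: (n_channels, n_samples) segment of multi-channel MES."""
    return np.concatenate([wpt_energy_features(ch) for ch in mes_window])

def reduce_features(feature_matrix, n_components=10):
    """PCA reduction of the pooled feature matrix (n_windows, n_features)."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(feature_matrix), pca
```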
60 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features
Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung
Abstract:
This paper presents an effective traffic light detection method for night-time driving. First, candidate blobs of traffic lights are extracted from the RGB color image. The input image is represented in a dominant color domain using the color transform proposed by Ruta, and red and green dominant regions are selected as candidates. After candidate blob selection, a shape filter is applied for noise reduction using blob information such as length, area, and bounding box area. A multi-class classifier based on SVM (Support Vector Machine) is then applied to the candidates. Three kinds of features are used: basic features such as blob width, height, center coordinates and area; brightness-based stochastic features; and, in particular, geometric moment-based values computed between a candidate region and its adjacent region, which are proposed to improve detection performance. The proposed system is implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural road videos. The tests show that the proposed method using PF, BMF, and GMF reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.
Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.
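A minimal scikit-learn sketch of the multi-class SVM stage is shown below for illustration; the feature layout and parameters are placeholders, not the authors' exact PF/BMF/GMF implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def blob_feature_vector(blob):
    """Basic blob features (width, height, center, area); in the full system
    these would be concatenated with brightness and geometric-moment features."""
    return np.array([blob["w"], blob["h"], blob["cx"], blob["cy"], blob["area"]])

def train_traffic_light_classifier(blobs, labels):
    """labels: e.g. 'red', 'green', 'non-light' -> a multi-class (one-vs-rest) SVM."""
    X = np.array([blob_feature_vector(b) for b in blobs])
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", decision_function_shape="ovr"))
    clf.fit(X, labels)
    return clf
```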
59 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning
Authors: Jean Berger, Mohamed Barkaoui
Abstract:
Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute efficient path plans in near real time are mainly limited to providing few-move solutions. A new information-theoretic, open-loop decision model explicitly incorporating false alarm sensor readings, to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback, is presented. The decision model consists in minimizing expected entropy considering anticipated possible observation outcomes over a given time horizon. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment progressively integrating real visit outcomes on a rolling time horizon can be easily envisioned. Computational results show the value of the approach in comparison to alternate heuristics.
Keywords: Search path planning, false alarm, search-and-delivery, entropy, genetic algorithm.
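For readers unfamiliar with the genetic-algorithm machinery, a generic, stripped-down Python sketch of GA-based path search is given below; the representation, the operators and the placeholder fitness function are assumptions, not the paper's entropy-based objective.

```python
import random

def genetic_path_search(nodes, path_len, fitness, pop_size=50, generations=200,
                        crossover_rate=0.8, mutation_rate=0.1, seed=0):
    """Evolve fixed-length visit sequences over `nodes`, maximizing `fitness(path)`.
    `fitness` would encode, e.g., expected information gain along the path."""
    rng = random.Random(seed)
    population = [[rng.choice(nodes) for _ in range(path_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            if rng.random() < crossover_rate:            # one-point crossover
                cut = rng.randrange(1, path_len)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            if rng.random() < mutation_rate:             # point mutation
                child[rng.randrange(path_len)] = rng.choice(nodes)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```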
58 AI-Driven Cloud Security: Proactive Defense Against Evolving Cyber Threats
Authors: Ashly Joseph
Abstract:
Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud has a multitude of advantages, including scalability, flexibility, and cost-effectiveness, rendering it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the data, reputation, and financial stability of enterprises. Conventional security methods can become inadequate when confronted with increasingly intricate and dynamic threats. Artificial Intelligence (AI) technologies possess the capacity to significantly transform cloud security through their ability to promptly identify and thwart attacks, adjust to emerging risks, and offer intelligent perspectives for proactive security actions. The objective of this research study is to investigate the utilization of AI technologies in augmenting the security measures within cloud computing systems. This paper aims to offer significant insights and recommendations for businesses seeking to protect their cloud-based assets by analyzing the present state of cloud security, the capabilities of AI, and the possible advantages and obstacles associated with integrating AI into cloud security policies.
Keywords: Machine Learning, Natural Language Processing, Denial-of-Service attacks, Sentiment Analysis, Cloud computing.
57 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF
Authors: Karunakar A K, Manohara Pai M M
Abstract:
In the 3D-wavelet video coding framework, temporal filtering is done along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is needed for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, even to subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms over different standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all pairs of frames in a GOP except the first will have values within ±1 of the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector.
56 A Robust Salient Region Extraction Based on Color and Texture Features
Authors: Mingxin Zhang, Zhaogan Lu, Junyi Shen
Abstract:
In current research reports, salient regions are usually defined as those regions that present the main meaningful or semantic content. However, there are no uniform saliency metrics that describe the saliency of implicit image regions. Most common metrics treat as salient those regions that have many abrupt changes or some unpredictable characteristics, but such metrics fail to detect salient, useful regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish different regions. Thus, we present a novel saliency metric coupled with color and texture features, together with its corresponding salient region extraction methods. In order to evaluate the saliency values of implicit regions in an image, three main colors and multi-resolution Gabor features are used for the color and texture features, respectively. For each region, its saliency value is the total sum of its Euclidean distances to the other regions in the color and texture spaces. A specially synthesized image and several practical images with main salient regions are used to evaluate the performance of the proposed saliency metric against several common metrics, i.e., scale saliency, wavelet transform modulus maxima point density, and importance-index based metrics. Experimental results verify that the proposed saliency metric achieves more robust performance than the common saliency metrics.
Keywords: Salient regions, color and texture features, image segmentation, saliency metric.
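The core of the metric, as described, is a per-region sum of Euclidean distances to all other regions in the color and texture feature spaces; an illustrative NumPy sketch of that computation is shown below (the feature construction and the weighting are assumed, not the paper's exact Gabor setup).

```python
import numpy as np

def region_saliency(color_feats, texture_feats, alpha=0.5):
    """color_feats, texture_feats: (n_regions, d_c) and (n_regions, d_t) arrays,
    e.g. dominant-color descriptors and multi-resolution Gabor responses per region.
    Returns one saliency score per region: the (weighted) sum of Euclidean
    distances to every other region in the two feature spaces."""
    def pairwise_distance_sums(feats):
        diff = feats[:, None, :] - feats[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1)).sum(axis=1)
    return alpha * pairwise_distance_sums(color_feats) + \
           (1.0 - alpha) * pairwise_distance_sums(texture_feats)
```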
55 The Influence of Travel Experience within Perceived Public Transport Quality
Authors: Armando Cartenì, Ilaria Henke
Abstract:
Perceived public transport quality is an important driver that influences both customer satisfaction and mobility choices. Competition among transport operators requires improving the quality of services and identifying which attributes are perceived as relevant by passengers. Among the “traditional” public transport quality attributes are, for example, travel and waiting time, regularity of service, and ticket price. By contrast, some “non-conventional” attributes could significantly influence customer satisfaction jointly with the traditional ones. Among these, the beauty/aesthetics of transport terminals (e.g. rail stations and bus terminals) is probably one of the most impactful on user perception. Starting from these considerations, the point stressed in this paper is whether (and how much) the travel experience of the overall trip (e.g. how long the trip is, how many transport modes must be used) influences the perception of public transport quality. The aim of this paper is to investigate the weight of terminal quality (e.g. aesthetics, comfort and services offered) within the overall travel experience. The case study is the extra-urban Italian bus network. Passengers at a major Italian bus terminal were interviewed, and the analysis of the results shows that about 75% of travellers are willing to pay up to 30% more for the ticket in exchange for a high-quality terminal. A travel experience effect was observed: the average perceived transport quality varies with the characteristics of the overall trip. Passengers on a “long trip” (travel time greater than 2 hours) perceive the overall quality of the trip as low even if they pass through a high-quality terminal, while the opposite occurs for “short trip” passengers. This means that even if travellers pass through a high-quality station, their overall perception of that terminal can be significantly reduced if they are tired from a long trip. This result is important and, if confirmed through other case studies, will allow the conclusion that the “travel experience impact” must be considered as an explicit design variable for public transport services and planning.
Keywords: Transportation planning, sustainable mobility, decision support system, discrete choice model, design problem.
54 Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective
Authors: Swapnoneel Roy, Minhazur Rahman, Ashok Kumar Thakur
Abstract:
Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work we study several genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives together with lower and upper bounds on them, and then provide a comparison of the introduced primitives.
Keywords: Sorting primitives, genome rearrangements, transpositions, block interchanges, strip exchanges.
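To make the reversal primitive concrete, here is a small Python sketch of the operation together with a simple greedy (not minimum-length) procedure that sorts a permutation by reversals; it illustrates the primitive rather than any bound discussed in the paper.

```python
def reverse(perm, i, j):
    """Reversal primitive: reverse the substring perm[i..j] (inclusive)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def greedy_sort_by_reversals(perm):
    """Sort a permutation of 1..n into the identity using at most n-1 reversals.
    Greedy placement, so generally NOT the minimum reversal distance."""
    perm = list(perm)
    ops = []
    for target in range(len(perm)):
        pos = perm.index(target + 1)
        if pos != target:
            perm = reverse(perm, target, pos)
            ops.append((target, pos))
    return perm, ops

# Example: greedy_sort_by_reversals([3, 1, 4, 2]) reaches [1, 2, 3, 4] in 3 reversals.
```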
53 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As networks become larger and more complex than before, the density of user devices is also increasing. The development of Unmanned Aerial Vehicle (UAV) networks makes it possible to collect and transform data efficiently by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture to manage UAVs, using an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the UAV hierarchical cluster system performs the main function of reducing network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In this cluster, a head node and a vice head node UAV are selected considering the CPU, RAM and ROM memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data of the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to the target area is proposed as the traffic offloading algorithm.
Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles.
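As an illustration of the high-load-region detection step, a minimal scikit-learn sketch of clustering device positions with k-means and ranking candidate head nodes is given below; the load weighting and the head-node scoring are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_high_load_regions(device_positions, device_load, n_clusters=3, seed=0):
    """device_positions: (n_devices, 2) ground coordinates; device_load: per-device
    traffic demand used as a sample weight. Returns cluster centres (candidate
    UAV cluster deployment points) and per-device cluster labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(device_positions, sample_weight=device_load)
    return km.cluster_centers_, labels

def select_head_and_vice(uav_scores):
    """uav_scores: dict uav_id -> score combining CPU, RAM/ROM, battery, capacity.
    The two best-scoring UAVs become head node and vice head node (backup)."""
    ranked = sorted(uav_scores, key=uav_scores.get, reverse=True)
    return ranked[0], ranked[1]
```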
52 GridNtru: High Performance PKCS
Authors: Narasimham Challa, Jayaram Pradhan
Abstract:
Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overhead, key size and bandwidth. In particular, the RSA algorithm is used in many applications to provide security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials, which allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on factorization or the discrete logarithm problem; this means that, given sufficient computational resources and time, an adversary should still not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a present-day demand for applications that need security enforcement and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) or enterprise grids are proven approaches for developing high-end computing systems, and by utilizing them one can improve the performance of NTRU through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
Keywords: Alchemi, GridNtru, Ntru, PKCS.