Search results for: explicit algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4073

2153 Method to Find an ε-Optimal Control of a Stochastic Differential Equation Driven by a Brownian Motion

Authors: Francys Souza, Alberto Ohashi, Dorival Leao

Abstract:

We present a general method for finding ε-optimal controls for non-Markovian stochastic systems described by stochastic differential equations driven by Brownian motion, a problem recognized as difficult to solve. The contribution lies in the development of mathematical tools for the modeling and control of non-Markovian systems, whose applicability in different areas is well known. The methodology consists of discretizing the problem through a random discretization. In this way, we transform an infinite-dimensional problem into a finite-dimensional one; thereafter, we use measurable selection arguments to find a control in explicit form for the discretized problem. Then, we prove that the control found for the discretized problem is an ε-optimal control for the original problem. Our theory provides a concrete description of a rather general class of problems; among the main examples, we can highlight financial problems such as portfolio control, hedging, super-hedging, pairs trading, and others. Therefore, our main contribution is the development of a tool to compute explicitly the ε-optimal control for non-Markovian stochastic systems. The pathwise analysis, carried out through a random discretization jointly with measurable selection arguments, has provided us with a structure for transforming an infinite-dimensional problem into a finite-dimensional one. The theory is applied to stochastic control problems based on path-dependent stochastic differential equations, where both drift and diffusion components are controlled. We are able to show the optimal control explicitly with our method.
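
For intuition, here is a minimal, self-contained sketch of the discretize-then-optimize principle on a toy Markovian problem, assuming a fixed Euler time grid and a three-element control set in place of the paper's random discretization and measurable selection machinery (none of these choices come from the paper):

```python
import numpy as np

# Toy sketch of the discretize-then-optimize idea: backward induction on an
# Euler grid for dX = u dt + dW, maximizing E[-X_T^2] over a finite control
# set. The paper's random discretization and measurable selection are
# replaced by a fixed time grid and an argmax, so this illustrates the
# principle, not the algorithm itself.
T, n_steps, n_x = 1.0, 20, 101
dt = T / n_steps
controls = np.array([-1.0, 0.0, 1.0])        # finite control set (assumption)
x_grid = np.linspace(-3, 3, n_x)

V = -x_grid**2                                # terminal reward
policy = np.zeros((n_steps, n_x))
for k in reversed(range(n_steps)):
    Q = np.zeros((controls.size, n_x))
    for i, u in enumerate(controls):
        # E[V(x + u*dt + dW)] via a two-point quadrature for dW = +-sqrt(dt)
        for sign in (1.0, -1.0):
            Q[i] += 0.5 * np.interp(x_grid + u * dt + sign * np.sqrt(dt), x_grid, V)
    policy[k] = controls[Q.argmax(axis=0)]    # explicit control: the argmax
    V = Q.max(axis=0)

print("value at x = 0:", V[n_x // 2])
```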

Keywords: dynamic programming equation, optimal control, stochastic control, stochastic differential equation

Procedia PDF Downloads 194
2152 A Study on the Different Components of a Typical Back-Scattered Chipless RFID Tag Reflection

Authors: Fatemeh Babaeian, Nemai Chandra Karmakar

Abstract:

Chipless RFID is a wireless system for tracking and identification which uses passive tags for encoding data. The advantage of using a chipless RFID tag is having a planar tag which is printable on different low-cost materials like paper and plastic. The printed tag can be attached to different items at the labelling level. Since the price of a chipless RFID tag can be as low as a fraction of a cent, this technology has the potential to compete with conventional optical barcode labels. However, due to the passive structure of the tag, processing of the reflection signal is a crucial challenge. The captured signal reflected from a tag attached to an item consists of different components: the reflection from the reader antenna, the reflection from the item, the structural mode RCS component of the tag, and the antenna mode RCS of the tag. All these components are summed up in both the time and frequency domains. The reflection from the item and the structural mode RCS component can distort or saturate the frequency domain signal and cause difficulties in extracting the desired component, which is the antenna mode RCS. Therefore, it is necessary to study the reflection of the tag in both the time and frequency domains to better understand the nature of the captured chipless RFID signal. Further benefits of this study are finding an optimised encoding technique at the tag design level and finding the best processing algorithm for the chipless RFID signal at the decoding level. In this paper, the reflection from a typical backscattered chipless RFID tag with six resonances is analysed, and the different components of the signal are separated in both the time and frequency domains. Moreover, the time domain signal corresponding to each resonator of the tag is studied. The data for this processing were captured from simulation in CST Microwave Studio 2017. The outcome of this study is an understanding of the different components of a measured signal in a chipless RFID system and the discovery of a research gap: the need for an optimum detection algorithm for tag ID extraction.
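
To illustrate the separation problem, a hedged toy model: an early structural-mode echo plus later antenna-mode ringing from six resonators, with time gating used to isolate the resonance signature. All waveform parameters are illustrative assumptions, not values from the simulation in the paper:

```python
import numpy as np

# Toy backscatter: an early, short structural-mode echo plus later antenna-
# mode ringing made of damped sinusoids (one per resonator). Time gating
# removes the early component so the resonance signature survives in the
# spectrum.
fs = 20e9                                    # sample rate, 20 GS/s (assumption)
t = np.arange(0, 20e-9, 1 / fs)              # 20 ns record
structural = np.exp(-((t - 1e-9) / 0.2e-9) ** 2)             # early specular echo
res_freqs = np.array([2.5, 3.1, 3.8, 4.6, 5.2, 5.9]) * 1e9   # six resonances
antenna = sum(0.2 * np.exp(-(t - 3e-9).clip(0) / 4e-9)
              * np.sin(2 * np.pi * f * t) * (t > 3e-9) for f in res_freqs)
signal = structural + antenna

gate = t > 2.5e-9                            # discard the early structural return
spectrum = np.abs(np.fft.rfft(signal * gate))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# The gated spectrum now shows peaks near res_freqs only.
print(freqs[spectrum.argmax()] / 1e9, "GHz strongest gated component")
```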

Keywords: antenna mode RCS, chipless RFID tag, resonance, structural mode RCS

Procedia PDF Downloads 207
2151 A Radiomics Approach to Predict the Evolution of Prostate Imaging Reporting and Data System Score 3/5 Prostate Areas in Multiparametric Magnetic Resonance

Authors: Natascha C. D'Amico, Enzo Grossi, Giovanni Valbusa, Ala Malasevschi, Gianpiero Cardone, Sergio Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of areas classified as PI-RADS (Prostate Imaging Reporting and Data System) 3/5, recognized in multiparametric prostate magnetic resonance with T2-weighted (T2w), diffusion, and perfusion sequences with paramagnetic contrast. Methods and Materials: 24 cases undergoing multiparametric prostate MR and biopsy were admitted to this pilot study. The clinical outcome of the PI-RADS 3/5 areas was established through biopsy, which found 8 malignant tumours. The analysed images were acquired with a Philips Achieva 1.5T machine with a CE-T2-weighted sequence in the axial plane. Semi-automatic tumour segmentation was carried out on the MR images using the 3DSlicer image analysis software. 45 shape-based, intensity-based, and texture-based features were extracted and represented the input for preprocessing. An evolutionary algorithm (a TWIST system based on the KNN algorithm) was used to subdivide the dataset into training and testing sets and to select the features yielding the maximal amount of information. After this preprocessing, 20 input variables were selected, and different machine learning systems were used to develop a predictive model based on a training-testing crossover procedure. Results: The best machine learning system (a three-layer feed-forward neural network) obtained a global accuracy of 90% (80% sensitivity and 100% specificity) with an area under the ROC curve of 0.82. Conclusion: Machine learning systems coupled with radiomics show promising potential in distinguishing benign from malignant tumours in PI-RADS 3/5 areas.
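
As a rough sketch of the classification stage only, with synthetic stand-in data for the 24-case cohort and an illustrative network size (the paper's TWIST/KNN feature selection is not reproduced):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

# Assumes the 20 selected radiomic features are already extracted; random
# numbers stand in for the real cohort (8 malignant of 24).
rng = np.random.default_rng(1)
X = rng.normal(size=(24, 20))
y = np.array([1] * 8 + [0] * 16)                       # 1 = malignant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, stratify=y, random_state=1)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```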

Keywords: machine learning, MR prostate, PI-RADS 3, radiomics

Procedia PDF Downloads 191
2150 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in the form of a disjunctive normal form (DNF); 4) transformation of the DNF into an orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements in the ODNF with probabilistic elements, yielding a reliability estimation polynomial and a quantitative reliability value; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
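
Stage 5 can be illustrated with a short sketch that computes the system reliability from minimal success paths. Here inclusion-exclusion stands in for the paper's orthogonalization algorithm; for independent elements both yield the same probability. The bridge network below is a textbook example, not a system from the paper:

```python
from itertools import combinations

# Minimal paths of a classical five-element bridge network and the
# reliabilities of its elements (illustrative values).
paths = [{1, 3}, {2, 4}, {1, 5, 4}, {2, 5, 3}]
p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8, 5: 0.95}

def reliability(paths, p):
    # P(union of path events) by inclusion-exclusion; the intersection of
    # path events is "every element in the union of those paths works".
    total = 0.0
    for r in range(1, len(paths) + 1):
        for combo in combinations(paths, r):
            union = set().union(*combo)
            term = 1.0
            for e in union:
                term *= p[e]
            total += (-1) ** (r + 1) * term
    return total

print(f"system reliability: {reliability(paths, p):.4f}")
```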

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 70
2149 Graph-Oriented Summary for Optimized Resource Description Framework Graph Streams Processing

Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais

Abstract:

Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window phase often expires before the entire session finishes, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of RDF data streams, as the most relevant and pertinent information in the data is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs through three main approaches: (1) an approach for user queries (SPARQL) in order to extract their needs and group them into a more global query, (2) an extension of the closeness centrality measure drawn from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and the most expected approximate query results on summarized graphs compared to the source ones.
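
A minimal sketch of approach (2), assuming a plain (unextended) closeness centrality from networkx and a handful of illustrative triples:

```python
import networkx as nx

# An RDF graph viewed as a directed graph of triples, with closeness
# centrality used to rank the most informative nodes. Triples are toy
# examples; the paper's extended centrality is not reproduced.
triples = [
    ("station1", "measures", "temperature"),
    ("station1", "locatedIn", "cityA"),
    ("station2", "measures", "humidity"),
    ("station2", "locatedIn", "cityA"),
    ("cityA", "partOf", "regionX"),
]
G = nx.DiGraph()
for s, p, o in triples:
    G.add_edge(s, o, predicate=p)

# Keep the top-k most central nodes as the backbone of the summary graph.
scores = nx.closeness_centrality(G.to_undirected())
backbone = sorted(scores, key=scores.get, reverse=True)[:3]
print(backbone)
```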

Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query

Procedia PDF Downloads 206
2148 Towards a Methodology for the Assessment of Neighbourhood Design for Happiness

Authors: Tina Pujara

Abstract:

Urban and regional research in the newly emerging interdisciplinary field of happiness is seemingly limited. However, it is progressively being recognized that there is enormous potential for social and behavioral scientists to add a spatial dimension to it. In fact, the happiness of communities can be notably influenced by the design and maintenance of the neighborhoods they inhabit. The probable key reason is that places can facilitate human social connections and relationships. While it is increasingly being acknowledged that some neighborhood designs appear better suited to social connectedness than others, the plausible reasons why places deter these characteristics, and perhaps their influence on happiness, are largely unknown. In addition, an explicit step-wise methodology to assess neighborhood designs for the happiness of their communities is not known to exist. This paper is an attempt to develop such a methodological framework. The paper presents the development of a methodological framework for assessing neighborhood designs for happiness, with a particular focus on the outdoor shared spaces in neighborhoods. The developed methodological framework follows a mixed-method approach and draws upon four different sources of information. The framework proposes an empirical examination of the contribution of neighborhood factors, particularly outdoor shared spaces, to individual happiness. One of the main tools proposed for this empirical examination is Jan Gehl’s Public Space Public Life (PSPL) Survey. The developed framework, as presented in the paper, is a contribution towards the development of a consolidated methodology for assessing neighborhood designs for happiness, which can further serve as a unique tool to inform urban designers, architects, and other decision makers.

Keywords: happiness, methodology, neighbourhood design, outdoor shared spaces

Procedia PDF Downloads 166
2147 Category-Based Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Humans and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We mathematically present the basis of a new algorithmic approach that treats a historical source of continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the concept of category from mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 161
2146 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operational channel for a WLAN access point (AP) has traditionally been a static channel assignment process carried out by the user during the deployment of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterward. However, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to the high overhead, can cause the eventually selected channel to no longer be optimal for operation because of the dynamic sharing nature of the unlicensed band. This has inspired us to develop our own dynamic channel selection algorithm with reduced overhead through the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs which are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, which causes high overhead, the AP divides the STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With the proper design of the cluster-based reporting, the AP does not lose any information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal performance and, at times, better performance with a fraction of the overhead. We believe that this algorithm has great potential for the design of future dynamic channel selection algorithms with low overhead.
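
A small sketch of the cluster-based reporting idea, using DBSCAN (the clustering named in the keywords) on illustrative station positions, with a round-robin channel assignment inside each cluster:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Stations that are physically close are grouped with DBSCAN, and each
# station in a cluster reports on a different candidate channel, so the
# cluster as a whole covers all channels. Positions, eps, and the channel
# list are illustrative assumptions.
rng = np.random.default_rng(2)
stations = np.vstack([rng.normal(c, 1.0, size=(5, 2))
                      for c in ((0, 0), (20, 5), (10, 25))])
labels = DBSCAN(eps=3.0, min_samples=2).fit_predict(stations)

candidate_channels = [1, 6, 11, 36, 40]
assignments = {}
for cluster in set(labels) - {-1}:
    members = np.flatnonzero(labels == cluster)
    for i, sta in enumerate(members):
        # round-robin: one channel per station within the cluster
        assignments[int(sta)] = candidate_channels[i % len(candidate_channels)]
print(assignments)
```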

Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead

Procedia PDF Downloads 125
2145 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians to understand pathophysiological mechanisms, to make diagnoses and prognoses, and to choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which is essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. The existing algorithms, such as the Support Vector Machine (SVM), the K-means algorithm, and evolutionary algorithms, are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach has been proposed.
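
The kind of comparison described can be sketched as follows, with synthetic data standing in for a real microarray matrix and the silhouette score as one possible evaluation criterion (the paper's exact criteria are not specified in the abstract):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Run K-means on a synthetic, stand-in "expression matrix" (150 samples x
# 200 genes, three planted groups) for several cluster counts and score each
# clustering. Real microarray data would replace X.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(mu, 0.5, size=(50, 200)) for mu in (-1.0, 0.0, 1.0)])

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```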

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 329
2144 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception

Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom

Abstract:

Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can always provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses the obstacles' shape and location, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate the obstacles' field. In addition, a search mechanism is developed for the case where the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
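
A toy sketch of the exponential angle-deviation field; the gains a and b below are placeholders for the parameters the paper selects via the Lyapunov stability criterion:

```python
import numpy as np

# Each obstacle bends the robot heading by an angle that decays
# exponentially with distance, added or subtracted depending on whether the
# obstacle lies to the left or right of the robot-to-goal line.
def heading(robot, goal, obstacles, a=1.2, b=0.8):
    base = np.arctan2(goal[1] - robot[1], goal[0] - robot[0])
    dev = 0.0
    for obs in obstacles:
        d = np.hypot(obs[0] - robot[0], obs[1] - robot[1])
        # sign of the cross product: obstacle left (+) or right (-) of path
        side = np.sign((goal[0] - robot[0]) * (obs[1] - robot[1])
                       - (goal[1] - robot[1]) * (obs[0] - robot[0]))
        dev -= side * a * np.exp(-b * d)   # steer away, weaker when far
    return base + dev

print(heading((0.0, 0.0), (10.0, 0.0), [(5.0, 1.0)]))
```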

Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots

Procedia PDF Downloads 198
2143 An Investigation of Suppression in Mid-19th Century Japan: Case Study of the 1855 Catfish Prints as a Product of Censorship

Authors: Vasanth Narayanan

Abstract:

The mid-nineteenth century saw the Japanese elite and townsfolk alike undergo the now-infamous Ansei Edo earthquakes. The quakes decimated Japan in the final decades of the Tokugawa Era and, perhaps more consequentially, birthed a new genre of politically inspired artwork, the most notable of which are the namazu-e. This essay advocates an understanding of the 1855 Catfish Prints (namazu-e) that prioritizes the function of iconography and anthropomorphic deity in shaping the namazu-e into a wholly political experience that makes the censorship of the time part of its argument. The visual program is defined as the creation of a politically profitable experience, crafted through the union of explicit religion, highly masked commentary, and the impositions of censorship. The strategies by which the works are designed, in the face of censorship, to engage a less educated, pedestrian audience with its theme, including considerations of iconography, depictions of the working class, anthropomorphism, and the relationship between textual and visual elements, are discussed herein. The essay then takes up the question of the role of tense Japan–United States relations in fostering censorship and as a driver of the production of namazu-e. It is ultimately understood that the marriage of hefty censorship protocol, the explicitly religious medium, and inimical sentiment towards United States efforts at diplomacy renders the production of namazu-e an offspring of the censorship and deeply held frustrations of the time, cementing its status as a primitive form of peaceful protest against a seemingly apathetic government.

Keywords: Japan, Ansei Earthquake, Namazu, prints, censorship, religion

Procedia PDF Downloads 138
2142 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir

Authors: Ahmad Fahim Nasiry, Shigeo Honma

Abstract:

We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy's law are used to study the fluid flow through the porous medium, and the Laplacian term that defines the dispersion and diffusion of fluid in the sand, using heavy oil, is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. In the numerical simulations, the pressure and saturation equations are calculated by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, under the finite difference assumption. Furthermore, to better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to calculate the equations for pressure and water saturation and to draw the pressure contour line and the water distribution contour line for the reservoir.
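
For a flavour of the explicit scheme, a minimal 1D Buckley-Leverett sketch with an upwind finite difference and the usual quadratic fractional-flow function; grid sizes and the mobility ratio are illustrative, and the paper's 2D pressure/saturation coupling is not reproduced:

```python
import numpy as np

# Explicit upwind finite-difference solution of the 1D Buckley-Leverett
# saturation equation S_t + f(S)_x = 0, with water injected at the left.
nx, nt = 200, 400
dx, dt = 1.0 / nx, 0.5 / nt        # chosen so the CFL condition holds
M = 2.0                             # water/oil mobility ratio (assumption)

def frac_flow(S):
    return S**2 / (S**2 + (1.0 - S) ** 2 / M)

S = np.zeros(nx)
S[0] = 1.0                          # injection boundary condition
for _ in range(nt):
    f = frac_flow(S)
    S[1:] -= dt / dx * (f[1:] - f[:-1])   # upwind difference (flow to right)
    S[0] = 1.0
print("water front near cell", int(np.argmin(S > 0.05)))
```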

Keywords: numerical simulation, immiscible, finite difference, IADI, IDE, waterflooding

Procedia PDF Downloads 336
2141 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform

Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee

Abstract:

This research presents a multi-modal simulation for the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To bring present life into view, the virtual environment is generated through a presented scheme for the rapid and efficient construction of a 360° panoramic view. Then, an acoustical heritage model and a crowd model are presented and incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. However, the keystone of this research is a virtual walkthrough that shows virtual present life in 2D and virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd is used to portray present life in the 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered into 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques, and algorithms of this research. The virtual walkthrough was demonstrated to a group of respondents and evaluated through user-centred evaluation by navigating around the demonstration system. The results of the questionnaire-based evaluation show that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation and that such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
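
The classical Boid rules that the 3D crowd behaviour builds on can be sketched as follows; the weights, radius, and flock size are illustrative assumptions:

```python
import numpy as np

# The three classical Boid rules: separation (avoid crowding neighbours),
# alignment (match neighbours' velocity), cohesion (move toward the
# neighbours' centre of mass).
rng = np.random.default_rng(4)
pos = rng.uniform(0, 50, size=(30, 2))
vel = rng.normal(0, 1, size=(30, 2))

def boid_step(pos, vel, r=8.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < r)
        if nbr.any():
            new_vel[i] += w_sep * np.sum(pos[i] - pos[nbr], axis=0)   # separation
            new_vel[i] += w_ali * (vel[nbr].mean(axis=0) - vel[i])    # alignment
            new_vel[i] += w_coh * (pos[nbr].mean(axis=0) - pos[i])    # cohesion
    return pos + new_vel * dt, new_vel

for _ in range(100):
    pos, vel = boid_step(pos, vel)
print(pos.mean(axis=0))
```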

Keywords: Boid Algorithm, Crowd Simulation, Mobile Platform, Newtonian Laws, Virtual Heritage

Procedia PDF Downloads 280
2140 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm

Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra

Abstract:

With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes. Some of them make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable because of other existing systems working on the site, which could be blinded on most spectral levels. Furthermore, the reconstruction is required to work at long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed; its main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
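
A hedged sketch of the sub-pixel idea: refine the best integer disparity with a parabola fitted through the three matching costs around the minimum, then convert to range with Z = f·B/d. The focal length, baseline, and cost curve below are stand-ins for the real rig and matcher:

```python
import numpy as np

# Sub-pixel disparity refinement and depth conversion. At kilometre ranges a
# one-pixel disparity error dominates the range error, which is why
# sub-pixel interpolation matters.
f_px = 4000.0        # focal length in pixels (assumption)
B = 1.5              # stereo baseline in metres (assumption)

costs = np.array([9.0, 7.5, 3.2, 2.9, 3.6, 8.1])   # matching cost vs disparity
d0 = int(np.argmin(costs))
c_m, c_0, c_p = costs[d0 - 1], costs[d0], costs[d0 + 1]
# parabolic sub-pixel offset in (-0.5, 0.5) around the integer minimum
offset = 0.5 * (c_m - c_p) / (c_m - 2 * c_0 + c_p)
disparity = d0 + offset

Z = f_px * B / disparity
print(f"disparity {disparity:.3f} px -> range {Z:.1f} m")
```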

Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction

Procedia PDF Downloads 130
2139 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed method, called the local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts the edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. The process of integrating edge responses along with the local binary pattern achieves a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
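
The plain 8-neighbour LBP code that LDEDBP builds on can be sketched briefly (the LDP and derivative extensions from the paper are not reproduced here):

```python
import numpy as np

# Plain LBP: compare each pixel with its 8 neighbours (clockwise from the
# top-left) and pack the comparison bits into a 0-255 code per pixel.
def lbp(image):
    h, w = image.shape
    center = image[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nbr = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (nbr >= center).astype(int) << bit
    return code

img = (np.arange(36).reshape(6, 6) * 7) % 256
print(lbp(img))
```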

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 167
2138 Reimaging Archetype of Mosque: A Case Study on Contemporary Mosque Architecture in Bangladesh

Authors: Sabrina Rahman

Abstract:

The mosque is Islam’s most symbolic structure, as well as an expression of collective identity. From the explicit words of our Prophet, 'The earth has been created for me as a masjid and a place of purity, and whatever man from my Ummah finds himself in need of prayer, let him pray' (anywhere), it is obvious that a devout Muslim does not require a defined space or structure for divine worship, since the whole earth is his prayer house. Yet we see that from time immemorial, people throughout the Muslim world have painstakingly erected innumerable mosques. Mosque design, however, spans time, crosses boundaries, and expresses cultures. It is a cultural manifestation as much as one based on a regional building tradition or a certain interpretation of religion. The tendency to express physical signs of religion is not new; physical forms seem to convey symbolic messages. In recent times, however, these physical forms have been disappearing from mosque architecture projects in Bangladesh. The dome and the minaret, the most prominent symbols of the mosque, are being replaced by contextual and contemporary improvisation, departing from the subcontinental mosque architecture practice of earlier generations. Thus, the mosque projects of the last 15 years have established a contemporary architectural realm in their design. Contextuality, spiritual lighting, the serenity of space, the tranquility of outdoor spaces, and the texture of materials are widely establishing a new genre of Muslim prayer space. Case-study-based research will lead to the specification of the significant factors of this modernism. Based on the findings, the paper presents evidence from recent projects as well as a guideline for the future image of contemporary mosque architecture in Bangladesh.

Keywords: contemporary architecture, modernism, prayer space, symbolism

Procedia PDF Downloads 125
2137 The Design and Implementation of an Enhanced 2D Mesh Switch

Authors: Manel Langar, Riad Bourguiba, Jaouhar Mouine

Abstract:

In this paper, we propose the design and implementation of an enhanced wormhole virtual channel on-chip router. It is the heart of a mesh NoC using the XY deterministic routing algorithm. It is characterized by its simple virtual channel allocation strategy, which allows reducing the area and the complexity of connections without affecting performance. We implemented our router on a Tezzaron process to validate its performance. This router is a basic element that will later be used to design a 3D mesh NoC.
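
XY deterministic routing is compact enough to sketch in full; the port names below are illustrative conventions:

```python
# XY routing in a 2D mesh NoC: route fully along the X dimension first,
# then along Y. This fixed dimension order is what makes the algorithm
# deterministic and deadlock-free in a mesh.
def xy_route(cur, dst):
    """Return the output port for a packet at node cur heading to dst."""
    (cx, cy), (dx_, dy_) = cur, dst
    if cx < dx_:
        return "EAST"
    if cx > dx_:
        return "WEST"
    if cy < dy_:
        return "NORTH"
    if cy > dy_:
        return "SOUTH"
    return "LOCAL"   # packet has arrived

# hop-by-hop trace from (0, 0) to (2, 1)
node = (0, 0)
moves = {"EAST": (1, 0), "WEST": (-1, 0), "NORTH": (0, 1), "SOUTH": (0, -1)}
while (port := xy_route(node, (2, 1))) != "LOCAL":
    print(node, "->", port)
    node = (node[0] + moves[port][0], node[1] + moves[port][1])
```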

Keywords: NoC, mesh, router, 3D NoC

Procedia PDF Downloads 570
2136 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine

Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski

Abstract:

Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in the sales of new cars increases every year. However, their performance and, above all, their range cannot yet be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which is hard to accept given the time needed to refill a fuel tank. Therefore, ways to reduce the adverse features of cars equipped only with electric motors are being sought. One of the methods is the combination of an electric engine as the main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers like Mazda and Audi, together with the best companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. The electricity generator is powered by a Wankel engine, which had seemed to pass into history. This small, lightweight engine with a rotating piston and a very low vibration level turned out to be an excellent source for such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, like high fuel consumption, high emission of toxic substances, or the short lifetime typical of its traditional application. Operation of the engine at a constant rotational speed enables a significant increase in its lifetime, and its small external dimensions enable compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus. The research object was the Aixro XR50 rotary engine with the electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, capable of applying any number of load cycles. The parameters recorded included speed and torque as well as the position of the throttle in the inlet system. Increasing and decreasing the load did not significantly change the engine speed, which means that the control algorithm parameters are correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
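
The abstract does not disclose the controller structure, so the following is only a hedged sketch of a constant-speed governor: a PI throttle controller acting on a toy first-order engine model, with all gains, constants, and the load profile assumed:

```python
# Toy speed-governor loop of the kind exercised on the dynamometer: hold a
# speed setpoint under step changes in load torque by adjusting the throttle.
SETPOINT, KP, KI, DT = 5000.0, 0.002, 0.004, 0.01   # rpm, PI gains, step (s)

speed, integral = 4000.0, 0.0
for step in range(2000):                             # 20 s of simulated time
    load = 10.0 if step > 1000 else 5.0              # step change in load torque
    error = SETPOINT - speed
    integral += error * DT
    u = KP * error + KI * integral
    throttle = min(max(u, 0.0), 1.0)
    if u != throttle:                                # basic anti-windup
        integral -= error * DT
    # toy engine response: torque from throttle vs. load, light damping
    speed += DT * (3000.0 * throttle - 40.0 * load - 0.1 * (speed - 4000.0))
print(round(speed), "rpm under load", load)
```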

Keywords: electric vehicle, power generator, range extender, Wankel engine

Procedia PDF Downloads 157
2135 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys, and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and therefore yielding lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when each of these two methods was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as in the nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis by plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
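
The two-layer structure can be sketched as follows, with synthetic data standing in for the OzFlux drivers and illustrative model sizes (assuming scikit-learn's MLPRegressor for the FFNNs and the xgboost package for XGB; the paper's exact architectures are not given in the abstract):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

# Several differently shaped FFNNs produce first-layer estimates; an XGBoost
# model maps those estimates to the final CO2 flux, mirroring the described
# two-layer ensemble. Toy data replaces the meteorological drivers.
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 6))
y = X[:, 0] * 2 - np.sin(X[:, 1]) + rng.normal(0, 0.3, 2000)

X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=1000, random_state=0).fit(X_tr, y_tr)
         for h in [(8,), (16,), (32,), (16, 8), (32, 16)]]

# first-layer outputs become the meta-learner's inputs
Z_tr = np.column_stack([m.predict(X_tr) for m in ffnns])
Z_te = np.column_stack([m.predict(X_te) for m in ffnns])
meta = XGBRegressor(n_estimators=200, max_depth=3).fit(Z_tr, y_tr)

rmse = float(np.sqrt(np.mean((meta.predict(Z_te) - y_te) ** 2)))
print("ensemble RMSE:", round(rmse, 3))
```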

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 145
2134 Higher Education for Knowledge and Technology Transfer in Egypt

Authors: M. A. Zaki Ewiss, S. Afifi

Abstract:

Nahda University (NUB) believes that the internationalisation of higher education can provide global society with an education that meets current needs and can respond efficiently to contemporary demands and challenges, which are characterized by globalisation, interdependence, and multiculturalism. In this paper, we discuss the challenges of the Egyptian higher education system and the future vision for improving it. The following issues are considered: increasing knowledge through the development of specialized programs of study at the university; developing international cooperation programs focused on developing the skills of students and staff and providing academic culture and learning opportunities; increasing the opportunities for student mobility and research projects for faculty members; increasing opportunities for staff, faculty, and students to continue learning at foreign universities and to benefit from scholarships in various disciplines; taking advantage of educational experience and modern teaching methods; providing opportunities to study abroad without increasing the time required for graduation, through greater integration of curricula and programs; increasing cultural interaction through student exchanges; and improving and providing job opportunities for graduates through participation in the global labor market. This document sets out the NUB strategy for moving towards that vision. We are confident that greater explicit differentiation, greater freedom, and greater collaboration are the keys to delivering the further improvement in quality that we shall need to retain and strengthen our position as one of the world’s leading higher education systems.

Keywords: technology transfer higher education, knowledge transfer, internationalisation, mobility

Procedia PDF Downloads 440
2133 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis of high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods, and we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require determining the true number of clusters in advance, through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrate that the proposed FRRM outperformed the competitors.
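
A hedged sketch of the random-subspace, multiple-k idea; the ANOVA F-statistic used here to score sampled features against cluster labels is only a stand-in for the paper's importance measure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif

# Repeatedly cluster on random feature subsets with several values of k,
# score each sampled feature against the resulting labels, and average the
# scores, so no single "true" k is ever required.
rng = np.random.default_rng(6)
n, d_inf, d_noise = 200, 4, 6
labels_true = np.repeat([0, 1], n // 2)
X = rng.normal(0, 1, size=(n, d_inf + d_noise))
X[labels_true == 1, :d_inf] += 3.0          # first 4 features carry structure

scores = np.zeros(X.shape[1])
counts = np.zeros(X.shape[1])
for _ in range(50):                          # random subspace ensemble
    feats = rng.choice(X.shape[1], size=5, replace=False)
    for k in (2, 3, 4):                      # multiple-k ensemble
        labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(X[:, feats])
        F, _ = f_classif(X[:, feats], labels)
        scores[feats] += F
        counts[feats] += 1

ranking = np.argsort(-(scores / counts))
print("features ranked by importance:", ranking)
```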

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 342
2132 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived using a Markov model for a given bridge element group, which, however, is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in the model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict the conditions at the network level accurately but also to capture the model uncertainties within the given confidence interval.
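
The MHA-based MCMC step can be sketched on a toy version of the problem: sampling two Weibull parameters of a condition curve from noisy observations, with an assumed Gaussian likelihood and flat priors (not the rail-bridge dataset or the paper's exact likelihood):

```python
import numpy as np

# Random-walk Metropolis-Hastings over the (scale, shape) parameters of a
# Weibull-type survival curve fitted to noisy condition observations.
rng = np.random.default_rng(7)
ages = np.linspace(1, 40, 30)
true_scale, true_shape = 25.0, 2.0
obs = np.exp(-(ages / true_scale) ** true_shape) + rng.normal(0, 0.03, ages.size)

def log_post(scale, shape, sigma=0.03):
    if scale <= 0 or shape <= 0:
        return -np.inf                       # flat priors on the positive axis
    pred = np.exp(-(ages / scale) ** shape)  # fraction still in the top state
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2

theta = np.array([15.0, 1.0])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.3, size=2)      # random-walk proposal
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop                               # accept
    samples.append(theta.copy())
post = np.array(samples[5000:])                    # drop burn-in
print("posterior mean (scale, shape):", post.mean(axis=0).round(2))
```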

Keywords: bridge deterioration modelling, modified weibull approach, MCMC, metropolis-hastings algorithm, bayesian approach, Markov deterioration models

Procedia PDF Downloads 732
2131 Understanding Factors that Affect the Prior Knowledge of Deaf and Hard of Hearing Students and their Relation to Reading Comprehension

Authors: Khalid Alasim

Abstract:

The reading comprehension levels of students who are deaf or hard of hearing (DHH) are low compared to those of their hearing peers. One possible reason for these low reading levels is the students’ prior knowledge. This study investigated the potential factors that might affect DHH students’ prior knowledge, including their degree of hearing loss, the presence or absence of family members with a hearing loss, and educational stage (elementary-middle school). The study also examined the contribution of prior knowledge to predicting DHH students’ reading comprehension levels and investigated differences in the students’ scores based on the type of question, including text-explicit (TE), text-implicit (TI), and script-implicit (SI) questions. Thirty-one elementary and middle-school students completed a demographic form and an assessment, and descriptive statistics and multiple and simple linear regressions were used to answer the research questions. The findings indicated that the independent variables, degree of hearing loss, presence or absence of family members with hearing loss, and educational stage, explained little of the variance in DHH students’ prior knowledge. Further, the results showed that DHH students’ prior knowledge affected their reading comprehension. Finally, the results demonstrated that the participants were able to answer more of the TI questions correctly than the TE and SI questions. The study concluded that prior knowledge is important in these students’ reading comprehension, and it is also important for teachers and parents of DHH children to use effective ways to increase their students’ and children’s prior knowledge.

Keywords: reading comprehension, prior knowledge, metacognition, elementary, self-contained classrooms

Procedia PDF Downloads 108
2130 Graphical Theoretical Construction of Discrete-Time Share Price Paths from Matroids

Authors: Min Wang, Sergey Utev

Abstract:

The lessons from the 2007-09 global financial crisis have driven scientific research, which considers the design of new methodologies and financial models for the global market. The quantum mechanics approach was introduced into unpredictable stock market modeling. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. Path calculation can be studied in two ways: one way is labeling, and the other is computational. Labeling is a part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graphical theoretical construction of individual share price paths via matroids is presented. Firstly, the theory of matroids and the relationship between lattice path matroids and Tutte polynomials are studied, and ways to connect points in a lattice path matroid with Tutte polynomials are suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid, rather than from a general lattice path matroid. Lastly, it is suggested that there is a way to represent share price paths via general binary trees, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and the Tutte polynomials of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.

Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial

Procedia PDF Downloads 143
2129 Temperature Distribution for Asphalt Concrete-Concrete Composite Pavement

Authors: Tetsya Sok, Seong Jae Hong, Young Kyu Kim, Seung Woo Lee

Abstract:

The temperature distribution in asphalt concrete (AC)-concrete composite pavement is one of the main factors affecting the performance life of the pavement. The temperature gradient in the concrete slab underneath the AC layer produces critical curling stresses and leads to de-bonding of the AC-concrete interface. These stresses, when enhanced by repetitive axial loadings, also contribute to fatigue damage and eventual crack development within the slab. Moreover, the temperature change within the concrete slab causes the slab to contract and expand, which significantly induces reflective cracking in the AC layer. In this paper, the numerical prediction of pavement temperature was investigated using a one-dimensional finite difference method (FDM) in a fully explicit scheme. The numerical model provides a fundamental and clear understanding of the heat energy balance, including incoming and outgoing thermal energies in addition to the heat dissipated in the system. Using reliable meteorological data for daily air temperature, solar radiation, and wind speed, together with variable pavement surface properties, the predicted pavement temperature profile was validated against field-measured data. Additionally, the effects of AC thickness and daily air temperature on the temperature profile in the underlying concrete were investigated. Based on the obtained results, the temperature of AC-concrete composite pavement predicted numerically with the FDM showed good accuracy compared to the field-measured data, and a thicker AC layer significantly insulates the temperature distribution in the underlying concrete slab.
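
A minimal sketch of a fully explicit 1D finite-difference temperature model through an AC layer over a concrete slab; the layer properties, depths, and sinusoidal surface forcing are illustrative assumptions, and the paper's full surface energy balance (solar radiation, wind) is not reproduced:

```python
import numpy as np

# Explicit FDM for 1D heat conduction: interior nodes follow the heat
# equation, the surface follows a prescribed daily air-temperature cycle.
depth, nz = 0.40, 41                 # 0.40 m profile, 1 cm spacing
dz = depth / (nz - 1)
z = np.linspace(0, depth, nz)
alpha = np.where(z < 0.10, 1.0e-6, 0.8e-6)   # AC above 10 cm, concrete below

dt = 30.0                                     # s; satisfies dt < dz^2 / (2*alpha)
T = np.full(nz, 20.0)                         # initial temperature, deg C
for step in range(int(24 * 3600 / dt)):       # one day
    t = step * dt
    T[0] = 20.0 + 10.0 * np.sin(2 * np.pi * t / 86400.0)   # surface forcing
    T[1:-1] += alpha[1:-1] * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                             # insulated bottom boundary
print("AC/concrete interface temperature:", round(T[10], 2), "deg C")
```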

Keywords: asphalt concrete, finite different method (FDM), curling effect, heat transfer, solar radiation

Procedia PDF Downloads 273
2128 Leveraging Positive Psychology Practices to Elevate the Impact of Check-In, Check-Out (CICO) in Schools

Authors: Kimberli Breen

Abstract:

Background Check-In, Check-Out is noted as the most widely implemented evidence-based intervention for youth at-promise within schools. Over twenty years of peer-reviewed research demonstrates the powerful effects of this Positive Behavioral Interventions and Supports (PBIS) practice when implemented with fidelity. However, literature to date has not explicitly connected this intervention with Positive Psychology. Aims This session will illustrate the powerful role Positive Psychology and core elements of PERMA play in the worldwide success of this intervention and how more explicitly aligning Positive Behavioral Interventions and Supports (PBIS) practices with Positive Psychology might remove common barriers to current implementation. Method Students receiving the Check-In, Check-Out intervention experience a warm, positive greeting from a caring adult (CICO Coach) before entering their first class of the day. Teachers then provide high frequency positive feedback to the students at the end of each time block, or segment, of the day. An “optimistic close” to the day is then provided by the same CICO Coach at the end of the school day via the “check-out” process, where students assess the day’s accomplishments and goal-set for the next day. Results CICO clearly aligns with the Positive Psychology core elements of PERMA (Positive Emotion, Engagement, Relationships, Meaning and Accomplishments) and could be further strengthened through explicit integration. Conclusion The already powerful impact and reach of the Check-In, Check-Out intervention can be further enhanced and expanded through greater alignment with Positive Psychology elements and practices. Initiating this important alignment with CICO also offers promise for further integration of Positive Psychology and Positive Behavioral Interventions and Supports.

Keywords: positive psychology, check-in check-out, schools, alignment

Procedia PDF Downloads 71
2127 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated on social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of big data technology into the tourism industry will allow companies to learn where their customers have been and what they like. This information can then be used by businesses, such as those in charge of managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of tourists' online reviews, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. We constructed a web review database using a crawler and web scraping techniques for experimental validation, to evaluate the effectiveness of our methodology. The sentences in text form were first classified with the Vader and RoBERTa models to get the polarity of the reviews. In this paper, we studied feature extraction methods, such as count vectorization and TF-IDF vectorization, and implemented a Convolutional Neural Network (CNN) classifier algorithm for the sentiment analysis to decide whether a tourist's attitude towards a destination is positive, negative, or simply neutral, based on the review text posted online. The results demonstrated that with the CNN algorithm, after pre-processing and cleaning the dataset, we obtained an accuracy of 96.12% for the positive and negative sentiment analysis.
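
The two feature-extraction steps can be sketched side by side; a logistic-regression stand-in replaces the CNN, whose architecture is not specified in the abstract, and the reviews are toy examples, not the scraped tourism dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Compare count vectorization against TF-IDF vectorization on the same tiny
# labelled review set; either matrix would feed the downstream classifier.
reviews = ["loved the beach and the food", "terrible service, dirty room",
           "amazing views, friendly staff", "overpriced and crowded"]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

for Vec in (CountVectorizer, TfidfVectorizer):
    X = Vec().fit_transform(reviews)
    clf = LogisticRegression().fit(X, labels)
    print(Vec.__name__, clf.score(X, labels))
```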

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 93
2126 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience rating technique in actuarial science and can be seen as one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported) claims, where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared error loss, which is symmetric, with informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer’s belief about the insured’s risk level, which is updated after collection of the insured’s data at the end of the period. However, the explicit form of the Bayesian premium, in the case where the prior is not a member of the exponential family, can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using a numerical approximation (the Lindley approximation), which is one of the suitable approximation methods for such problems: it approaches the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error technique is used to compare the Bayesian premium estimator under the above loss functions.
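
For context, the standard forms under the two loss functions can be written down (a textbook result, hedged: the paper's Lindley-specific expressions additionally require the Lindley likelihood and the chosen priors). Under entropy loss the Bayes rule is the reciprocal of the posterior mean of θ⁻¹, while squared error loss gives the posterior mean:

```latex
% Bayes premiums under the two loss functions (standard result).
\[
L_{\mathrm{ent}}(\hat{\theta},\theta)
  = \frac{\hat{\theta}}{\theta} - \ln\frac{\hat{\theta}}{\theta} - 1,
\qquad
\hat{\theta}_{\mathrm{ent}}
  = \left( \mathbb{E}\!\left[\theta^{-1} \mid \underline{x}\right] \right)^{-1},
\qquad
\hat{\theta}_{\mathrm{SE}} = \mathbb{E}\!\left[\theta \mid \underline{x}\right].
\]
```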

Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation

Procedia PDF Downloads 337
2125 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The world wide web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows locating information in little time and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
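
A quick illustration of the properties the model rests on, using a preferential-attachment (Barabási-Albert) generator as a stand-in; sizes are illustrative and the paper's own model is not reproduced:

```python
import networkx as nx

# A preferential-attachment graph reproduces the power-law degree
# distribution; standard metrics then check clustering and average distance.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("max/median degree:", degrees[0], degrees[len(degrees) // 2])  # heavy tail
print("clustering coefficient:", round(nx.average_clustering(G), 4))
print("average distance:", round(nx.average_shortest_path_length(G), 2))
```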

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 272
2124 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico’s population has seen a rise in different negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, which have an important impact on a person’s physical, mental, and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. With the use of different non-invasive physiological sensors, such as EEG, luminosity, and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and to the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level, and noise. Finally, a neural network based control algorithm is given the obtained data in order to feed back to the system and automate the modification of the environment variables and the audiovisual content shown, in an effort to positively alter the subject’s emotional state through these changes. During the research, it was found that light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, while green illumination, along with relaxing images, decreased the subject’s level of anxiety. Specific differences between men and women were found as to which types of images generated a greater impact in each gender. The population sample was mainly constituted by college students, whose data analysis showed a decreased sensitivity to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people’s lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since this can also be applied to control a person’s emotions and shift their decision making.

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 145