Search results for: Akaike Information Criterion (AIC)
334 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes
Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani
Abstract:
In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have investigated the properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19], [21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is currently scanning. In this paper, we mainly investigate the accepting powers of cooperating systems of eight-way and seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or into the future, but not into the past, on a four-dimensional input tape.
Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.
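For readers who want a concrete handle on the model, the following minimal Python sketch mimics the communication rule described above: automata scanning the same cell of the 4D tape can read each other's states, and a seven-way automaton simply lacks the "past" move. All names and the demo transition are illustrative assumptions, not constructions from the paper.

```python
# Minimal sketch of a cooperating system of seven-way 4D finite automata.

# Unit moves on a 4D input tape with (x, y, z, t) axes.
MOVES_8WAY = {
    "east": (1, 0, 0, 0), "west": (-1, 0, 0, 0),
    "south": (0, 1, 0, 0), "north": (0, -1, 0, 0),
    "up": (0, 0, 1, 0), "down": (0, 0, -1, 0),
    "future": (0, 0, 0, 1), "past": (0, 0, 0, -1),
}
# A seven-way automaton may not move into the past.
MOVES_7WAY = {k: v for k, v in MOVES_8WAY.items() if k != "past"}

class Automaton:
    def __init__(self, state, pos):
        self.state = state
        self.pos = pos  # 4-tuple of tape coordinates

def step(automata, transition):
    """One parallel step; each automaton sees the states of the other
    automata scanning the same cell (the communication rule)."""
    for a in automata:
        neighbors = frozenset(b.state for b in automata
                              if b is not a and b.pos == a.pos)
        new_state, move = transition(a.state, a.pos, neighbors)
        delta = MOVES_7WAY[move]
        a.state = new_state
        a.pos = tuple(p + d for p, d in zip(a.pos, delta))

# Demo: two automata start on the same cell and drift toward the future.
ts = lambda state, pos, others: (state + 1 if others else state, "future")
team = [Automaton(0, (0, 0, 0, 0)), Automaton(0, (0, 0, 0, 0))]
step(team, ts)
print([(a.state, a.pos) for a in team])
```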
333 The DAQ Debugger for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
Keywords: DAQ debugger, data acquisition system, FPGA, system signals, Qt framework.
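As an illustration of the signal-handling mechanism described above, here is a hedged Python sketch of such a debugging layer; the real DAQ Debugger is part of the C++/Qt iFDAQ code base, so the report path, format and function names below are assumptions, not the authors' implementation.

```python
# Sketch of a signal-driven debugging layer: on a fatal signal, dump a
# report with the stack traces of all threads instead of attaching a
# conventional debugger. Illustrative only; not the iFDAQ code.
import datetime
import faulthandler
import signal
import sys
import threading
import traceback

REPORT_PATH = "daq_debug_report.txt"  # assumed location

def _write_report(signum, frame):
    with open(REPORT_PATH, "a") as f:
        f.write(f"--- signal {signum} at {datetime.datetime.now()} ---\n")
        for tid, stack in sys._current_frames().items():
            f.write(f"thread {tid}:\n")
            f.write("".join(traceback.format_stack(stack)))
    # Restore default behaviour and re-raise so the crash stays observable.
    signal.signal(signum, signal.SIG_DFL)
    signal.raise_signal(signum)

def install_daq_debugger():
    faulthandler.enable()  # low-level dump for SIGSEGV and friends
    for sig in (signal.SIGABRT, signal.SIGTERM):
        signal.signal(sig, _write_report)

install_daq_debugger()
```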
332 ORank: An Ontology Based System for Ranking Documents
Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee
Abstract:
The increasing growth of information volume on the internet creates an increasing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques to extract phrases and stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. The test results are included at the end of the paper.
Keywords: Document ranking, ontology, spread activation algorithm, annotation.
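The improved spread activation step can be pictured with a short sketch. The following Python function is an illustrative weighted spread activation over an ontology graph; the relation weights, decay factor and threshold are assumed example values, not ORank's actual parameters.

```python
# Illustrative weighted spread activation for query expansion over an
# ontology graph, in the spirit of the improved algorithm described above.
def spread_activation(graph, seeds, rel_weights, decay=0.5, threshold=0.1):
    """graph: {concept: [(neighbor, relation), ...]};
    seeds: {concept: initial_activation}. Returns expanded activations."""
    activation = dict(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neigh, rel in graph.get(node, []):
            # Activation decays and is scaled by the relation's weight.
            spread = activation[node] * decay * rel_weights.get(rel, 0.0)
            if spread > threshold and spread > activation.get(neigh, 0.0):
                activation[neigh] = spread
                frontier.append(neigh)
    return activation

# Example: expand the query concept "car" through a tiny ontology.
graph = {"car": [("vehicle", "is-a"), ("wheel", "part-of")]}
print(spread_activation(graph, {"car": 1.0},
                        {"is-a": 0.9, "part-of": 0.6}))
```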
331 A Remote Sensing Approach for Vulnerability and Environmental Change in Apodi Valley Region, Northeast Brazil
Authors: Mukesh Singh Boori, Venerando Eustáquio Amaro
Abstract:
The objective of this study was to improve our understanding of vulnerability and environmental change, its causes (intensity and distribution) and the human-environment effect on the ecosystem in the Apodi Valley Region. This paper identifies, assesses and classifies vulnerability and environmental change in the Apodi valley region using a combined approach of landscape pattern and ecosystem sensitivity. Models were developed using the following five thematic layers: geology, geomorphology, soil, vegetation and land use/cover, by means of a Geographical Information System (GIS) based on hydro-geophysical parameters. In spite of data problems and shortcomings, using ESRI's ArcGIS 9.3 program, the vulnerability score, used to classify, weight and combine 15 separate land cover classes into a single indicator, provides a reliable measure of differences (6 classes) among regions and communities that are exposed to similar ranges of hazards. Indeed, the ongoing and active development of vulnerability concepts and methods has already produced tools to help overcome common issues, such as acting in a context of high uncertainty, taking into account the dynamics and spatial scale of a social-ecological system, or gathering viewpoints from different sciences to combine human- and impact-based approaches. Based on this assessment, this paper proposes concrete perspectives and possibilities to benefit from existing commonalities in the construction and application of assessment tools.
Keywords: Vulnerability, land use/cover, ecosystem, remote sensing, GIS.
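A minimal sketch of the weighted-overlay step described above, assuming made-up layer weights and synthetic rasters; a real workflow would run on ArcGIS layers rather than numpy arrays.

```python
# Weighted overlay: combine thematic class-score rasters into one
# vulnerability indicator and bin it into 6 classes. Weights and bin
# edges are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)  # tiny stand-in for a real raster
layers = {  # geology, geomorphology, soil, vegetation, land use/cover
    name: rng.integers(1, 6, shape).astype(float)
    for name in ["geology", "geomorph", "soil", "vegetation", "landuse"]
}
weights = {"geology": 0.15, "geomorph": 0.20, "soil": 0.20,
           "vegetation": 0.20, "landuse": 0.25}

score = sum(weights[n] * layers[n] for n in layers)
# Bin the continuous score into 6 vulnerability classes (0..5).
edges = np.linspace(score.min(), score.max(), 7)[1:-1]
classes = np.digitize(score, edges)
print(classes)
```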
330 An Empirical Study about RFID Acceptance: Focus on the Employees in Korea
Authors: Mi Sook Lee
Abstract:
The number of companies adopting RFID in Korea has increased continuously due to the domestic development of information technology. The acceptance of RFID by companies in Korea has enabled them to do business with many global enterprises much more efficiently and effectively. According to a survey [33, p. 76], many companies in Korea have used RFID for inventory or distribution management. But the use of RFID by companies in Korea is in its early stages, and its potential value hasn't been fully realized yet. At this time, it is very important to investigate the factors that affect RFID acceptance. For this study, many previous studies were referenced and some RFID experts were interviewed. Through a pilot test, four factors affecting RFID acceptance were selected - security trust, employee knowledge, partner influence and service provider trust - and an extended technology acceptance model (e-TAM) incorporating these factors was presented. The proposed model was empirically tested using data collected from employees in companies or public enterprises. In order to analyze the relationships between the exogenous variables and the four TAM variables, structural equation modeling (SEM) was used, with SPSS 12.0 and AMOS 7.0 for the analyses. The results are summarized as follows: 1) security trust perceived by employees positively influences perceived usefulness and perceived ease of use; 2) employees' knowledge of RFID positively influences only perceived ease of use; 3) a partner's influence for RFID acceptance positively influences only perceived usefulness; 4) service provider trust very positively influences perceived usefulness and perceived ease of use; 5) the relationships between the TAM variables are the same as in previous studies.
Keywords: RFID, TAM, security trust, employee knowledge, partner influence, service provider trust.
329 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified the statistical thermal anomalies using land surface temperature and the residuals calculated from the modeled temperatures and the ASTER-derived surface temperatures. Areas that had temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means for identifying and locating areas of geothermal activity over large areas and rough terrain.
Keywords: Thermal remote sensing, insolation model, land surface temperature, geothermal anomalies.
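The anomaly rule described in the abstract reduces to a few lines of array arithmetic. The sketch below uses synthetic temperatures in place of the ASTER scenes; the 1σ/2σ thresholds follow the text.

```python
# Flag pixels whose temperature residual (observed minus solar-modeled
# temperature) exceeds 2-sigma, or lies between 1-sigma and 2-sigma.
import numpy as np

rng = np.random.default_rng(1)
lst = rng.normal(280.0, 2.0, (100, 100))      # ASTER-derived LST (K)
modeled = rng.normal(280.0, 1.5, (100, 100))  # insolation-model temperature

residual = lst - modeled
mu, sigma = residual.mean(), residual.std()
strong = residual > mu + 2 * sigma            # > 2-sigma anomalies
weak = (residual > mu + sigma) & ~strong      # 1-sigma to 2-sigma
print(strong.sum(), weak.sum())
```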
328 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.
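A one-dimensional toy version of the spline-plus-weights idea is sketched below, assuming simple thresholds for the weight function; real ALS filtering operates on 3D point clouds, so this only illustrates the iterative re-weighting principle.

```python
# Fit a smoothing spline to a noisy profile, then down-weight points far
# above the fit (likely canopy returns) and refit. Thresholds are assumed.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 100, 400)
ground = 0.05 * x + 2 * np.sin(x / 15)          # true terrain profile
z = ground + rng.normal(0, 0.1, x.size)
canopy = rng.random(x.size) < 0.3               # 30% vegetation returns
z[canopy] += rng.uniform(5, 20, canopy.sum())

w = np.ones_like(x)
for _ in range(5):  # iteratively re-weighted spline fit
    spl = UnivariateSpline(x, z, w=w, s=2 * x.size)
    resid = z - spl(x)
    w = np.where(resid > 1.0, 0.01, 1.0)        # penalize high outliers

dtm = spl(x)
print(f"RMSE vs true ground: {np.sqrt(np.mean((dtm - ground)**2)):.2f} m")
```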
327 Air Dispersion Model for Prediction of Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere
Authors: Moustafa Osman Mohammed
Abstract:
This paper explores the formation of HCl aerosol in atmospheric boundary layers and encourages the uptake of environmental modeling systems (EMSs) as a practical evaluation of gaseous emissions ("framework measures") from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions at ecological points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both measurement data and mathematical results, regarding the parameters that influence model variable inputs. The paper simplifies the parameters of aerosol processes based on more complex aerosol process computations. The simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account in a photochemical formulation, with exposure effects according to HCl concentrations as the starting point of risk assessment. The discussion sets out distinct aspects of sustainability in terms of inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. The resulting environmental obligations suggest either a recommendation or a decision on what legislation should achieve for landfill gas (LFG) mitigation measures.
Keywords: Air dispersion model, landfill management, spatial analysis, environmental impact and risk assessment.
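As background for the Gaussian option mentioned above, the sketch below implements the textbook Gaussian plume equation with ground reflection; the σy/σz power laws and all numbers are assumed examples, not the paper's calibrated model.

```python
# Standard Gaussian plume with ground reflection; dispersion coefficients
# here are simple power laws, not a calibrated stability-class scheme.
import numpy as np

def gaussian_plume(q, u, x, y, z, h):
    """Plume concentration (g/m^3). q: emission rate (g/s), u: wind
    speed (m/s), h: effective stack height (m); x downwind, y crosswind,
    z vertical (m)."""
    sy = 0.08 * x**0.9   # assumed sigma-y curve
    sz = 0.06 * x**0.85  # assumed sigma-z curve
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

# Example: HCl concentration 500 m downwind at ground level.
print(gaussian_plume(q=10.0, u=3.0, x=500.0, y=0.0, z=0.0, h=20.0))
```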
326 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain
Authors: Bita Payami-Shabestari, Dariush Eslami
Abstract:
The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods. Following this, the three methods are compared on the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and CPU time. Sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.
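For orientation, a generic ε-constraint loop (the base of the augmented ε-constraint method found to perform best) can be sketched in a few lines; the toy objectives below are placeholders, not the paper's EPQ model.

```python
# Epsilon-constraint scalarization: optimize f1 subject to f2 <= eps,
# sweeping eps to trace the Pareto front of a bi-objective problem.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2        # e.g. total cost
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2        # e.g. shortage measure

pareto = []
for eps in np.linspace(0.1, 2.0, 8):
    res = minimize(f1, x0=[0.5, 0.5],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f2(x)}])
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))
for a, b in pareto:
    print(f"f1={a:.3f}, f2={b:.3f}")
```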
325 Web-Based Cognitive Writing Instruction (WeCWI): A Theoretical-and-Pedagogical e-Framework for Language Development
Authors: Boon Yih Mah
Abstract:
Web-based Cognitive Writing Instruction (WeCWI)'s contribution towards language development can be divided into linguistic and non-linguistic perspectives. From the linguistic perspective, WeCWI focuses on literacy and language discoveries, while cognitive and psychological discoveries are the hubs of the non-linguistic perspective. In the linguistic perspective, WeCWI draws attention to free reading and enterprises, which are supported by language acquisition theories. Besides, the adoption of the process genre approach as a hybrid guided writing approach fosters literacy development. Literacy and language developments are interconnected in the communication process; hence, WeCWI encourages meaningful discussion based on the interactionist theory that involves input, negotiation, output, and interactional feedback. Rooted in the e-learning interaction-based model, WeCWI promotes online discussion via synchronous and asynchronous communications, which allows interactions to happen among the learners, the instructor, and the digital content. From the non-linguistic perspective, WeCWI highlights the contribution of reading, discussion, and writing towards cognitive development. Based on the inquiry models, learners' critical thinking is fostered during the information exploration process through interaction and questioning. Lastly, to lower writing anxiety, WeCWI develops the instructional tool with supportive features to facilitate the writing process. To bring a positive user experience to the learner, WeCWI aims to create the instructional tool with different interface designs based on two different types of perceptual learning style.
Keywords: WeCWI, literacy discovery, language discovery, cognitive discovery, psychological discovery.
324 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GAN and of the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis including backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.
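A compact PyTorch sketch of the conditional GAN training step may help make the setup concrete; the layer sizes, condition vector and training loop below are assumptions far smaller than a production market-risk model.

```python
# CGAN for one-step time-series simulation: generator and discriminator
# both receive a condition vector (e.g. lagged values or regime labels).
import torch
import torch.nn as nn

NOISE, COND, OUT = 16, 4, 1

G = nn.Sequential(nn.Linear(NOISE + COND, 64), nn.ReLU(),
                  nn.Linear(64, OUT))
D = nn.Sequential(nn.Linear(OUT + COND, 64), nn.ReLU(),
                  nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real_x, cond):
    batch = real_x.size(0)
    z = torch.randn(batch, NOISE)
    fake_x = G(torch.cat([z, cond], dim=1))
    # Discriminator: real vs. generated, both conditioned.
    opt_d.zero_grad()
    d_loss = (bce(D(torch.cat([real_x, cond], 1)), torch.ones(batch, 1)) +
              bce(D(torch.cat([fake_x.detach(), cond], 1)),
                  torch.zeros(batch, 1)))
    d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(torch.cat([fake_x, cond], 1)), torch.ones(batch, 1))
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(32, OUT), torch.randn(32, COND)))
```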
323 The Determination of Stress Experienced by Nursing Undergraduate Students during Their Education
Authors: Gülden Küçükakça, Şefika Dilek Güven, Rahşan Kolutek, Seçil Taylan
Abstract:
Objective: Nursing students face stress factors affecting academic performance and quality of life from the first moments of their educational life. Stress causes health problems in students, such as physical, psycho-social, and behavioral disorders, and may damage the formation of professional identity by decreasing the efficiency of education. In addition to determining the stress experienced by nursing students during their education, this study aimed to help review theoretical and clinical education settings so as to bring nursing students' stress to a positive level and to raise educators' awareness of their own professional behaviors. Methods: The study was conducted with 315 students studying at the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University in the 2015-2016 academic year who agreed to participate in the study. A "Personal Information Form", prepared by the researchers based on the literature review, and the "Nursing Education Stress Scale (NESS)" were used in this study. Data were assessed with analysis of variance and correlation analysis. Results: The mean NESS Scale score of the nursing students was estimated to be 66.46±16.08 points. Conclusions: As a result of this study, the stress level experienced by nursing undergraduate students during their education was determined to be high. In accordance with this result, it can be recommended to determine the sources of stress experienced by nursing undergraduate students during their education and to develop approaches to eliminate these stress sources.
Keywords: Stress, nursing education, nursing student, nursing education stress.
322 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, that is, the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the problem of under-determined blind source separation. The method is mainly divided into two parts. First, a clustering algorithm is used to estimate the mixing matrix from the observed signals. Then the signal is separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied. This paper proposes an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in the traditional clustering algorithm, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
Keywords: Clustering algorithm, potential function, speech signal, the UBSS model.
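The direction-clustering idea behind potential-function mixing matrix estimation can be demonstrated on synthetic data. In the sketch below, the kernel width and peak picking are simplified assumptions; it recovers the column directions of a 2×4 mixing matrix from sparse mixtures.

```python
# Sparse source points cluster along the directions of the mixing-matrix
# columns; peaks of a kernel-smoothed angular potential estimate them.
import numpy as np

rng = np.random.default_rng(3)
angles_true = np.deg2rad([15, 55, 100, 150])
A_true = np.vstack([np.cos(angles_true), np.sin(angles_true)])  # 2x4

# Sparse sources: mostly zero, occasionally active (speech-like sparsity).
S = rng.laplace(0, 1, (4, 5000)) * (rng.random((4, 5000)) < 0.1)
X = A_true @ S + rng.normal(0, 0.01, (2, 5000))  # observed 2-channel mix

# Keep high-energy samples, map them to angles in [0, pi).
norms = np.linalg.norm(X, axis=0)
pts = X[:, norms > np.quantile(norms, 0.9)]
theta = np.mod(np.arctan2(pts[1], pts[0]), np.pi)

# Potential function: kernel-smoothed vote over a grid of angles.
grid = np.linspace(0, np.pi, 360)
pot = np.exp(-((grid[:, None] - theta[None, :]) / 0.05) ** 2).sum(axis=1)
peaks = [i for i in range(1, 359)
         if pot[i] > pot[i - 1] and pot[i] > pot[i + 1]]
peaks = sorted(peaks, key=lambda i: -pot[i])[:4]
A_est = np.vstack([np.cos(grid[peaks]), np.sin(grid[peaks])])
print(np.rad2deg(np.sort(grid[peaks])))  # close to 15, 55, 100, 150
```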
321 Investigating the Effectiveness of a 3D Printed Composite Mold
Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg
Abstract:
In composite manufacturing, the fabrication of tooling and tooling maintenance contribute to a large portion of the total cost. However, as the applications of composite materials continue to increase, there is also a growing demand for more tooling. This demand places heavy emphasis on the industry's ability to fabricate high-quality tools while maintaining the tools' cost effectiveness. One of the popular tool fabrication techniques currently being developed utilizes additive manufacturing technology, known as 3D printing. The popularity of 3D printing is due to its low material waste, low cost, and quick fabrication time. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite mold. A steel valve cover from an aircraft reciprocating engine was modeled utilizing 3D scanning and computer-aided design (CAD) to create a 3D printed composite mold. The mold was used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The carbon fiber valve covers were evaluated for dimensional accuracy and quality, while the 3D printed composite mold was evaluated for durability and dimensional stability. The data collected from this study provided valuable information for the understanding of 3D printed composite molds, potential improvements for the molds, and considerations for future tooling design.
Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.
320 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds
Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid
Abstract:
A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a sudden localized force or simply by a class of sliding displacements on the bathymetry. This deformation in the bed is a source of perturbations on the water surface, generating water waves which propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work, a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The coupled numerical method is highly efficient, accurate, well balanced, and can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into the problem at minimal computational cost.
Keywords: Dam-break flows, deformable beds, finite element method, finite volume method, linear elasticity, shallow water equations.
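As a pointer to the finite volume ingredient named above, here is a sketch of the Rusanov (local Lax-Friedrichs) flux for the single-layer 1D shallow water equations with a flat bed; the full method couples a two-layer version of this with a finite element elasticity solver.

```python
# Rusanov numerical flux for the 1D shallow water equations,
# a single-layer, flat-bed simplification of the scheme above.
import numpy as np

g = 9.81

def swe_flux(u):
    h, hu = u
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

def rusanov(ul, ur):
    """Numerical flux between left/right states u = (h, hu)."""
    cl = abs(ul[1] / ul[0]) + np.sqrt(g * ul[0])  # max wave speed, left
    cr = abs(ur[1] / ur[0]) + np.sqrt(g * ur[0])  # max wave speed, right
    s = max(cl, cr)
    return 0.5 * (swe_flux(ul) + swe_flux(ur)) - 0.5 * s * (ur - ul)

# Dam-break initial data: deep water left, shallow right.
left, right = np.array([2.0, 0.0]), np.array([1.0, 0.0])
print(rusanov(left, right))
```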
319 Seismic Vulnerability Assessment of Masonry Buildings in Seismic Prone Regions: The Case of Annaba City, Algeria
Authors: Allaeddine Athmani, Abdelhacine Gouasmia, Tiago Ferreira, Romeu Vicente
Abstract:
Seismic vulnerability assessment of masonry buildings is a fundamental issue even in regions of moderate to low seismic hazard. This is all the more important when dealing with old structures such as those located in Annaba city (Algeria), the majority of which date back to the French colonial era from 1830. This category of buildings is at high risk due to their highly degraded state, heterogeneous materials and intrusive modifications to structural and non-structural elements. Furthermore, they usually shelter a dense population, which is exposed to such risk. In order to undertake suitable seismic risk mitigation strategies and a reinforcement process for such structures, it is essential to estimate their seismic resistance capacity on a large scale. In this sense, two seismic vulnerability index methods and damage estimation have been adapted and applied to a pilot-scale building area located in the moderate seismic hazard region of Annaba city: the first based on the EMS-98 building typologies, and the second derived from the Italian GNDT approach. To perform this task, the authors took advantage of an existing data survey previously performed for other purposes. The results obtained from the application of the two methods were integrated and compared using a geographic information system (GIS) tool, with the ultimate goal of supporting the city council of Annaba in the implementation of risk mitigation and emergency planning strategies.
Keywords: Annaba city, EMS-98 concept, GNDT method, old city center, seismic vulnerability index, unreinforced masonry buildings.
318 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes, which are capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to protect against single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of Hamming Code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), along with the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version of the Hamming Code has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.
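A SECDED codec in the (16, 11) configuration discussed above fits in a few lines of Python. The bit layout below (overall parity at index 0, Hamming parity bits at the power-of-two positions) is one common convention and is not necessarily the authors' MATLAB layout.

```python
# Hamming (16, 11) SECDED: 11 data bits, 4 Hamming parity bits and one
# overall parity bit; corrects single-bit errors, detects double-bit errors.

def encode(data):  # data: list of 11 bits
    code = [0] * 16  # index 0 = overall parity, 1..15 = Hamming(15,11)
    bits = iter(data)
    for i in range(1, 16):
        if i not in (1, 2, 4, 8):      # non-powers-of-two hold data
            code[i] = next(bits)
    for p in (1, 2, 4, 8):             # Hamming parity bits (even parity)
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[0] = sum(code) % 2            # overall (SECDED) parity
    return code

def decode(code):
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 16) if i & p) % 2:
            syndrome |= p              # syndrome = error position
    overall = sum(code) % 2
    if syndrome and overall:           # single-bit error: correct it
        code = code[:]
        code[syndrome] ^= 1
    elif syndrome and not overall:     # double-bit error: detect only
        raise ValueError("uncorrectable double-bit error")
    return [code[i] for i in range(1, 16) if i not in (1, 2, 4, 8)]

word = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
tx = encode(word)
tx[6] ^= 1                             # inject a single bit-flip
assert decode(tx) == word
print("corrected single bit-flip")
```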
317 Educators' Adherence to Learning Theories and Their Perceptions on the Advantages and Disadvantages of e-Learning
Authors: Samson T. Obafemi, Seraphin D. Eyono Obono
Abstract:
Information and Communication Technologies (ICTs) are pervasive nowadays, including in education, where they are expected to improve the performance of learners. However, the hope placed in ICTs to find viable solutions to the problem of poor academic performance in schools in the developing world has not yet yielded the expected benefits. This problem serves as the motivation for this study, whose aim is to examine the perceptions of educators on the advantages and disadvantages of e-learning. This aim is subdivided into two types of research objectives. Objectives on the identification and design of theories and models are achieved using content analysis and literature review. The objective on the empirical testing of such theories and models is achieved through a survey of educators from different schools in the Pinetown District of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyse the data collected by the survey questionnaire, using descriptive statistics and Pearson correlations after assessing the validity and reliability of the data. The main hypothesis driving this study is that there is a relationship between the demographics of educators and their adherence to learning theories on one side, and their perceptions on the advantages and disadvantages of e-learning on the other side, as argued by existing research; but this research views these learning theories from three perspectives: educators' adherence to self-regulated learning, to constructivism, and to progressivism. This hypothesis was fully confirmed by the empirical study, except for the demographic factors, where teachers' level of education was found to be the only demographic factor affecting the perceptions of educators on the advantages and disadvantages of e-learning.
Keywords: Academic performance, e-learning, Learning theories, Teaching and Learning.
316 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures
Authors: Filippo Ranalli, Forest Flager, Martin Fischer
Abstract:
This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks to find a set of discrete member sizes that result in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To accurately assess cost, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied to each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.
Keywords: Cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures.
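The bi-level architecture can be outlined as a nested search. The skeleton below uses stub cost and feasibility functions, assumed section data, and brute-force inner sizing; a real implementation would call structural analysis and fabricator cost data at the marked points.

```python
# Skeleton of the bi-level scheme: outer loop over candidate topologies,
# inner discrete member-sizing search minimizing installed cost.
import itertools

SECTIONS = {"W8x10": 10.0, "W10x22": 22.0, "W12x40": 40.0}  # lb/ft, assumed

def installed_cost(members, assignment):
    # Stub cost model: material plus a fixed fabrication/erection cost per
    # member; real data would come from fabricators and erectors.
    return sum(SECTIONS[s] * L + 50.0 for L, s in zip(members, assignment))

def feasible(members, assignment):
    # Stub for strength/serviceability checks (utilization, drift).
    return all(SECTIONS[s] * L >= 60.0 for L, s in zip(members, assignment))

def size_members(members):
    """Inner level: exhaustive discrete sizing (fine for tiny examples)."""
    best = None
    for assignment in itertools.product(SECTIONS, repeat=len(members)):
        if feasible(members, assignment):
            c = installed_cost(members, assignment)
            if best is None or c < best[0]:
                best = (c, assignment)
    return best

# Outer level: compare candidate topologies (lists of member lengths, ft).
topologies = {"braced": [10.0, 10.0, 14.1], "moment": [10.0, 10.0]}
for name, members in topologies.items():
    print(name, size_members(members))
```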
315 Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques
Authors: Mrinmoy Dhara, Vivek K. Sengar, Shovan L. Chattoraj, Soumiya Bhattacharjee
Abstract:
Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps to customize the prospecting of minerals. In this study, an attempt has been made to map hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level 1R) datasets have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps were produced using the Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area, which shows positive agreement with the image processing outputs. Thus, this study concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies.
Keywords: Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER), Hyperion, band ratios, alteration zones, spectral angle mapper.
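The SAM classification step named above is a one-formula method: the angle between a pixel spectrum and each end-member spectrum. A minimal numpy version with synthetic spectra standing in for the ASTER/Hyperion bands:

```python
# Spectral Angle Mapper: classify each pixel by the smallest angle
# between its spectrum and the end-member spectra.
import numpy as np

def spectral_angle(pixels, endmembers):
    """pixels: (N, B), endmembers: (M, B). Returns (N, M) angles in rad."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    return np.arccos(np.clip(p @ e.T, -1.0, 1.0))

rng = np.random.default_rng(4)
endmembers = rng.random((3, 9))          # e.g. gossan, clay, host rock
pixels = endmembers[rng.integers(0, 3, 100)] + rng.normal(0, 0.02, (100, 9))
labels = spectral_angle(pixels, endmembers).argmin(axis=1)
print(np.bincount(labels))
```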
314 Bridging the Mental Gap between Convolution Approach and Compartmental Modeling in Functional Imaging: Typical Embedding of an Open Two-Compartment Model into the Systems Theory Approach of Indicator Dilution Theory
Authors: Gesine Hellwig
Abstract:
Functional imaging procedures for the non-invasive assessment of tissue microcirculation are highly requested, but require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two theoretical, for the moment distinct, concepts have been established for tracer kinetic modeling of contrast agent transport in tissues: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized, in accordance with the theory of linear time-invariant (LTI) systems, by using a convolution approach. Based on mathematical considerations, it can be shown that, also in the case of the open two-compartment model well known from functional imaging, the concentration-time course in tissue is given by a convolution, which allows a separation of the arterial input function from a system function, the impulse response function, summarizing the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), and thus results known from IDT remain valid for the compartment approach. Given the long history of applications of compartmental analysis, even for a more general context, similar solutions of the so-called forward problem can already be found in the extensive appropriate literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging - not from the mathematical point of view - there seems to be a gap between both approaches, which the author would like to bridge by an exemplary analysis of the well-known model.
Keywords: Functional imaging, tracer kinetic modeling, LTI system, indicator dilution theory / convolution approach, two-compartment model.
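The central claim, that the two-compartment tissue curve is a convolution of the arterial input function with a biexponential impulse response, can be illustrated numerically; the rate constants and gamma-variate input below are assumed example values.

```python
# Tissue curve as C_t(t) = (AIF * IRF)(t) for an open two-compartment
# model, whose impulse response is a sum of two exponentials.
import numpy as np

dt = 0.1
t = np.arange(0, 60, dt)

aif = (t / 4.0) ** 2 * np.exp(-t / 4.0)                 # gamma-variate AIF
irf = 0.6 * np.exp(-0.3 * t) + 0.1 * np.exp(-0.02 * t)  # biexponential IRF

tissue = np.convolve(aif, irf)[: t.size] * dt           # discrete convolution
print(f"peak tissue signal {tissue.max():.3f} at t = {t[tissue.argmax()]:.1f}")
```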
313 Invasion of Pectinatella magnifica in Freshwater Resources of the Czech Republic
Authors: J. Pazourek, K. Šmejkal, P. Kollár, J. Rajchard, J. Šinko, Z. Balounová, E. Vlková, H. Salmonová
Abstract:
Pectinatella magnifica (Leidy, 1851) is an invasive freshwater animal that lives in colonies. A colony of Pectinatella magnifica (a gelatinous blob) can be up to several feet in diameter, and under favorable conditions it exhibits an extreme growth rate. Recently, European countries around the rivers Elbe, Oder, Danube, Rhine and Vltava have confirmed the invasion of Pectinatella magnifica, including freshwater reservoirs in South Bohemia (Czech Republic). Our project (Czech Science Foundation, GAČR P503/12/0337) is focused on the biology and chemistry of Pectinatella magnifica. We have monitored the organism's occurrence in selected South Bohemian ponds and sandpits during the last years, collecting information about the physical properties of the surrounding water and sampling the colonies for various analyses (classification, maps of secondary metabolites, toxicity tests). Because the gelatinous matrix is, during the colony's lifetime, also a host for algae, bacteria and cyanobacteria (co-habitants), in this contribution we also applied a high performance liquid chromatography (HPLC) method for the determination of potentially present cyanobacterial toxins (microcystin-LR, microcystin-RR, nodularin). Results from the last three years of monitoring show that these toxins are under the limit of detection (LOD), so they do not represent a danger yet. The final goal of our study is to assess toxicity risks related to freshwater resources invaded by Pectinatella magnifica, and to understand the process of invasion, which may enable us to control it.
Keywords: Cyanobacteria, freshwater resources, Pectinatella magnifica invasion, toxicity monitoring.
312 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management
Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal
Abstract:
The selection of suppliers is a crucial problem in supply chain management. On top of that, sustainable supplier selection is the biggest challenge for organizations. Environmental protection and social problems have been of concern to society in recent years, and traditional supplier selection does not consider this factor; therefore, this research work focuses on introducing sustainable criteria into the structure of supplier selection criteria. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions - economic, environmental, and social - of sustainable development need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social wellbeing of the supply chain and minimize environmental impacts. The problem statement is the selection of suppliers in a sustainable supply chain network by ranking the suppliers against the sustainable criteria identified. The aim of this research is twofold: to find out which sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision-making tools are used to rank both criteria and suppliers. AHP analysis, a technique for efficient decision making, is used to find ratings for the identified criteria. TOPSIS is used to rate the suppliers and then rank them. TOPSIS is an MCDM problem-solving method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.
Keywords: Sustainable supply chain management, supplier selection, MCDM tools, AHP analysis, TOPSIS method.
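The TOPSIS ranking step described above can be sketched in a few lines of numpy; the decision matrix and the (nominally AHP-derived) weights below are made-up examples.

```python
# TOPSIS: normalize, weight, then rank alternatives by relative closeness
# to the ideal solution and distance from the negative ideal solution.
import numpy as np

X = np.array([[7., 9., 9., 8.],    # rows: suppliers
              [8., 7., 8., 7.],    # cols: economic, environmental,
              [9., 6., 8., 9.],    #       social, delivery criteria
              [6., 7., 8., 6.]])
w = np.array([0.4, 0.3, 0.2, 0.1])            # e.g. AHP-derived weights
benefit = np.array([True, True, True, True])  # all benefit-type here

R = X / np.sqrt((X ** 2).sum(axis=0))         # vector normalization
V = R * w
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)
print("ranking (best first):", np.argsort(-closeness) + 1)
```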
311 Isolation and Screening of Laccase Producing Basidiomycetes via Submerged Fermentations
Authors: Mun Yee Chan, Sin Ming Goh, Lisa Gaik Ai Ong
Abstract:
Approximately 10,000 different types of dyes and pigments are used in various industrial applications yearly, including the textile and printing industries. However, these dyes are difficult to degrade naturally once they enter the aquatic system. Their high persistency in the natural environment poses a potential health hazard to all forms of life. Hence, there is a need for an alternative dye removal strategy in the environment via bioremediation. In this study, fungal laccase is investigated via commercial dye agar plates and submerged fermentation to explore the application of fungal laccase in textile dye wastewater treatment. Two locally isolated basidiomycetes were screened for laccase activity using media supplemented with commercial dyes such as 2,2-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), guaiacol and Remazol Brilliant Blue R (RBBR). Isolates TBB3 (1.70±0.06) and EL2 (1.78±0.08) gave the highest results for ABTS plates, with the appearance of a greenish halo around the isolates. Submerged fermentation performed on isolate TBB3 gave a productivity of 3.9067 U/ml/day, whereas the laccase activity of isolate EL2 was much lower (0.2097 U/ml/day). As isolate TBB3 showed higher laccase production, it was subjected to molecular characterization by DNA isolation, PCR amplification and sequencing of the ITS region of nuclear ribosomal DNA. After comparison with other sequences in the National Center for Biotechnology Information (NCBI) database, isolate TBB3 is probably of the species Trametes hirsutei. Further research can be performed on this isolate by upscaling laccase production in order to meet the demand for a higher enzyme titer for the bioremediation of textile dyes.
Keywords: Bioremediation, dyes, fermentation, laccase.
310 A BIM-Based Approach to Assess COVID-19 Risk Management Regarding Indoor Air Ventilation and Pedestrian Dynamics
Authors: T. Delval, C. Sauvage, Q. Jullien, R. Viano, T. Diallo, B. Collignan, G. Picinbono
Abstract:
In the context of the international spread of COVID-19, the Centre Scientifique et Technique du Bâtiment (CSTB) has led a joint research effort with the French government authorities of the Hauts-de-Seine department to analyse the risk in school spaces according to their configuration, ventilation system and spatial segmentation strategy. This paper describes the main results of this joint research. A multidisciplinary team involving experts in indoor air quality/ventilation, pedestrian movement and IT domains was established to develop a COVID risk analysis tool based on a Building Information Model. The work started with specific analyses of two pilot schools in order to provide the local administration with specifications to minimize the spread of the virus. Different recommendations were published to optimize/validate the use of ventilation systems and the strategy of student occupancy and student flow segmentation within the building. This COVID expertise has been digitized in order to support a quick risk analysis of the entire building, which can be used by the public administration through an easy user interface implemented in free BIM management software. One of the most interesting results is the ability to dynamically compare different ventilation system scenarios and space occupation strategies inside the BIM model. This concurrent engineering approach provides users with the optimal solution according to both ventilation and pedestrian flow expertise.
Keywords: BIM, knowledge management, system expert, risk management, indoor ventilation, pedestrian movement, integrated design.
309 Residential and Care Model for Elderly People Based on "Internet Plus"
Authors: Haoyi Sheng
Abstract:
China's aging tendency is becoming increasingly severe, which leads to the embarrassing situation of "getting old before getting wealthy". The traditional pension model no longer meets the needs of today. Relying on "Internet Plus", elderly care can efficiently integrate information and resources and meet the personalized needs of the elderly. It can reduce the operating cost of community elderly care facilities and lay a technical foundation for providing better services for the elderly. The key to helping the elderly in the future is to effectively integrate technology, make good use of technology, and improve the efficiency of elderly care services. The effective integration of traditional home care, community care, intelligent elderly care equipment and medical resources to create an "Internet Plus" community intelligent pension service mode has become the future development trend of aged care. The research method of this paper is, first, to collect literature and conduct theoretical research on community pensions; second, to elaborate the combination of age-friendly design and "Internet Plus" through research; finally, to state the current level of intelligent technology in old-age care and look into the future by understanding multiple levels of "Internet Plus". The development of a community intelligent pension mode and content under "Internet Plus" has enormous potential. In addition to the characteristics and functions of ordinary houses, the residential design of endowment housing has higher requirements for comfort and personalization, and being people-oriented is the principle of the design.
Keywords: Ageing tendency, "Internet plus", community intelligent elderly care, elderly care service model, technology.
308 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean integrality constraint relaxation. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
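A drastically reduced single-agent cousin of the MIP described above can be written with the open-source PuLP/CBC toolchain (the paper itself uses CPLEX). The sketch omits the anticipated-feedback terms and multi-agent coupling; cell priors, grid size and horizon are assumptions.

```python
# Toy search-path MIP: one agent on a small grid maximizes the probability
# mass of visited cells over T steps under movement constraints.
import pulp

cells = [(i, j) for i in range(3) for j in range(3)]
p = {c: 0.05 for c in cells}   # prior detection mass per cell (assumed)
p[(2, 2)] = 0.4
p[(0, 2)] = 0.25
T = 4
start = (0, 0)

def neighbors(c):
    i, j = c
    return [(a, b) for a, b in cells if abs(a - i) + abs(b - j) <= 1]

m = pulp.LpProblem("search_path", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (cells, range(T)), cat="Binary")
y = pulp.LpVariable.dicts("y", cells, cat="Binary")

m += pulp.lpSum(p[c] * y[c] for c in cells)           # collected mass
for t in range(T):
    m += pulp.lpSum(x[c][t] for c in cells) == 1      # one position per step
m += x[start][0] == 1
for t in range(T - 1):                                # moves to neighbors only
    for c in cells:
        m += x[c][t + 1] <= pulp.lpSum(x[n][t] for n in neighbors(c))
for c in cells:                                       # searched only if visited
    m += y[c] <= pulp.lpSum(x[c][t] for t in range(T))

m.solve(pulp.PULP_CBC_CMD(msg=False))
path = [c for t in range(T) for c in cells if x[c][t].value() > 0.5]
print(path, pulp.value(m.objective))
```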
307 Effect of L-Dopa on Performance and Carcass Characteristics in Broiler Chickens
Authors: B. R. O. Omidiwura, A. F. Agboola, E. A. Iyayi
Abstract:
The pure form of L-Dopa is used to enhance muscular development and fat breakdown and to suppress Parkinson's disease in humans. However, the L-Dopa in mucuna seed, when present with other antinutritional factors, causes nutritional disorders in monogastric animals. Information on the utilisation of pure L-Dopa in monogastric animals is scanty. Therefore, the effect of L-Dopa on growth performance and carcass characteristics in broiler chickens was investigated. Two hundred and forty one-day-old chicks were allotted to six treatments, which consisted of a positive control (PC) with standard energy (3100 kcal/kg) and a negative control (NC) with high energy (3500 kcal/kg). The remaining four diets were NC+0.1, NC+0.2, NC+0.3 and NC+0.4% L-Dopa, respectively. All treatments had four replicates in a completely randomized design. Body weight gain, final weight, feed intake, dressed weight and carcass characteristics were determined. The body weight gain and final weight of birds fed PC (1791.0 and 1830.0 g), NC+0.1% L-Dopa (1827.7 and 1866.7 g) and NC+0.2% L-Dopa (1871.9 and 1910.9 g), and the feed intake of PC (3231.5 g), were better than those of the other treatments. The dressed weights of 1375.0 g and 1357.1 g for birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar to each other but better than those of the other treatments. Also, the thighs (202.5 g and 194.9 g) and breast meat (413.8 g and 410.8 g) of birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar but better than those of birds fed the other treatments. The drumstick of birds fed NC+0.1% L-Dopa (220.5 g) was observed to be better than those of birds on the other diets. Meat-to-bone ratio and relative organ weights were not affected across treatments. L-Dopa, at the levels tested, had no detrimental effect on broilers; rather, better bird performance and carcass characteristics were observed, especially at the 0.1% and 0.2% inclusion rates. Therefore, 0.2% inclusion is recommended in the diets of broiler chickens for improved performance and carcass characteristics.
Keywords: Broilers, carcass characteristics, L-Dopa, performance.
306 Spatial Query Localization Method in Limited Reference Point Environment
Authors: Victor Krebss
Abstract:
The task of object localization is one of the major challenges in creating intelligent transportation. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces a large error or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad-hoc networks. Such a network allows estimating the potential distance between objects by measuring received signal levels and constructing a graph of distances, in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide localization routines with valuable additional information to narrow the node location search. However, despite the abundance of well-known algorithms for solving the localization problem and significant research efforts, there are still many issues that are currently addressed only partially. In this paper, we propose a localization approach based on mapping the distance graph onto digital road map data. In effect, the problem is reduced to embedding the distance graph into the graph representing the area's geolocation data. This makes it possible to localize objects, in some cases even if only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines and geometry functions.
Keywords: Intelligent transportation system, sensor network, localization, spatial query, GIS, graph embedding.
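One building block of such graph embedding, recovering a node position from noisy distance estimates to nodes with known or already-estimated coordinates, can be sketched with nonlinear least squares; the coordinates and noise level below are made-up examples.

```python
# Multilateration: locate one unknown node from noisy distances to three
# nodes with known coordinates (in the graph setting, neighbors whose
# positions were already fixed play this role).
import numpy as np
from scipy.optimize import least_squares

known = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 30.0]])
true_pos = np.array([25.0, 18.0])
rng = np.random.default_rng(5)
d = np.linalg.norm(known - true_pos, axis=1) + rng.normal(0, 0.5, 3)

residual = lambda pos: np.linalg.norm(known - pos, axis=1) - d
est = least_squares(residual, x0=np.array([10.0, 10.0])).x
print(est)  # close to (25, 18)
```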
305 Stress Relaxation of Date at Different Temperature and Moisture Content of Product: A New Approach
Authors: D. Zare, M. Alirezaei, S.M. Nassiri
Abstract:
Iran is one of the greatest producers of dates in the world. However, due to a lack of information about their viscoelastic properties, much of the production is downgraded during harvesting and postharvest processes. In this study, the effects of temperature and moisture content of the product on stress relaxation characteristics were investigated. Freshly harvested dates (Kabkab) at the tamar stage were put in a controlled environment chamber to obtain different temperature levels (25, 35, 45, and 55 °C) and moisture contents (8.5, 8.7, 9.2, 15.3, 20, 32.2% d.b.). A TAXT2 texture analyzer (Stable Microsystems, UK) was used to apply uniaxial compression tests. A chamber capable of controlling temperature was designed and fabricated around the plunger of the texture analyzer to control the temperature during the experiment. As a new approach, a CCD camera (A4tech, 30 fps) was mounted on a cylindrical glass probe to scan and record the contact area between the date and the disk. Afterwards, the pictures were analyzed using the image processing toolbox of the Matlab software. Individual date fruits were uniaxially compressed at a speed of 1 mm/s. A constant strain of 30% of the thickness of the date was applied to the horizontally oriented fruit. To select a suitable model for describing the stress relaxation of dates, the experimental data were fitted with three well-known stress relaxation models: the generalized Maxwell, Nussinovitch, and Peleg models. The constants in the mentioned models were determined and correlated with the temperature and moisture content of the product using non-linear regression analysis. It was found that the generalized Maxwell and Nussinovitch models appropriately describe the viscoelastic characteristics of date fruits, as compared to the Peleg model.
Keywords: Stress relaxation, viscoelastic properties, date, texture analyzer.
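The model-fitting step described above can be reproduced with scipy for the generalized Maxwell case; the synthetic "measured" curve and starting values below are assumptions standing in for the texture-analyzer record.

```python
# Fit a three-element generalized Maxwell (Prony series) model to a
# stress relaxation curve by non-linear regression.
import numpy as np
from scipy.optimize import curve_fit

def maxwell(t, e0, e1, tau1, e2, tau2):
    """Generalized Maxwell relaxation: equilibrium term plus two
    exponentially decaying elements."""
    return e0 + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

t = np.linspace(0, 60, 200)
rng = np.random.default_rng(6)
stress = maxwell(t, 0.8, 1.5, 2.0, 0.9, 20.0) + rng.normal(0, 0.02, t.size)

p0 = [0.5, 1.0, 1.0, 0.5, 10.0]        # starting values for the constants
params, _ = curve_fit(maxwell, t, stress, p0=p0)
print(np.round(params, 3))
```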