Search results for: Multimedia Database Architecture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1706

146 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0–400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The system uses the CD4093B quad two-input NAND Schmitt-trigger circuit, whose hysteresis is exploited for the measurement, and is integrated with a PIC18F2550 microcontroller for data acquisition. The microcontroller communicates over USB with software developed on the PC, where a graphical user interface (GUI) provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in usual low-capacitance measurements. The hysteresis provided by the Schmitt-trigger circuits enables reliable operation by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another significant source of error. Real-life testing showed that the proposed system produces highly accurate capacitance measurements when compared with high-end digital capacitance meters.

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.
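
As a rough illustration of the measurement principle, the sketch below converts Schmitt-trigger oscillation periods into capacitance and applies the differential (reference-trimmer) compensation described above. It is a minimal Python sketch, not the authors' PIC firmware; the timing resistor and hysteresis constant are assumed values.

```python
# Hedged sketch: Schmitt-trigger relaxation-oscillator period -> capacitance,
# with differential lead-capacitance compensation against a reference trimmer.
# R_TIMING and K_HYST are illustrative assumptions, not values from the paper.

R_TIMING = 1.0e6      # timing resistor in ohms (assumed)
K_HYST = 0.67         # period constant set by the Schmitt-trigger thresholds (assumed)

def capacitance_from_period(period_s: float) -> float:
    """Ideal relaxation-oscillator relation T = K_HYST * R * C, solved for C."""
    return period_s / (K_HYST * R_TIMING)

def compensated_capacitance(period_unknown_s: float, period_reference_s: float) -> float:
    """Differential measurement: the lead (stray) capacitance contributes equally
    to both measured periods, so it cancels in the difference."""
    return capacitance_from_period(period_unknown_s) - capacitance_from_period(period_reference_s)

if __name__ == "__main__":
    # Example: 100 pF unknown and 0 pF reference, both with 15 pF of lead capacitance.
    c_x = compensated_capacitance(period_unknown_s=K_HYST * R_TIMING * 115e-12,
                                  period_reference_s=K_HYST * R_TIMING * 15e-12)
    print(f"Estimated capacitance: {c_x * 1e12:.1f} pF")
```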

145 Secure Protocol for Short Message Service

Authors: Shubat S. Ahmeda, Ashraf M. Ali Edwila

Abstract:

Short Message Service (SMS) has grown in popularity over the years and has become a common way of communication. It is a service provided through the Global System for Mobile Communications (GSM) that allows users to send text messages to others. SMS is usually used to transport unclassified information, but with the rise of mobile commerce it has become a popular tool for transmitting sensitive information between a business and its clients. By default, SMS does not guarantee confidentiality and integrity of the message content. In mobile communication systems, the security (encryption) offered by the network operator only applies to the wireless link; data delivered through the mobile core network may not be protected. Existing end-to-end security mechanisms are provided at the application level and are typically based on public-key cryptosystems. The main concern in a public-key setting is the authenticity of the public key; this issue can be resolved by identity-based (ID-based) cryptography, where the public key of a user can be derived from public information that uniquely identifies the user. This paper presents an encryption mechanism based on an ID-based scheme using elliptic curves to provide end-to-end security for SMS. The mechanism has been implemented over the standard SMS network architecture, and its encryption overhead has been estimated and compared with an RSA scheme. The study indicates that the ID-based mechanism has advantages over the RSA mechanism in key distribution and in scaling to higher security levels for mobile services.

Keywords: Elliptic Curve Cryptography (ECC), End-to-end Security, Identity-based Cryptography, Public Key, RSA, SMS Protocol.
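
The scalability argument can be illustrated with the commonly cited RSA vs. elliptic-curve key-size equivalences (NIST SP 800-57 style guidance). The small sketch below prints that comparison; the figures are general guidance, not measurements from the paper.

```python
# Commonly cited RSA vs. ECC key sizes for equivalent symmetric security levels.
# Illustrates why an ECC/ID-based scheme scales better than RSA as the security
# level grows; these are standard guidance figures, not results from the paper.

EQUIVALENT_KEY_BITS = [
    # (symmetric security bits, RSA modulus bits, ECC key bits)
    (80, 1024, 160),
    (112, 2048, 224),
    (128, 3072, 256),
    (192, 7680, 384),
    (256, 15360, 512),
]

print(f"{'security':>8} {'RSA bits':>9} {'ECC bits':>9} {'RSA/ECC':>8}")
for sec, rsa_bits, ecc_bits in EQUIVALENT_KEY_BITS:
    print(f"{sec:>8} {rsa_bits:>9} {ecc_bits:>9} {rsa_bits / ecc_bits:>8.1f}")
```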

144 A Real Time Development Study for Automated Centralized Remote Monitoring System at Royal Belum Forest

Authors: Amri Yusoff, Shahrizuan Shafiril, Ashardi Abas, Norma Che Yusoff

Abstract:

Nowadays, illegal logging has been causing many harmful effects, including flash floods, avalanches, and global warming. The purpose of this study was to help maintain the earth's ecosystem by protecting and monitoring Malaysia's treasured rainforest through a new technology that provides real-time alerts and enables the authorities to respond faster to these illegal activities. The methodology of this research consisted of the design stages that were conducted, the system model and system architecture of the prototype, and the proposed hardware and software, mainly a microcontroller and sensors with an integrated GSM and GPS system. The prototype was deployed at the Royal Belum forest in December 2014 for phase 1 and in April 2015 for phase 2, at 21 pinpoint locations. The findings of this research were the capture of data in real time, such as temperature, humidity, gas, fire, and rain detection, which indicate the current natural state and habitat in the forest. In addition, each device's current location can be detected via GPS and then transmitted by SMS over the GSM network. All readings are sent in real time for further analysis. Data compared with meteorological department records showed that the precision of the device was about 95%, and these findings indicate that the system is acceptable and suitable for use in the field.

Keywords: Remote monitoring system, forest data, GSM, GPS, wireless sensor.

143 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks

Authors: Khalid Ali, Manar Jammal

Abstract:

In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN allows it to be treated as a Network Function Virtualization (NFV) architecture, with its components considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastic resource allocation provides a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most existing elastic solutions are reactive in nature, even though proactive approaches are more agile, since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms. The models aim at predicting future O-RAN traffic from previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Two ML models were then proposed and evaluated based on their prediction capabilities.

Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity.
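
As a concrete illustration of the forecasting task, the sketch below fits one of the two model families named in the keywords (ARIMA, via statsmodels) to a synthetic hourly traffic trace and evaluates a one-day-ahead forecast. The data, ARIMA order, and horizon are placeholders, not the paper's dataset or settings.

```python
# Minimal ARIMA traffic-forecasting sketch on a synthetic diurnal traffic trace.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                                   # two weeks of hourly samples
traffic = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

train, test = traffic[:-24], traffic[-24:]                   # hold out the last day
model = ARIMA(train, order=(2, 1, 2)).fit()                  # (p, d, q) chosen for illustration
forecast = model.forecast(steps=24)

mae = np.mean(np.abs(forecast - test))
print(f"Mean absolute error over the held-out day: {mae:.2f}")
```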

142 Performance Analysis of Evolutionary ANN for Output Prediction of a Grid-Connected Photovoltaic System

Authors: S.I Sulaiman, T.K Abdul Rahman, I. Musirin, S. Shaari

Abstract:

This paper presents a performance analysis of the Evolutionary Programming-Artificial Neural Network (EPANN) based technique to optimize the architecture and training parameters of a one-hidden-layer feedforward ANN model for the prediction of energy output from a grid-connected photovoltaic system. The ANN utilizes solar radiation and ambient temperature as its inputs, while the output is the total watt-hour energy produced by the grid-connected PV system. EP is used to optimize the regression performance of the ANN model by determining the optimum number of nodes in the hidden layer as well as the optimal momentum rate and learning rate for training. The EPANN model is tested using two types of transfer function for the hidden layer, namely the tangent sigmoid and the logarithmic sigmoid. The best transfer function, neural topology, and learning parameters were selected based on the highest regression performance obtained during ANN training and testing. It is observed that the best transfer function configuration for the prediction model is [logarithmic sigmoid, purely linear].

Keywords: Artificial neural network (ANN), Correlation coefficient (R), Evolutionary programming-ANN (EPANN), Photovoltaic (PV), logarithmic sigmoid and tangent sigmoid.
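
A hedged sketch of the EPANN idea follows: an evolutionary loop mutates the hidden-layer size, learning rate, and momentum of a one-hidden-layer MLP and keeps the candidate with the best validation R². Synthetic data stand in for the solar radiation, ambient temperature, and energy measurements used in the paper; this is not the authors' MATLAB implementation.

```python
# Evolutionary-programming-style search over MLP hyperparameters (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
radiation = rng.uniform(0, 1000, 500)            # W/m^2 (synthetic)
temperature = rng.uniform(15, 40, 500)           # degrees C (synthetic)
energy = 0.8 * radiation - 2.0 * temperature + rng.normal(0, 20, 500)  # Wh (synthetic)

X = np.column_stack([radiation, temperature])
X_tr, X_val, y_tr, y_val = train_test_split(X, energy, test_size=0.3, random_state=1)

def fitness(nodes, lr, momentum):
    net = MLPRegressor(hidden_layer_sizes=(nodes,), activation="logistic",
                       solver="sgd", learning_rate_init=lr, momentum=momentum,
                       max_iter=2000, random_state=1)
    net.fit(X_tr, y_tr)
    return net.score(X_val, y_val)               # R^2 on the validation split

# Mutate the best individual for a few generations, keep improvements.
best = {"nodes": 10, "lr": 0.01, "momentum": 0.9}
best_r2 = fitness(**best)
for _ in range(10):
    child = {
        "nodes": int(max(1, best["nodes"] + rng.integers(-3, 4))),
        "lr": float(np.clip(best["lr"] * np.exp(rng.normal(0, 0.3)), 1e-4, 0.5)),
        "momentum": float(np.clip(best["momentum"] + rng.normal(0, 0.05), 0.0, 0.99)),
    }
    r2 = fitness(**child)
    if r2 > best_r2:
        best, best_r2 = child, r2

print(f"Best configuration: {best}, validation R^2 = {best_r2:.3f}")
```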

141 The Relationships between Market Orientation and Competitiveness of Companies in Banking Sector

Authors: P. Jangl, M. Mikuláštík

Abstract:

The objective of the paper is to measure and compare the market orientation of Swiss and Czech banks, as well as to examine statistically the degree of influence it has on the competitiveness of these institutions. The analysis of market orientation is based on the collection, analysis, and correct interpretation of the data. A descriptive analysis of market orientation describes the current situation. A study of the relation between competitiveness and market orientation in the sector of large international banks is proposed, with the expectation that a strong relationship exists. In part, the work also served to reconfirm the suitability of classic methodologies for measuring banks' market orientation.

Two types of data were gathered: first, by measuring the subjectively perceived market orientation of a company, and second, by quantifying its competitiveness. All data were collected from a sample of small, mid-sized, and large banks. We used numerical secondary data from Bureau van Dijk's international BANKSCOPE financial database.

The statistical analysis led to the following results. First, assuming classical market orientation measures to be scientifically justified, Czech banks are statistically less market-oriented than Swiss banks. Second, among small Swiss banks that are not broadly active internationally, only a weak relationship exists between market orientation measures and market-share-based competitiveness measures. Third, among all Swiss banks, a strong relationship exists between market orientation measures and market-share-based competitiveness measures. These results imply a strong relation for this measure in the sector of large international banks. A strong statistical relationship was also found between market orientation measures and the equity/total assets ratio in Switzerland.

Keywords: Market Orientation, Competitiveness, Marketing Strategy, Measurement of Market Orientation, Relation between Market Orientation and Competitiveness, Banking Sector.

140 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model

Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi

Abstract:

Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g., sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural networks (NNs) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, the van Genuchten retention model parameters (i.e., θ_r, α, and n), as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database were divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing the NNs model. A commercial neural network toolbox of the MATLAB software, with a multilayer perceptron model and the backpropagation algorithm, was used for the training procedure. Statistical parameters such as the determination coefficient (R²) and the mean square error (MSE) were used to evaluate the developed NNs model. The best number of neurons in the middle layer of the NNs model for methods (1) and (2) was found to be 44 and 6, respectively. The R² and MSE values of the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), which shows that method (2) estimates saturated hydraulic conductivity better than method (1).

Keywords: Neural network, Saturated hydraulic conductivity, Soil physical properties.
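
The sketch below illustrates the train/validation/test protocol described above with a multilayer perceptron estimating saturated hydraulic conductivity. It is not the authors' MATLAB workflow; the synthetic features merely stand in for sand and clay content, bulk density, and effective porosity.

```python
# MLP regression with the 187 / 62 / 62 split described in the abstract (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
n = 311                                            # same sample count as the UNSODA subset
X = rng.uniform(0, 1, size=(n, 4))                 # sand, clay, bulk density, effective porosity (scaled)
log_ks = 2.0 * X[:, 3] - 1.5 * X[:, 1] + rng.normal(0, 0.1, n)   # synthetic target (log Ks)

train, val, test = np.split(rng.permutation(n), [187, 187 + 62])  # 187 / 62 / 62 split
net = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
net.fit(X[train], log_ks[train])

print("validation R^2:", round(r2_score(log_ks[val], net.predict(X[val])), 3))
print("test MSE:", round(mean_squared_error(log_ks[test], net.predict(X[test])), 5))
```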

139 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work concerns online Arabic character recognition; the principal motivation is to study Arabic manuscripts with online technology.

The system is Markovian and can be viewed as a Dynamic Bayesian Network (DBN). One of the major interests of these systems lies in training the complete model (topology and parameters) from the training data.

Our approach is based on the Dynamic Bayesian Network formalism; DBN theory is a generalization of Bayesian networks to dynamic processes. Among our objectives is finding better parameters representing the links (dependencies) between the dynamic network variables.

In pattern recognition applications, the structure is usually fixed, which forces us to admit some strong assumptions (for example, independence between some variables). Our application concerns the online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.

The DBN score and mixed DBN models reach 70.24% and 62.50%, respectively, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.

138 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As the scale of the network becomes larger and more complex than before, the density of user devices is also increasing. The development of Unmanned Aerial Vehicle (UAV) networks makes it possible to collect and transfer data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture to manage UAVs, using an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the UAV hierarchical cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In this cluster, a head node and a vice head node UAV are selected considering the CPU, RAM, and ROM memory of the devices, as well as battery charge and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the UAV layered clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles.
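
The sketch below illustrates the clustering step described above: k-means groups user devices into regions, the busiest region is flagged as high-load, and a head and vice-head UAV are picked by a simple resource score. The score weights, device counts, and UAV count are illustrative assumptions, not values from the paper.

```python
# k-means high-load region detection and head / vice-head UAV selection (sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
user_positions = rng.uniform(0, 1000, size=(500, 2))          # user devices on a 1 km x 1 km area

kmeans = KMeans(n_clusters=4, n_init=10, random_state=7).fit(user_positions)
loads = np.bincount(kmeans.labels_)                            # devices per region ~ load
hot_region = int(np.argmax(loads))
print(f"Highest-load region: {hot_region} with {loads[hot_region]} devices")

# Candidate UAVs: columns = CPU, RAM, ROM, battery charge (all normalized 0..1).
uav_resources = rng.uniform(0, 1, size=(6, 4))
weights = np.array([0.3, 0.2, 0.1, 0.4])                       # assumed importance of each resource
scores = uav_resources @ weights
head, vice_head = np.argsort(scores)[::-1][:2]                 # best and second-best UAVs
print(f"Head UAV: {head}, vice head UAV: {vice_head}")
```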

137 An Agent Oriented Approach to Operational Profile Management

Authors: Sunitha Ramanujam, Hany El Yamany, Miriam A. M. Capretz

Abstract:

Software reliability, defined as the probability of a software system or application functioning without failure or error over a defined period of time, has been an important area of research for over three decades. Several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by some of these efforts involves the use of operational profiles to predict how software applications will be used. Operational profiles are a quantification of usage patterns for a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release into the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a new composite metric in order to organize the testing process as well as to decrease the time and cost involved in it. A prototype implementation of the proposed MAS is included as proof of concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.

Keywords: Software reliability, Software testing, Metrics, Distributed systems, Multi-agent systems

136 Design of an Intelligent Location Identification Scheme Based On LANDMARC and BPNs

Authors: S. Chaisit, H.Y. Kung, N.T. Phuong

Abstract:

Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is that it is a cost-effective approach that requires no extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference factors and the environment, it is challenging to use RFID location techniques based on an integrated positioning algorithm design. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines different factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase computes coordinates obtained from the LANDMARC algorithm, which uses RSSI values and the real coordinates of reference tags as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of tracking tags, which are then used as BPN inputs to obtain location estimates. The results show that the proposed scheme can estimate locations more accurately than LANDMARC alone, without extra devices.

Keywords: BPNs, indoor location, location estimation, intelligent location identification.
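
The sketch below shows the LANDMARC step that feeds the BPN: a tracking tag's position is estimated as a weighted average of its k nearest reference tags in RSSI space, with weights proportional to 1/E². The RSSI values and coordinates are made up for illustration.

```python
# LANDMARC location estimation from RSSI vectors (sketch).
import numpy as np

# RSSI of 4 reference tags as seen by 3 readers, and their known coordinates (m).
ref_rssi = np.array([[-50, -62, -70],
                     [-55, -58, -66],
                     [-61, -54, -60],
                     [-68, -52, -55]], dtype=float)
ref_xy = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])

track_rssi = np.array([-57.0, -57.0, -63.0])       # RSSI vector of the tracking tag

def landmarc_estimate(track_rssi, ref_rssi, ref_xy, k=3):
    e = np.linalg.norm(ref_rssi - track_rssi, axis=1)   # distance in RSSI space
    nearest = np.argsort(e)[:k]
    w = 1.0 / (e[nearest] ** 2 + 1e-12)                 # LANDMARC 1/E^2 weighting
    w /= w.sum()
    return w @ ref_xy[nearest]                          # weighted average of coordinates

x, y = landmarc_estimate(track_rssi, ref_rssi, ref_xy)
print(f"Estimated tag position: ({x:.2f} m, {y:.2f} m)")
# In the proposed scheme these (x, y) estimates become inputs to the trained BPN.
```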

135 Director Compensation, CEO Duality, State Ownership, and Firm Performance in China: Proof from Panel Data of Publicly Listed Enterprises from 1999 to 2020

Authors: Wanda Luen-Wun Siu, Xiaowen Zhang

Abstract:

This paper offers systematic evidence on how director remuneration relates to firm performance in listed firms in China, whereas most existing evidence focuses on cross-sectional data or data covering a short span of time. Using full economic and business panel data on China's publicly listed enterprises from 1999 to 2020, over two decades, from the China Stock Market & Accounting Research database, we found statistically significant positive associations between director pay and firm performance in privately owned firms over this period, supporting agency theory. In contrast, among state-owned enterprises there was an inverse relation between director compensation and firm financial performance, contributing to the existing literature. The results also revealed that state-owned enterprises performed financially as well as private enterprises. Such findings suggest that state ownership might align officials' career incentives with party priorities rather than pecuniary incentives. In addition, CEO duality enhanced firm performance. As such, allegiance to the party and possible advancement to an upper-level political position may motivate company directors in state-owned enterprises, whereas directors in privately owned enterprises may be motivated by monetary incentives. A statistical regression model was also proposed and tested to assess the performance of state-owned enterprises. Finally, some suggestions are made about how to improve the institutional management of government-owned corporations in China.

Keywords: China’s listed Firm, director compensation, CEO duality, firm performance, panel analysis.

134 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions: the resulting perturbations lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustic degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent, whereas wavelet coefficients in fact show significant inter-scale dependency. This paper exploits this inter-scale dependency through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived by Maximum a Posteriori (MAP) estimation. A framework based on these elements is proposed to denoise speech signals for automatic speaker identification. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy than other estimators on the popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

133 Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

Authors: G. Ramana Murthy, C. Senthilpari, P. Velrajkumar, Lim Tien Sze

Abstract:

In this paper, a design methodology to implement a low-power and high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter in order to reduce the propagation delay. Furthermore, high-level algorithms designed for optimizing the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, and it is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area, and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.

Keywords: CSA Full Adder, Delay unit, IIR filter, Low-Power, PDP, Parametric Analysis, Propagation Delay, Throughput, VLSI.
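
The shift-and-add substitution mentioned above can be illustrated in a few lines: a constant multiplication is decomposed into shifts and additions driven by the set bits of the coefficient. This is a software analogue of the CSA-based hardware, for a non-negative coefficient chosen arbitrarily.

```python
# Constant multiplication rewritten as shifts and additions (sketch).
def multiply_by_constant(x: int, coeff: int) -> int:
    """Compute x * coeff using only shifts and additions (coeff >= 0)."""
    result = 0
    shift = 0
    while coeff:
        if coeff & 1:                 # this bit contributes x << shift
            result += x << shift
        coeff >>= 1
        shift += 1
    return result

# Example: y[n] = 10 * x[n] becomes (x << 3) + (x << 1).
assert multiply_by_constant(7, 10) == 70
print(multiply_by_constant(7, 10))
```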

132 Management Software for the Elaboration of an Electronic File in the Pharmaceutical Industry Following Mexican Regulations

Authors: M. Peña Aguilar Juan, Ríos Hernández Ezequiel, R. Valencia Luis

Abstract:

For the certification of certain goods of public interest, such as medicines and food, the preparation and delivery of a dossier is required. Its elaboration requires legal and administrative knowledge, organization of the process documents, and an ordering that allows the file to be verified. Therefore, a virtual platform was developed to support the process of managing and elaborating the dossier, providing accessibility to the information and interfaces that allow the user to know the status of projects. Developing the dossier system in the cloud allows the technical requirements for managing the software to be included, covering validation and manufacturing in the field industry. The platform guides and facilitates the elaboration of the dossier (report, file, or history), considering Mexican legislation and regulations, and it also has auxiliary tools for its management. This technological alternative provides organizational support for documents and accessibility to the information required for the successful development of a dossier. The platform is divided into the following modules: system control, catalog, dossier, and enterprise management. The modules are designed according to the structure required for a dossier in those areas. However, the structure allows for flexibility, as its goal is to become a tool that facilitates and does not obstruct processes. The architecture and development of the software allow flexibility for future expansion to other fields, which would imply feeding the system with new regulations.

Keywords: Electronic dossier, technologies for management, web software, dossier elaboration, pharmaceutical industry.

131 Ontology of Collaborative Supply Chain for Quality Management

Authors: Jiaqi Yan, Sherry Sun, Huaiqing Wang, Zhongsheng Hua

Abstract:

In the highly competitive and rapidly changing global marketplace, independent organizations and enterprises often come together and form a temporary alignment, a virtual enterprise, within a supply chain to better provide products or services. As firms adopt the systems approach implicit in supply chain management, they must manage quality through both internal process control and external control of supplier quality and customer requirements. How to incorporate the quality management of upstream and downstream supply chain partners into their own quality management system has recently received a great deal of attention from both academia and practice. This paper investigates the collaborative features and the entity relationships in a supply chain, and presents an ontology of the collaborative supply chain based on an approach that aligns a service-oriented framework with service-dominant logic. This perspective facilitates the segregation of material flow management from manufacturing capability management, which provides a foundation for the coordination and integration of the business processes used to measure, analyze, and continually improve the quality of products, services, and processes. Further, this approach characterizes the different interests of supply chain partners, providing an innovative way to analyze the collaborative features of the supply chain. Furthermore, this ontology is the foundation for developing a quality management system that internalizes quality management across upstream and downstream supply chain partners and manages quality in the supply chain systematically.

Keywords: Ontology, supply chain quality management, service-oriented architecture, service-dominant logic.

130 Investigation of the Space in Response to the Conditions Caused by the Pandemics and Presenting Five-Scale Design Guidelines to Adapt and Prepare to Face the Pandemics

Authors: Sara Ramezanzadeh, Nashid Nabian

Abstract:

Historically, pandemics in different periods have forced changes in human life. In the case of COVID-19, given the limitations and established care instructions, aligning space with the conditions is important. Following the outbreak of COVID-19, the question raised in this study is how to carry out spatial design at five scales, namely object, space, architecture, city, and infrastructure, in response to the consequences created in the realms under study. From the beginning of the pandemic until now, some changes in the spatial realm have arisen spontaneously or have been made by space users. These transformations have mostly been applied to modifiable elements such as furniture arrangement, especially in work-related spaces. To implement other, more comprehensive requirements, flexibility and adaptation of space design to the conditions resulting from pandemics are needed during and after the outbreak. Studying the effects of pandemics from the past to the present, this research covers eight major realms, grouped into three categories of ramifications, solutions, and paradigm shifts, together with analytical conclusions about the solutions that have been created in response to them. Finally, considering epidemiology as a modern discipline influencing design, spatial solutions at the five scales mentioned, responding to the effects of the eight realms, are presented as a series of guidelines for spatial adaptation in the face of pandemics and the conditions that follow them. Given the unpredictability of possible future pandemics, the possibility of changing and updating the provided guidelines is taken into account.

Keywords: Pandemics, COVID-19, spatial design, ramifications, paradigm shifts, guidelines.

129 A Software-Supported Methodology for Designing General-Purpose Interconnection Networks for Reconfigurable Architectures

Authors: Kostas Siozios, Dimitrios Soudris, Antonios Thanailakis

Abstract:

Modern applications realized on FPGAs exhibit high connectivity demands. Throughout this paper we study the routing constraints of Virtex devices and we propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and switch box (SB) patterns, appropriately selected and assigned across the device. The goal of our proposed methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA. This device is characterized both by its high performance and by its low energy requirements. For this reason, the design criterion guiding our architecture selections was the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparison study in terms of several critical parameters proves the effectiveness of the derived interconnection network. More specifically, we achieve an average Energy×Delay Product reduction of 63%, a performance increase of 26%, a reduction in leakage power of 21%, and a reduction in total energy consumption of 11%, at the expense of a 20% increase in channel width.

Keywords: Design Methodology, FPGA, Interconnection, Low-Energy, High-Performance, CAD tool.
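
A small helper makes the design criterion concrete: the Energy×Delay Product (EDP) used to compare candidate interconnection architectures. The two candidate figures below are invented for illustration (chosen so the reduction roughly mirrors the reported 63%), not numbers from the paper's experiments.

```python
# Energy x Delay Product comparison between two candidate architectures (sketch).
def energy_delay_product(energy_nj: float, delay_ns: float) -> float:
    return energy_nj * delay_ns

baseline = energy_delay_product(energy_nj=120.0, delay_ns=8.0)    # invented figures
proposed = energy_delay_product(energy_nj=107.0, delay_ns=3.3)    # invented figures
print(f"EDP reduction: {(1 - proposed / baseline) * 100:.0f}%")
```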

128 Full-genomic Network Inference for Non-model organisms: A Case Study for the Fungal Pathogen Candida albicans

Authors: Jörg Linde, Ekaterina Buyko, Robert Altwasser, Udo Hahn, Reinhard Guthke

Abstract:

Reverse engineering of full-genomic interaction networks based on compendia of expression data has been successfully applied for a number of model organisms. This study adapts these approaches for an important non-model organism: the major human fungal pathogen Candida albicans. During the infection process, the pathogen can adapt to a wide range of environmental niches and reversibly change its growth form. Given the importance of these processes, it is essential to know how they are regulated. This study presents a reverse engineering strategy able to infer full-genomic interaction networks for C. albicans based on linear regression utilizing the sparseness criterion (LASSO). To overcome the limited amount of expression data and the small number of known interactions, we utilize different prior-knowledge sources to guide the network inference towards a knowledge-driven solution. Since no database of known interactions for C. albicans exists, we use a text-mining system that draws on full-text research papers to identify known regulatory interactions. By comparing with these known regulatory interactions, we find an optimal value for the global modelling parameters weighting the influence of the sparseness criterion and the prior knowledge. Furthermore, we show that soft integration of prior knowledge additionally improves the performance. Finally, we compare the performance of our approach to state-of-the-art network inference approaches.

Keywords: Pathogen, network inference, text-mining, Candida albicans, LASSO, mutual information, reverse engineering, linear regression, modelling.
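
A hedged sketch of the core idea follows: each gene's expression is regressed on all other genes with a LASSO penalty, and prior knowledge is "softly" integrated by shrinking the penalty on interactions reported by text mining. The data, prior, and discount factor are synthetic; this is not the authors' pipeline.

```python
# LASSO-based network inference with soft prior-knowledge integration (sketch).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_samples, n_genes = 60, 8
expr = rng.normal(size=(n_samples, n_genes))
expr[:, 0] = 0.9 * expr[:, 3] - 0.7 * expr[:, 5] + 0.1 * rng.normal(size=n_samples)

prior = np.zeros(n_genes)             # 1 = interaction supported by text mining (assumed)
prior[3] = 1.0

def infer_regulators(target: int, alpha: float = 0.1, prior_discount: float = 0.5):
    X = np.delete(expr, target, axis=1)
    y = expr[:, target]
    # Soft prior integration: rescale columns with prior support so the same
    # L1 penalty shrinks them less (equivalent to a smaller per-feature alpha).
    scale = np.where(np.delete(prior, target) > 0, 1.0 / prior_discount, 1.0)
    model = Lasso(alpha=alpha).fit(X * scale, y)
    coef = model.coef_ * scale        # map coefficients back to the original scale
    genes = np.delete(np.arange(n_genes), target)
    return [g for g, c in zip(genes, coef) if abs(c) > 1e-3]

print("Inferred regulators of gene 0:", infer_regulators(0))
```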

127 Nuclear Medical Image Treatment System Based On FPGA in Real Time

Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah

Abstract:

We present in this paper an acquisition and treatment system designed for semi-analog gamma cameras. It consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition and treatment of the signals resulting from the gamma camera detection head (DH) and the construction of the scintigraphic image in real time. The chain is composed of an analog treatment board and a digital treatment board, and it provides an interface for the semi-analog cameras of Sopha Medical Vision (SMVi), taking the SOPHY DS7 as an example. An earlier version of the chain was designed around a DSP [2]. We describe the designed system and the digital treatment algorithms, whose performance and flexibility have been improved; in the new version of the IATD chain presented here, the digital treatment algorithms are implemented in a reprogrammable FPGA (Field Programmable Gate Array).

Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.

126 Pipelined Control-Path Effects on Area and Performance of a Wormhole-Switched Network-on-Chip

Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner

Abstract:

This paper presents the design trade-offs and performance impact of the number of pipeline stages of the control-path signals in a wormhole-switched network-on-chip (NoC). The number of pipeline stages in the control path varies between one and two cycles. The control paths consist of the routing request paths for output selection and the arbitration paths for input selection. Data communications between on-chip routers are implemented synchronously and, for quality of service, the inter-router data transports are controlled by a link-level congestion control to avoid loss of data due to overflow. The trade-off between the area (logic cell area) and the performance (bandwidth gain) of the two proposed NoC router microarchitectures is presented in this paper. The performance evaluation is made by using a traffic scenario with different numbers of workloads under a 2D mesh NoC topology using a static routing algorithm. Using a 130-nm CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz, resulting in a high-speed network link and a high router bandwidth capacity of about 320 Gbit/s. Based on our experiments, the number of control-path pipeline stages has a more significant impact on the NoC performance than on the logic area of the NoC router.

Keywords: Network-on-Chip, Synchronous Parallel Pipeline, Router Architecture, Wormhole Switching

125 Earth Station Neural Network Control Methodology and Simulation

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often difficult accessibility by common transport. Satellite earth stations located in remote areas are one of the most important applications of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB–SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using backpropagation with the Levenberg–Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is equal to 96%, which indicates high accuracy. The neural network controller architecture gives satisfactory results with a small number of neurons, and is therefore better in terms of the memory and time required for NNC implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.

Keywords: Satellite, neural network, MATLAB, power system.

124 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication

Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca

Abstract:

Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to anticholinergic burden scores of multiple medicines prescribed to patients to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data on the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters that represent different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed to female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well-equipped with tools implementing appropriate techniques of artificial intelligence.

Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.
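
The sketch below illustrates the risk-stratification step described above: patients' weighted anticholinergic burden scores are grouped with mean-shift clustering and the highest-risk cluster is reported. The scores are simulated, and the bandwidth and patient counts are illustrative assumptions, not the deployed model's settings.

```python
# Mean-shift clustering of weighted anticholinergic burden scores (sketch).
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(5)
# Weighted risk score per patient: sum of burden scores (1-3) of prescribed drugs.
n_drugs = rng.poisson(2, size=2000)
risk_scores = np.array([rng.integers(1, 4, size=k).sum() for k in n_drugs], dtype=float)

clusters = MeanShift(bandwidth=1.5).fit(risk_scores.reshape(-1, 1))
labels, centres = clusters.labels_, clusters.cluster_centers_.ravel()

highest = int(np.argmax(centres))
print(f"{np.sum(labels == highest)} patients fall in the highest-risk cluster "
      f"(mean score ~ {centres[highest]:.1f})")
```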

123 Physico-Chemical Environment of Coastal Areas in the Vicinity of Lbod And Tidal Link Drain in Sindh, Pakistan after Cyclone 2a

Authors: Salam Khalid Al-Agha, Inamullah Bhatti, Hossam Adel Zaqoot, Shaukat Hayat Khan, Abdul Khalique Ansari

Abstract:

This paper presents the results of a preliminary assessment of water quality along the coastal areas in the vicinity of the Left Bank Outfall Drainage (LBOD) and Tidal Link Drain (TLD) in Sindh province after cyclone 2A, which occurred in 1999. Water samples were collected from various RDs of the Tidal Link Drain and from lakes between September 2001 and April 2002 and were analysed for salinity, nitrite, phosphate, ammonia, silicate, and suspended material. The results of the study showed considerable variations in water quality depending upon the location along the coast in the vicinity of the LBOD and RDs. The salinity ranged between 4.39–65.25 ppt in Tidal Link Drain samples and 2.4–38.05 ppt in samples collected from the lakes. The values of suspended material at various RDs of the Tidal Link Drain ranged between 56.6–2134 ppm and at the lakes between 68–297 ppm. The data from continuous monitoring at RD–93 showed ranges of PO4 (8.6–25.2 μg/l), SiO3 (554.96–1462 μg/l), NO2 (0.557.2–25.2 μg/l) and NH3 (9.38–23.62 μg/l). The concentrations of nutrients in water samples collected from different RDs were found in the ranges of PO4 (10.85 to 11.47 μg/l), SiO3 (1624 to 2635.08 μg/l), NO2 (20.38 to 44.8 μg/l) and NH3 (24.08 to 26.6 μg/l). The Sindh coastal areas, situated at the north-western boundary of the Arabian Sea, are highly vulnerable to flood damage due to flash floods during the SW monsoon or the impact of sea level rise and storm surges coupled with cyclones passing through the Arabian Sea along the Pakistan coast. It is hoped that the data obtained in this study will act as a database for future investigations and monitoring of the LBOD and Tidal Link Drain coastal waters.

Keywords: Tidal Link Drain, Salinity, Nutrients, Nitrite salts, Coastal areas.

122 Association of Smoking with Chest Radiographic and Lung Function Findings in Retired Bauxite Mining Workers

Authors: L. R. Ferreira, R. C. G. Bianchi, L. C.R. Ferreira, C. M. Galhardi, E. P. Baciuk, L. H. Oliveira

Abstract:

Within the bauxite mining industry, especially for smelter workers, inhalation hazards are associated with potentially injurious exposure and increased risk of lung disease. Smoking is related to decreased lung function and leads to chronic lung diseases. The objective of this study was to evaluate whether smoking is related to functional and radiographic respiratory changes in retired bauxite mining workers. Methods: This was a retrospective, cross-sectional study involving the analysis of database information on 140 retired bauxite mining workers from Poços de Caldas-MG evaluated at the Worker’s Health Reference Center and at the Social Security Brazilian National Institute, from July 1st, 2015 until June 30th, 2016. The workers were divided into three groups: non-smokers (n = 47), ex-smokers (n = 46), and smokers (n = 47). The data included: age, gender, spirometry results, and the presence or absence of pulmonary pleural and/or parenchymal changes in chest radiographs. The Chi-squared test was used (p < 0.05). Results: In the smokers’ group, 83% of spirometry tests and 64% of chest x-rays were altered. In the non-smokers’ group, 19% of spirometry tests and 13% of chest x-rays were altered. In the ex-smokers’ group, 35% of spirometry tests and 30% of chest x-rays were altered. Most of the results were statistically significant. The results demonstrated a significant difference between the smokers’ and non-smokers’ groups with regard to spirometric and radiographic pulmonary alterations. The ex-smokers’ and non-smokers’ groups showed better results than the smokers’ group in relation to altered spirometry and radiographic findings. These data may contribute to planning strategies to enhance smoking cessation programs within the bauxite mining industry.

Keywords: Bauxite mining, spirometry, chest radiography, smoking.
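
The chi-squared test named above can be reproduced approximately from the reported spirometry proportions (83% of 47 smokers vs. 19% of 47 non-smokers with altered tests); the counts below are reconstructed by rounding those percentages and are therefore approximate.

```python
# Chi-squared test of independence on a 2x2 contingency table (sketch).
from scipy.stats import chi2_contingency

#               altered  normal
smokers     = [39,      8]       # ~83% of 47
non_smokers = [9,       38]      # ~19% of 47

chi2, p, dof, expected = chi2_contingency([smokers, non_smokers])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}  (significant at p < 0.05)")
```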

121 A Study on Fuzzy Adaptive Control of Enteral Feeding Pump

Authors: Seungwoo Kim, Hyojune Chae, Yongrae Jung, Jongwook Kim

Abstract:

Recent medical studies have investigated the importance of enteral feeding and the use of feeding pumps for recovering patients unable to feed themselves or gain nourishment and nutrients by natural means. Most enteral feeding systems use a peristaltic tube pump. A peristaltic pump is a form of positive displacement pump in which a flexible tube is progressively squeezed externally to allow the resulting enclosed pillow of fluid to progress along it. The squeezing of the tube requires a precise and robust controller for the geared motor to overcome the parametric uncertainty of the pumping system, which arises from wide variations in friction and slip between the tube and the rollers. This paper therefore proposes a fuzzy adaptive controller for the robust control of the peristaltic tube pump. This new adaptive controller uses a fuzzy multi-layered architecture which has several independent fuzzy controllers in parallel, each with a different robust stability area. Out of these independent fuzzy controllers, the most suited one is selected by a system identifier which observes variations in the controlled system parameter. This paper proposes a design procedure which can be carried out mathematically and systematically from the model of the controlled system. Finally, the good control performance of the developed feeding pump, accurate dose rate and robust system stability, is confirmed through experimental and clinical testing.

Keywords: Enteral Feeding Pump, Peristaltic Tube Pump, Fuzzy Adaptive Control, Fuzzy Multi-layered Controller, Look-up Table.
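
A much simplified sketch of the multi-layered selection idea follows: several controllers, each tuned for a different range of the uncertain friction/slip parameter, are held in parallel, and the one whose membership function best matches the identified parameter drives the motor. Membership breakpoints and gains are illustrative assumptions, not the paper's tuned values, and a plain proportional action stands in for each fuzzy controller.

```python
# Controller-bank selection driven by an identified system parameter (sketch).
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Controller bank: (membership over identified friction, proportional gain) - assumed values.
controllers = [
    {"name": "low friction",  "mf": lambda f: triangular(f, -0.2, 0.0, 0.5), "kp": 2.0},
    {"name": "mid friction",  "mf": lambda f: triangular(f, 0.3, 0.5, 0.7),  "kp": 3.5},
    {"name": "high friction", "mf": lambda f: triangular(f, 0.5, 1.0, 1.2),  "kp": 5.0},
]

def select_controller(identified_friction: float) -> dict:
    """System-identifier output -> the most suited controller in the bank."""
    return max(controllers, key=lambda c: c["mf"](identified_friction))

def control_action(flow_error: float, identified_friction: float) -> float:
    ctrl = select_controller(identified_friction)
    return ctrl["kp"] * flow_error          # simple proportional action for illustration

print(select_controller(0.65)["name"], control_action(flow_error=0.1, identified_friction=0.65))
```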

120 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., Situation Awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN and how effective predictions are made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows the estimation of missing variables. Experimental results showed that the BBN produces compelling predictions with samples containing uncertainties compared to the perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV mission for forest fire search. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence.

119 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes were considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain the best result of each technique for comparison. Initially, the data were coded in thermometer code (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. All these details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.
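
A hedged sketch of the comparison described above follows, using three of the four techniques (logistic regression, SVM, MLP) on synthetic data with the same class sizes and attribute count; the bank's real 15-attribute dataset is obviously not reproduced here, and scikit-learn handles the multi-class decomposition internally rather than via explicit one-against-all coding.

```python
# Classifier comparison on a synthetic three-class credit dataset (sketch).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Three client classes: non-defaulter, defaulter, temporarily defaulter.
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, weights=[0.48, 0.28, 0.24], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 3))
    print(confusion_matrix(y_te, pred))   # rows: true class, columns: predicted class
```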

118 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves

Authors: Hanifeh Imanian, Morteza Kolahdoozan

Abstract:

The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. To this end, a multiphase numerical model is applied in which the waves and the oil phase are computed concurrently, and the accuracy of its hydraulic calculations has been proven. More than 200 scenarios of oil spilling in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and finally six parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify any relationship between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications, comprising height, length, and wave period, and spilled oil characteristics, including density, viscosity, and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict dispersed oil in marine waters. To verify the proposed relationship, a laboratory example available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for predicting oil dispersion in oil spill events in marine areas.

Keywords: Dispersion, marine environment, mathematical-statistical relationship, oil spill.
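
The statistical step described above amounts to a multiple regression of dispersed oil mass on the six identified independent parameters. The sketch below fits such a regression by ordinary least squares; the data are synthetic placeholders for the roughly 200 simulated scenarios, and the coefficients have no physical meaning.

```python
# Multiple linear regression of dispersed oil mass on six predictors (sketch).
import numpy as np

rng = np.random.default_rng(11)
n = 200
X = np.column_stack([
    rng.uniform(0.1, 2.0, n),      # wave height (m)
    rng.uniform(5, 60, n),         # wave length (m)
    rng.uniform(2, 10, n),         # wave period (s)
    rng.uniform(850, 980, n),      # oil density (kg/m^3)
    rng.uniform(5, 500, n),        # oil viscosity (cSt)
    rng.uniform(10, 1000, n),      # spilled mass (kg)
])
dispersed = 0.05 * X[:, 0] * X[:, 5] / (1 + 0.001 * X[:, 4]) + rng.normal(0, 1, n)

A = np.column_stack([np.ones(n), X])                  # add intercept column
coef, *_ = np.linalg.lstsq(A, dispersed, rcond=None)  # ordinary least squares
pred = A @ coef
r2 = 1 - np.sum((dispersed - pred) ** 2) / np.sum((dispersed - dispersed.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 4), " R^2 =", round(r2, 3))
```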

117 Toward Sustainable Building Design in Hot and Arid Climate with Reference to Riyadh City, Saudi Arabia

Authors: M. Alwetaishi

Abstract:

One of the most common and traditional strategies in architecture is to design buildings passively. This is a way to ensure low reliance on building energy with respect to the specific micro-location of the building. There are many ways in which buildings can be designed passively, some of which are applying thermal insulation, thermal mass, a courtyard, and an appropriate glazing-to-wall ratio. This research investigates the impact of each of these aspects with respect to the hot and dry climate of the capital, Riyadh. The Thermal Analysis Simulation (TAS) software, developed by Environmental Design Solutions Limited (EDSL), is utilized; it is considered one of the most powerful tools for predicting the energy performance of buildings. Three primary building designs and methods are examined: using a courtyard, thermal mass, and thermal insulation. The same building size and fabrication properties were applied to all designs. Riyadh, the capital of the country, was taken as the case study of the research. The research takes into account the various zone orientations within the building, as orientation makes a large contribution to indoor energy and thermal performance. It is revealed that it is possible to achieve a nearly zero-carbon building in the hot and dry region in winter, with minimum reliance on energy loads, for building zones facing south, west, and east. Moreover, using a courtyard is more beneficial than adding construction materials to the building envelope. A glazing-to-wall ratio of 10%, and not exceeding 30%, is recommended in all directions in hot and arid regions.

Keywords: Sustainable buildings, hot and arid climates, passive building design, Saudi Arabia.
