Search results for: adaptive robust rbf neural network approximation
4639 A One Dimensional Cdᴵᴵ Coordination Polymer: Synthesis, Structure and Properties
Authors: Z. Derikvand, M. Dusek, V. Eigner
Abstract:
One-dimensional coordination polymer of Cdᴵᴵ based on pyrazine (pz) and 3-nitrophthalic acid (3-nphaH₂), namely poly[[diaqua bis(3-nitro-2-carboxylato-1-carboxylic acid)(µ₂-pyrazine)cadmium(II)] dihydrate], {[Cd(3-nphaH)₂(pz)(H₂O)₂]·2H₂O}ₙ, was prepared and characterized. The asymmetric unit consists of one Cdᴵᴵ center, two (3-nphaH)⁻ anions, two halves of two crystallographically distinct pz ligands, two coordinated and two uncoordinated water molecules. The Cdᴵᴵ cation is surrounded by four oxygen atoms from two (3-nphaH)⁻ anions and two water molecules, as well as two nitrogen atoms from two pz ligands, in a distorted octahedral geometry. A complicated hydrogen-bonding network, accompanied by N–O···π and C–O···π stacking interactions, leads to the formation of a 3D supramolecular network. Commonly, this kind of C–O···π and N–O···π interaction is observed between the electron-rich CO/NO groups of the (3-nphaH)⁻ ligand and the electron-deficient π-system of pyrazine.
Keywords: supramolecular chemistry, Cd coordination polymer, crystal structure, 3-nitrophthalic acid
Procedia PDF Downloads 401
4638 Omni: A Data Science Platform to Evaluate the Performance of a LoRaWAN Network
Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello
Abstract:
Nowadays, physical processes are being digitized through the evolution of communication, sensing, and storage technologies, which promotes the development of smart cities. The evolution of this technology has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Devices can send information that is captured and processed over large areas, but there is no guarantee that all of the collected data will be effectively stored and correctly persisted, because, depending on the technology used, certain parameters have a huge influence on the full delivery of information. This article characterizes the project, currently under development, of a platform that uses data science to evaluate the performance and effectiveness of an industrial network implementing LoRaWAN technology, relating its main configuration parameters to information loss.
Keywords: Internet of Things, LoRa, LoRaWAN, smart cities
Procedia PDF Downloads 148
4637 Controller Design Using GA for SMC Systems
Authors: Susy Thomas, Sajju Thomas, Varghese Vaidyan
Abstract:
This paper considers sliding mode control (SMC) systems using linear feedback with switched gains and proposes a method that can minimize pole perturbation. The method enhances the robustness of the controller. A pre-assigned neighborhood of the 'nominal' pole positions is specified, and the system poles are not allowed to stray out of these bounds even when parameter variations/uncertainties act upon the system. A quasi sliding mode motion is maintained within the assigned boundaries of the sliding surface.
Keywords: parameter variations, pole perturbation, sliding mode control, switching surface, robust switching vector
Procedia PDF Downloads 363
4636 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation
Authors: Carl van Walraven, Meltem Tuna
Abstract:
Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population ('concurrent external validation'). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (the accuracy of random guessing) to 1 (the accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA score correctly ranked all simulated prediction models by the extent of bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
Keywords: prediction model accuracy, scaled Brier score, fixed-effects methods, concurrent external validation
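The Scaled Brier Score mentioned above rescales the Brier score so that 0 corresponds to predicting the observed event rate and 1 to perfect prediction. A minimal numerical sketch (function name and toy data are illustrative, not from the study):

```python
def scaled_brier_score(y, p):
    """Scaled Brier Score for binary outcomes y and predicted probabilities p."""
    n = len(y)
    # Brier score of the model's predictions
    brier = sum((pi - yi) ** 2 for yi, pi in zip(y, p)) / n
    # Reference model: always predict the observed event rate (prevalence)
    prev = sum(y) / n
    brier_ref = sum((prev - yi) ** 2 for yi in y) / n
    # 1 = perfect, 0 = no better than predicting prevalence, < 0 = worse
    return 1.0 - brier / brier_ref
```

Perfect predictions give a score of 1, and predicting the prevalence for every patient gives 0, which matches the anchoring described in the abstract.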
Procedia PDF Downloads 235
4635 Lattice Network Model for Calculation of Eddy Current Losses in a Solid Permanent Magnet
Authors: Jan Schmidt, Pierre Köhring
Abstract:
Permanently excited machines are built with magnets made of highly energetic magnetic materials. Inherently, the permanent magnets warm up while the machine is operating. With increasing temperature, the electromotive force and hence the efficiency decrease. The causes are slot harmonics and distorted armature currents arising from frequency inverter operation. To avoid demagnetization of the permanent magnets, it is necessary to ensure that they do not heat up excessively, since demagnetization is irreversible and makes a breakdown of the electrical machine inevitable. For the design of an electrical machine, knowledge of the heating behavior of the permanent magnet under operating conditions is therefore of crucial importance. A calculation model is presented with which the machine designer can easily calculate the eddy current losses in the magnetic material.
Keywords: analytical model, eddy current, losses, lattice network, permanent magnet
Procedia PDF Downloads 420
4634 Integrated Location-Allocation Planning in Multi-Product, Multi-Echelon, Single-Period Closed-Loop Supply Chain Network Design
Authors: Santhosh Srinivasan, Vipul Garhiya, Shahul Hamid Khan
Abstract:
Environmental performance, along with social performance, is becoming a vital factor for industries seeking to meet global standards, and global industries use good environmental policies to differentiate themselves from their competitors. This paper concentrates on a multi-stage, multi-product, multi-period manufacturing network. Single-objective mathematical models for the total cost of the entire forward and reverse supply chains are considered. Five different problems, obtained by varying the number of facilities, are considered for illustration. M-MOGA, the Shuffled Frog Leaping Algorithm (SFLA), and CPLEX are used to find the optimal solution of the mathematical model.
Keywords: closed-loop supply chain, genetic algorithm, random search, multi-period, green supply chain
Procedia PDF Downloads 391
4633 First-Principles Study of XNMg3 (X = P, As, Sb, Bi) Antiperovskite Compounds
Authors: Kadda Amara, Mohammed Elkeurti, Mostefa Zemouli, Yassine Benallou
Abstract:
In this work, we present a study of the structural, elastic, and electronic properties of the cubic antiperovskites XNMg3 (X = P, As, Sb and Bi) using the full-potential augmented plane wave plus local orbital (FP-LAPW+lo) method within the Generalized Gradient Approximation based on the PBEsol (Perdew 2008) functional. We determined the lattice parameters, the bulk modulus B, and its pressure derivative B'. In addition, elastic properties such as the elastic constants (C11, C12 and C44), the shear modulus G, the Young's modulus E, the Poisson's ratio ν, and the B/G ratio are given. For the band structure, density of states, and charge density, exchange and correlation effects were treated with the Tran-Blaha modified Becke-Johnson potential to remedy the underestimation of energy gaps in both the LDA and GGA approximations. The obtained results are compared to available experimental data and to other theoretical calculations.
Keywords: XNMg3 compounds, GGA-PBEsol, TB-mBJ, elastic properties, electronic properties
Procedia PDF Downloads 409
4632 Contractor Selection by Using Analytical Network Process
Authors: Badr A. Al-Jehani
Abstract:
Nowadays, contractor selection is a critical activity of the project owner. Selecting the right contractor is essential for the success of the project, and this can happen by using a proper selection method. Traditionally, the contractor is selected based on his offered bid price. This approach focuses only on the price factor and neglects other factors essential to the success of the project. In this research paper, the Analytic Network Process (ANP) method is used as a decision tool to select the most appropriate contractor. This decision-making method can help clients who work in the construction industry identify contractors who are capable of delivering satisfactory outcomes. Moreover, this research paper provides a case study of selecting the proper contractor among three contractors using the ANP method. The case study identifies and computes the relative weights of the eight criteria and eleven sub-criteria using a questionnaire.
Keywords: contractor selection, project management, decision-making, bidding
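The criterion weights in ANP-style methods are typically derived from pairwise-comparison matrices via their principal eigenvector. A minimal sketch of that single step using power iteration (the example matrix and function name are hypothetical, not from the case study):

```python
def priority_vector(matrix, iters=50):
    """Principal eigenvector of a pairwise-comparison matrix via power iteration."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iters):
        # Multiply the matrix by the current estimate, then normalize to sum to 1
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v
```

For a 2x2 matrix saying criterion A is judged 3 times as important as criterion B, the resulting weights are 0.75 and 0.25.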
Procedia PDF Downloads 88
4631 Prospective Mathematics Teachers' Content Knowledge on the Definition of Limit and Derivative
Authors: Reyhan Tekin Sitrava
Abstract:
Teachers should have robust and comprehensive content knowledge for effective mathematics teaching. Content knowledge includes knowing facts, truths, and concepts; explaining the reasons behind them; and making connections between the concepts and other disciplines. By virtue of its importance, it is significant to explore teachers' and prospective teachers' content knowledge on a variety of topics in mathematics. From this point of view, the purpose of this study was to investigate prospective mathematics teachers' content knowledge. Particularly, it aimed to reveal the prospective teachers' knowledge regarding the definitions of limit and derivative. To achieve this purpose and gain an in-depth understanding, a qualitative case study method was used. The data was collected from 34 prospective mathematics teachers through a questionnaire containing 2 questions: the first required the prospective teachers to define the limit, and the second to define the derivative. The data was analyzed using the content analysis method. Based on the analysis, although half of the prospective teachers (50%) could write the definition of the limit, nine prospective teachers (26.5%) could not define it, and eight gave definitions regarded as partially correct. On the other hand, twenty-seven prospective teachers (79.5%) could define the derivative, but seven of them (20.5%) defined it only partially. According to the findings, most of the prospective teachers have robust content knowledge of limit and derivative. This result is important because definitions have a vital role in learning and teaching mathematics. More specifically, a definition is the starting point for understanding the meaning of a concept. From this point of view, prospective teachers should know the definitions of the concepts to be able to teach them correctly to students.
In addition, they should have knowledge about the relationship between limit and derivative so that they can explain these concepts conceptually. Otherwise, students may merely memorize the rules for calculating the derivative and the limit. In conclusion, the present study showed that most of the prospective mathematics teachers had sufficient knowledge about the definitions of derivative and limit; the rest, however, should learn these definitions conceptually. Examples of correct, partially correct, and incorrect definitions of both concepts will be presented and discussed based on participants' statements. This study has implications for instructors, who should attend to whether students learn the definitions of these concepts. To this end, instructors may give prospective teachers opportunities to discuss the definitions of these concepts and the relationship between them.
Keywords: content knowledge, derivative, limit, prospective mathematics teachers
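The conceptual link between the two definitions, the derivative as the limit of a difference quotient, can be illustrated numerically; a small sketch (the function and step sizes are chosen purely for illustration):

```python
def difference_quotient(f, x, h):
    # (f(x + h) - f(x)) / h approaches f'(x) as h approaches 0
    return (f(x + h) - f(x)) / h

# For f(x) = x**2 at x = 3, the true derivative is 6; shrinking h
# makes the difference quotient approach that limit.
approximations = [difference_quotient(lambda x: x * x, 3.0, 10.0 ** -k)
                  for k in range(1, 6)]
```

Each successive approximation is closer to 6, making the limit definition of the derivative concrete.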
Procedia PDF Downloads 221
4630 Identifying Concerned Citizen Communication Style During the State Parliamentary Elections in Bavaria
Authors: Volker Mittendorf, Andre Schmale
Abstract:
In this case study, we explore the Twitter use of candidates during the 2018 state parliamentary election year in Bavaria, Germany. The paper focuses on the seven parties that were likely to enter the parliament. Against this background, the paper classifies the use of language as populism, which itself is considered a political communication style. First, we identify the election campaigns that started on Twitter in 2017 and categorize the posting times of the different direct candidates in order to derive ideal types from our empirical data. Second, the exploration is based on a dictionary of concerned citizens, which contains German political language of the right and the far right. Accordingly, we analyze the corpus with methods of text mining and social network analysis, and afterwards display the results as a network of words of the concerned citizen communication style (CCCS).
Keywords: populism, communication style, election, text mining, social media
Procedia PDF Downloads 149
4629 An Intelligent WSN-Based Parking Guidance System
Authors: Sheng-Shih Wang, Wei-Ting Wang
Abstract:
This paper designs an intelligent guidance system, based on wireless sensor networks, for efficient parking in parking lots. The proposed system consists of a parking space allocation subsystem, a parking space monitoring subsystem, a driving guidance subsystem, and a vehicle detection subsystem. In the system, we propose a novel and effective virtual coordinate system for the sensing and display devices to determine the proper vacant parking space and provide precise guidance to the driver. This study constructs a ZigBee-based wireless sensor network on the Arduino platform and implements a prototype of the proposed system using Arduino-based components. Experimental results confirm that the prototype not only works well but also provides drivers with correct parking information.
Keywords: Arduino, parking guidance, wireless sensor network, ZigBee
Procedia PDF Downloads 575
4628 Prioritization of Mutation Test Generation with Centrality Measure
Authors: Supachai Supmak, Yachai Limpiyakorn
Abstract:
Mutation testing can be applied for the quality assessment of test cases. Prioritization of mutation test generation has been a critical element of industry practice that contributes to the evaluation of test cases. Industry generally delivers products under time-to-market pressure and thus inevitably sacrifices software testing tasks, even though many test cases are required for software verification. This paper presents an approach that applies a social network centrality measure, PageRank, to prioritize mutation test generation. The source code with the highest PageRank values is focused on first when developing test cases, as these modules are vulnerable to defects or anomalies that may cause consequent defects in many other associated modules. Moreover, the approach helps identify reducible test cases in the test suite while still maintaining the same criteria as the original number of test cases.
Keywords: software testing, mutation test, network centrality measure, test case prioritization
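Ranking source modules by PageRank over a dependency graph, as described above, can be sketched with a plain power iteration; the graph, module names, and parameters below are hypothetical, not the paper's subject system:

```python
def pagerank(graph, d=0.85, iters=100):
    """PageRank over a module-dependency graph {module: [modules it references]}.
    Every module is assumed to have at least one outgoing edge (no dangling nodes)."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        rank = {
            v: (1 - d) / n
               + d * sum(rank[u] / len(graph[u]) for u in nodes if v in graph[u])
            for v in nodes
        }
    return rank
```

A module referenced by many others accumulates rank, so it would be prioritized first for mutation test generation.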
Procedia PDF Downloads 112
4627 Structural Health Monitoring of Offshore Structures Using Wireless Sensor Networking under Operational and Environmental Variability
Authors: Srinivasan Chandrasekaran, Thailammai Chithambaram, Shihas A. Khader
Abstract:
Early-stage damage detection in offshore structures requires continuous structural health monitoring, and over a large area the position of sensors also plays an important role in efficient damage detection. Determining the dynamic behavior of offshore structures requires dense deployment of sensors. Wired Structural Health Monitoring (SHM) systems are highly expensive and always need large installation space. Wireless sensor networks can enhance an SHM system through deployment of a scalable sensor network that consumes less space. This paper presents the results of a wireless sensor network based structural health monitoring method applied to a scaled experimental model of an offshore structure that underwent wave loading. The method determines the serviceability of the offshore structure, which is subjected to various environmental loads. Wired and wireless sensors were installed in the model, and the response of the scaled BLSRP model under wave loading was recorded. The wireless system discussed in this study is a Raspberry Pi board with an ARMv6 processor, programmed to transmit the data acquired by the sensor to the server using a Wi-Fi adapter; the data is then hosted on a webpage. The data acquired from the wireless and wired SHM systems were compared, and the design of the wireless system was verified.
Keywords: condition assessment, damage detection, structural health monitoring, structural response, wireless sensor network
Procedia PDF Downloads 276
4626 Optimization of Bifurcation Performance on Pneumatic Branched Networks in Next Generation Soft Robots
Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu
Abstract:
Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that modifying the cross-sectional area of the branching network, gradually decreasing it, reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.
Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, Murray's law
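The principle of minimal work invoked above is commonly expressed through Murray's law, which requires the cube of the parent pipe diameter to equal the sum of the cubes of the daughter diameters at each bifurcation. A minimal sketch for symmetric branching (the function name and values are illustrative assumptions, not from the study):

```python
def murray_daughter_diameter(d_parent, n_daughters=2):
    """Murray's law for equal daughters: d_parent**3 == n_daughters * d_daughter**3."""
    return d_parent / n_daughters ** (1.0 / 3.0)
```

For a 10 mm parent pipe splitting into two equal branches, each daughter comes out at about 7.94 mm, preserving the cubic relation.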
Procedia PDF Downloads 84
4625 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in producing liquid detergent products, formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models, Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks, was carried out on the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, to greatly improve detection accuracy while minimizing false positives. The study uses a rich dataset of defect types and production parameters consisting of more than 100,000 samples, including real-time sensor data, imaging technologies, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs, fine-tuned with real-time imaging data, reach 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, with a reduction in false positives of about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects such as viscosity and color variations.
These performance metrics represent a large leap in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models hasten defect characterization, bringing detection time below 15 seconds, from an average of 3 minutes with manual inspection, through real-time data processing. This time saving is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures for a 20% improvement in overall production efficiency. Optimizing machine learning algorithms for defect characterization therefore gives liquid detergent companies scalability, efficiency, and improved operational performance with higher levels of product quality. In general, this method could be applied in several industries within the fast-moving consumer goods sector, leading to improved quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
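The accuracy and false-positive figures quoted above can be related through standard detection metrics; a small sketch (the counts below are illustrative, not the study's actual confusion-matrix data):

```python
def precision_recall(tp, fp, fn):
    """Detection quality from a confusion matrix:
    precision = fraction of flagged defects that are real,
    recall    = fraction of real defects that are flagged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall
```

Reducing false positives (fp) raises precision without touching recall, which is the trade-off the hyperparameter tuning described above targets.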
Procedia PDF Downloads 18
4624 Analytical Study of Holographic Polymer Dispersed Liquid Crystals Using Finite Difference Time Domain Method
Authors: N. R. Mohamad, H. Ono, H. Haroon, A. Salleh, N. M. Z. Hashim
Abstract:
In this research, we have studied and analyzed the modulation of light and liquid crystal in HPDLCs using the Finite Difference Time Domain (FDTD) method. HPDLCs are modeled as a mixture of polymer and liquid crystals (LCs), categorized as an anisotropic medium. The FDTD method directly solves Maxwell's equations with few approximations, so it allows a more flexible and general treatment of arbitrary anisotropic media. The FDTD simulations show that the highest diffraction efficiency occurred at ±19 degrees (the Bragg angle) using a p-polarized incident beam on the Bragg grating, with Q > 10 when the pitch is 1 µm. Therefore, the liquid crystal is assumed to be aligned parallel to the grating vector under these parameters.
Keywords: birefringence, diffraction efficiency, finite difference time domain, nematic liquid crystals
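The FDTD update itself is a leapfrog of Maxwell's curl equations on a staggered grid; a minimal 1-D sketch in normalized units (the grid size, Gaussian source, and Courant number here are illustrative choices, far simpler than the anisotropic HPDLC model of the study):

```python
import math

def fdtd_1d(steps, size=200, courant=0.5):
    """Leapfrog update of Ez and Hy on a staggered 1-D grid (normalized units)."""
    ez = [0.0] * size
    hy = [0.0] * size
    for t in range(steps):
        # Update E from the spatial difference of H
        for k in range(1, size):
            ez[k] += courant * (hy[k - 1] - hy[k])
        # Hard Gaussian source at the grid center
        ez[size // 2] = math.exp(-((t - 60.0) ** 2) / 400.0)
        # Update H from the spatial difference of E
        for k in range(size - 1):
            hy[k] += courant * (ez[k] - ez[k + 1])
    return ez

fields = fdtd_1d(150)
```

The injected pulse splits into two counter-propagating waves; a Courant number no greater than 1 keeps the scheme stable.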
Procedia PDF Downloads 460
4623 Nanofluids and Hybrid Nanofluids: Comparative Study of Mixed Convection in a Round Bottom Flask
Authors: Hicham Salhi
Abstract:
This research project focuses on the numerical investigation of the mixed convection of hybrid nanofluids in a round bottom flask commonly used in organic chemistry synthesis. The aim of this study is to improve the thermal properties of the reaction medium and enhance the rate of chemical reactions by using hybrid nanofluids. The flat bottom wall of the flask is maintained at a constant high temperature, while the top, left, and right walls are kept at a low temperature. The nanofluids used in this study contain suspended Cu and Al2O3 nanoparticles in pure water. The governing equations are solved numerically using the finite-volume approach and the Boussinesq approximation. The effects of the volume fraction of nanoparticles (φ) ranging from 0% to 5%, the Rayleigh number from 10³ to 10⁶, and the type of nanofluid (Cu and Al2O3) on the flow streamlines, isotherm distribution, and Nusselt number are examined in the simulation. The results indicate that the addition of Cu and Al2O3 nanoparticles increases the mean Nusselt number, which improves heat transfer and significantly alters the flow pattern. Moreover, the mean Nusselt number increases with increasing Rayleigh number and volume fraction, with the Cu–Al2O3 hybrid nanofluid producing the best results.
Keywords: bottom flask, mixed convection, hybrid nanofluids, numerical simulation
Procedia PDF Downloads 87
4622 Design of Wide-Range Variable Fractional-Delay FIR Digital Filters
Authors: Jong-Jy Shyu, Soo-Chang Pei, Yun-Da Huang
Abstract:
In this paper, the design of wide-range variable fractional-delay (WR-VFD) finite impulse response (FIR) digital filters is proposed. Whereas the conventional VFD filter is designed such that its delay is adjustable within one unit, the proposed VFD FIR filter is designed such that its delay can be tuned within a wider range. From the traces of the coefficients of the fractional-delay FIR filter, it is found that the conventional method of polynomial substitution for the filter coefficients no longer satisfies the design demand, so circuits performing the sinc function (sinc converters) are added to overcome this problem. In this paper, the least-squares method is adopted to design the WR-VFD FIR filter. Several examples are presented to demonstrate the effectiveness of the proposed methods.
Keywords: digital filter, FIR filter, variable fractional-delay (VFD) filter, least-squares approximation
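The role of the sinc function in fractional-delay FIR filters can be sketched with a basic windowed-sinc design; this is a generic illustration under simple assumptions (Hamming window, 21 taps), not the paper's least-squares WR-VFD method:

```python
import math

def frac_delay_fir(delay, num_taps=21):
    """Windowed-sinc FIR approximating a delay of (num_taps-1)/2 + delay samples."""
    center = (num_taps - 1) / 2.0
    h = []
    for n in range(num_taps):
        x = n - center - delay
        # Shifted sinc gives the ideal fractional-delay impulse response
        sinc = 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)
        # Hamming window tapers the truncated sinc to reduce ripple
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / (num_taps - 1))
        h.append(sinc * w)
    return h
```

At zero fractional delay the design collapses to a single unit tap at the center, i.e. a pure integer delay; nonzero `delay` spreads the energy across taps via the shifted sinc.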
Procedia PDF Downloads 491
4621 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI
Authors: Ananya Ananya, Karthik Rao
Abstract:
Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation for the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone, and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Similarity Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net
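The Dice coefficient used for evaluation measures the overlap between a predicted and a reference binary mask; a minimal sketch on flat toy masks (not the SKI10 data):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for flat binary masks of equal length."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Both masks empty counts as perfect agreement
    return 2.0 * intersection / total if total else 1.0
```

A score of 1.0 means the predicted segmentation matches the manual reference voxel for voxel, which is the sense in which the 0.95+ DSC values above are read.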
Procedia PDF Downloads 261
4620 Shock Compressibility of Iron Alloys Calculated in the Framework of Quantum-Statistical Models
Authors: Maxim A. Kadatskiy, Konstantin V. Khishchenko
Abstract:
Iron alloys are widespread components in various types of structural materials that are exposed to intensive thermal and mechanical loads. Various quantum-statistical cell models with the self-consistent-field approximation can be used to predict the behavior of these materials under extreme conditions; the application of these models becomes more valid the higher the temperature and density of the matter. Results of Hugoniot calculations for iron alloys in the framework of three quantum-statistical models (the Thomas–Fermi model, the Thomas–Fermi model with quantum and exchange corrections, and the Hartree–Fock–Slater model) are presented. Results of the quantum-statistical calculations are compared with results from other reliable models and available experimental data. A good agreement between calculated results and experimental data is revealed at terapascal pressures. Advantages and disadvantages of this approach are discussed.
Keywords: alloy, Hugoniot, iron, terapascal pressure
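The Hugoniot states compared above follow from the Rankine–Hugoniot jump conditions relating shock velocity, particle velocity, and the initial density. A minimal sketch with the initial pressure neglected (the numerical values are illustrative, not the paper's data):

```python
def hugoniot_state(rho0, us, up):
    """Rankine-Hugoniot jump conditions for a steady shock (initial pressure ~ 0).
    rho0: initial density [kg/m^3], us: shock velocity [m/s], up: particle velocity [m/s]."""
    p = rho0 * us * up                       # momentum conservation
    rho = rho0 * us / (us - up)              # mass conservation
    e = 0.5 * p * (1.0 / rho0 - 1.0 / rho)   # specific internal energy gain
    return p, rho, e
```

For example, with rho0 = 7870 kg/m³ (roughly iron), us = 6 km/s, and up = 1.5 km/s, the jump conditions give a shock pressure of about 71 GPa; terapascal states correspond to much stronger shocks.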
Procedia PDF Downloads 342
4619 Some Tips for Increasing Online Services Safety
Authors: Mohsen Rezaee
Abstract:
Although robust security software, including anti-virus, anti-spyware, anti-spam, and firewall products, is amalgamated with new technologies such as safe zones, hybrid clouds, sandboxes, etc., and although it can be said that such software provided the highest level of security against viruses, spyware, and other malware in 2012, hacker attacks on websites are in fact becoming more and more complicated. Given the developments in security matters, this was to be expected. In this work we point out some functional and vital tips to enhance security on the web, enabling the user to browse the unlimited web world safely and to use virtual space securely.
Keywords: firewalls, security, web services, computer science
Procedia PDF Downloads 4044618 An Adaptive Conversational AI Approach for Self-Learning
Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo
Abstract:
In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. To meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience in order to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it is made to self-measure its performance and reconsider every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck or runs into difficulties, human personnel are informed and can give further instructions. They may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. 
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge offering polite conversation to visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.Keywords: conversational AI, chatbot, dialog management, semantic analysis
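The self-measure-then-retrain-or-escalate cycle described in this abstract can be sketched as a short loop; the class name, threshold, and placeholder responses below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the self-learning cycle: serve known dialog flows, log
# unknown ones, self-measure coverage, then retrain or escalate to a human.
# All names and thresholds here are illustrative assumptions.

class AdaptiveChatbot:
    def __init__(self, retrain_threshold=0.8):
        self.known_flows = {"greeting": "Hello, welcome!"}
        self.unknown_utterances = []          # dialogs outside current scope
        self.retrain_threshold = retrain_threshold

    def respond(self, utterance):
        """Answer from known dialog flows; log anything out of scope."""
        if utterance in self.known_flows:
            return self.known_flows[utterance]
        self.unknown_utterances.append(utterance)
        return None                           # defer / fall back

    def self_measure(self, total_turns):
        """Fraction of turns served by known flows."""
        handled = total_turns - len(self.unknown_utterances)
        return handled / total_turns if total_turns else 1.0

    def retrain_or_escalate(self, total_turns):
        """Retrain on logged unknowns, or escalate when coverage is too low."""
        if self.self_measure(total_turns) < self.retrain_threshold:
            return "escalate_to_human"
        for utt in self.unknown_utterances:   # learn the new business cases
            self.known_flows[utt] = "(newly learned response)"
        self.unknown_utterances.clear()
        return "retrained"
```

In a real deployment the "newly learned response" step would be the semantic-analysis and ontology-construction stage the abstract describes, rather than a literal dictionary insert.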
Procedia PDF Downloads 1364617 Using Wearable Device with Neuron Network to Classify Severity of Sleep Disorder
Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu
Abstract:
Background: Sleep-disordered breathing (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation during sleep time. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability by electrocardiogram and the respiratory pattern by impedance. After polysomnography (PSG) was performed, with SDB diagnosed by the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device record. Subjects were divided into training and testing groups. A neural network was used to establish a prediction model to classify the severity of SDB from the AI, episodes, and body profiles. The performance was evaluated by classification in the testing group compared with PSG. Results: In this study, we enrolled 66 subjects (male/female: 37/29; age: 49.9±13.2) with a diagnosis of SDB in a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy from the confusion matrix on the test group by the neural network is 71.94%. Conclusion: Based on the models, we established a prediction model for SDB by means of the wearable sensor. With more cases coming in for training, this system may be used to rapidly and automatically screen the risk of SDB in the future.Keywords: sleep breathing disorder, apnea and hypopnea index, body parameters, neuron network
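The severity classes the network predicts follow the standard AHI cut-offs (events per hour), and the classifier itself is a small feed-forward network. A minimal sketch of both is below; the network sizes, random weights, and input features are illustrative placeholders, not the study's trained model.

```python
import numpy as np

# Standard clinical AHI cut-offs for SDB severity (events per hour).
def ahi_severity(ahi):
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

def forward(x, w1, b1, w2, b2):
    """One forward pass of a two-layer network: features -> 4 class probs."""
    h = np.maximum(0.0, x @ w1 + b1)      # ReLU hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
x = np.array([0.4, 0.7, 0.2])             # e.g. scaled AI, episodes, body profile
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
probs = forward(x, w1, b1, w2, b2)        # probabilities over the 4 classes
```

Training the weights (e.g. by gradient descent on cross-entropy against the PSG-derived labels) is what the study's 66-subject train/test split would supply.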
Procedia PDF Downloads 1504616 Cost-Effective, Accuracy Preserving Scalar Characterization for mmWave Transceivers
Authors: Mohammad Salah Abdullatif, Salam Hajjar, Paul Khanna
Abstract:
The development of instrument-grade mmWave transceivers comes with many challenges. A general rule of thumb is that the performance of the instrument must be higher than the performance of the unit under test in terms of accuracy and stability. The calibration and characterization of mmWave transceivers are important pillars for testing commercial products. Using a Vector Network Analyzer (VNA) with a mixer option has proven to be a high-performance approach to calibrating mmWave transceivers. However, this approach comes at a high cost. In this work, a reduced-cost method to calibrate mmWave transceivers is proposed. A comparison between the proposed method and the VNA technology is provided. The significant challenges are discussed, and an approach to meeting the requirements is proposed.Keywords: mmWave transceiver, scalar characterization, coupler connection, magic tee connection, calibration, VNA, vector network analyzer
Procedia PDF Downloads 1074615 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact
Authors: Abdullah Almowanes
Abstract:
The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological-innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival work aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia had to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia
Procedia PDF Downloads 2454614 A Study of Behaviors in Using Social Networks of Corporate Personnel of Suan Sunandha Rajabhat University
Authors: Wipada Chaiwchan
Abstract:
This research aims to study the social networking behaviors of the corporate personnel of Suan Sunandha Rajabhat University. The sample comprised two groups: 1) 70 academic officers and 2) 143 operation officers. The research tool was a questionnaire; the data were analyzed using percentage, mean (X), and standard deviation (S.D.), with an independent-samples t-test to test the difference between the mean values of two independent samples, one-way ANOVA for analysis of variance, and multiple comparisons of mean pairs by Fisher's Least Significant Difference (LSD). The study found that most corporate personnel use social networks for information awareness, knowledge, and online conferencing via social media, spending on average more than 3 hours per day, every day, within a working day, and that they have computers connected to the Internet at home and use this communication in their operational processes. Social networking behaviors were examined in relation to gender, age, job title, department, and type of personnel. Hypothesis testing and analysis of variance were divided into three aspects: the use of online social networks, the attitude of the users, and security. The analysis found that the corporate personnel of Suan Sunandha Rajabhat University rated all aspects, overall and individually, at a high level: use of the social network (X=3.22), attitude of the users (X=3.06), and security (X=3.11), with an overall behavior score of X=3.11.Keywords: social network, behaviors, social media, computer information systems
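The statistical tests named in this abstract (independent-samples t-test, one-way ANOVA) are one-liners in SciPy; the sketch below uses synthetic scores with the study's group sizes, not the actual survey data.

```python
import numpy as np
from scipy import stats

# Synthetic usage scores for the two personnel groups (not the survey data):
# 70 academic officers vs. 143 operation officers.
rng = np.random.default_rng(1)
academic  = rng.normal(3.2, 0.5, 70)
operation = rng.normal(3.0, 0.5, 143)

# Independent-samples t-test between the two groups' means.
t_stat, t_p = stats.ttest_ind(academic, operation)

# One-way ANOVA across three illustrative subgroups (e.g. by department).
g1, g2, g3 = academic[:30], academic[30:], operation[:70]
f_stat, f_p = stats.f_oneway(g1, g2, g3)
```

If the ANOVA is significant, the Fisher LSD post-hoc step the abstract mentions amounts to pairwise t-tests between group means using the pooled within-group variance.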
Procedia PDF Downloads 3944613 Transformation of the Traditional Landscape of Kabul Old City: A Study for Its Conservation
Authors: Mohammad Umar Azizi, Tetsuya Ando
Abstract:
This study investigates the transformation of the traditional landscape of Kabul Old City through an examination of five case study areas. Based on physical observation, three types of houses are found: traditional, mixed and modern. Firstly, characteristics of the houses are described according to construction materials and the number of stories. Secondly, internal and external factors are considered in order to implement a conservation plan. Finally, an adaptive conservation plan is suggested to protect the traditional landscape of Kabul Old City.Keywords: conservation, district 1, Kabul Old City, landscape, transformation, traditional houses
Procedia PDF Downloads 2214612 A Vision Making Exercise for Twente Region; Development and Assessment
Authors: Gelareh Ghaderi
Abstract:
The overall objective of this study is to develop two alternative plans of spatial and infrastructural development for the Netwerkstad Twente (Twente region) until 2040 and to assess the impacts of those two alternative plans. This region is located on the eastern border of the Netherlands and comprises five municipalities. Based on the strengths and opportunities of the five municipalities of the Netwerkstad Twente, and in order to develop the region internationally, strengthen the job market, and retain a skilled and knowledgeable young population, two alternative visions have been developed: an environment-oriented vision and a market-oriented vision. The environment-oriented vision is based mostly on preserving beautiful landscapes; Twente would be recognized as an educational center, driven by green technologies and an environment-friendly economy. The market-oriented vision is based on attracting and developing different economic activities in the region, following the visions of the five cities of the Netwerkstad Twente, in order to improve the competitiveness of the region on a national and international scale. On the basis of the two developed visions and the strategies for achieving them, land use and infrastructural development are modeled and assessed. Based on a SWOT analysis, criteria were formulated and employed in modeling the two contrasting land use visions for the year 2040. Land use modeling consists of determining future land use demand, assessing land suitability (suitability analysis), and allocating land uses to suitable land. Suitability analysis aims to determine the available supply of land for future development as well as to assess its suitability for specific types of land use on the basis of the formulated set of criteria. The suitability analysis was carried out using CommunityViz, a planning support system application for spatially explicit land suitability and allocation. 
The Netwerkstad Twente has a highly developed transportation infrastructure, consisting of a highway network, a national road network, a regional road network, a street network, a local road network, a railway network, and a bike-path network. Based on assumed speed limits for the different types of roads, the accessibility of the predicted land use parcels by four different transport modes is investigated. For the evaluation of the two development scenarios, the multi-criteria evaluation (MCE) method is used. The first step was to determine the criteria used for evaluating each vision; all factors were categorized as economic, ecological, or social. The results of the multi-criteria evaluation show that the environment-oriented scenario has the higher overall score, with impressive scores on the economic and ecological factors. This is due to the fact that a large percentage of housing tends towards compact housing. The Twente region has immense potential, and the success of this project will define the eastern part of the Netherlands and create a truly competitive local economy with innovation and an attractive environment as its backbone.Keywords: economical oriented vision, environmental oriented vision, infrastructure, land use, multi-criteria assessment, vision
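At its core, the multi-criteria evaluation used here is a weighted sum of criterion scores per scenario; the criterion names, scores, and weights below are illustrative placeholders, not the study's values.

```python
# Minimal weighted multi-criteria evaluation (MCE) sketch for comparing two
# scenarios. All scores and weights are illustrative assumptions.
def mce_score(scores, weights):
    """Weighted sum of criterion scores, with weights normalised to sum to 1."""
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] / total_w for c in scores)

# Factors categorized as economic, ecological, and social, as in the study.
weights = {"economic": 0.3, "ecological": 0.4, "social": 0.3}

environmental = {"economic": 7.5, "ecological": 8.5, "social": 7.0}
market        = {"economic": 8.0, "ecological": 6.0, "social": 7.5}

env_score = mce_score(environmental, weights)   # 7.75 with these numbers
mkt_score = mce_score(market, weights)          # 7.05 with these numbers
```

With this toy weighting the environment-oriented scenario wins, mirroring the abstract's finding; in practice the weights themselves are a stakeholder decision and drive the outcome.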
Procedia PDF Downloads 2274611 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating product and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises the evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. 
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial-expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
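Frontal alpha asymmetry is conventionally computed as ln(right alpha power) minus ln(left alpha power) over a frontal channel pair (e.g. F4 vs. F3); because alpha power varies inversely with cortical activity, a positive value indexes relatively greater left-frontal activity, the pattern the review links to positive responses. A minimal sketch with synthetic signals (channel choice, sampling rate, and band edges are assumptions):

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean power in the alpha band (8-13 Hz) via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """FAA = ln(right alpha power) - ln(left alpha power); > 0 means
    relatively greater LEFT frontal activity (alpha is inverse to activity)."""
    return np.log(band_power(right, fs)) - np.log(band_power(left, fs))

fs = 256
t = np.arange(0, 2.0, 1.0 / fs)
left  = 0.5 * np.sin(2 * np.pi * 10 * t)   # weak 10 Hz alpha on the left
right = 2.0 * np.sin(2 * np.pi * 10 * t)   # strong alpha on the right
faa = frontal_alpha_asymmetry(left, right, fs)   # positive here
```

Real pipelines would band-pass-filter, epoch, and artifact-reject the EEG first (e.g. with a toolbox such as MNE), but the asymmetry index itself reduces to the log-power difference shown.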
Procedia PDF Downloads 1114610 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which provide justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking contextual or semantic morphological implications into account. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. 
As an extension, we focus on language modeling using a Recurrent Neural Network (RNN); given that morphological analysis coverage has been very low for Dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that accounting for dialectal variability can help improve analysis.Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
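The HMM-based stochastic models the abstract contrasts with the neural approach are decoded with the Viterbi algorithm; a toy sketch for part-of-speech tagging follows, where the two tags, the transliterated words, and all probabilities are invented for illustration (not the paper's model or data).

```python
# Toy Viterbi decoder for an HMM POS tagger. All probabilities are illustrative.
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, best predecessor)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            prob = V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s].get(obs[t], 1e-9)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.7, "VERB": 0.3}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"kitab": 0.6, "qaraa": 0.1},   # "kitab" ~ book (noun-like)
          "VERB": {"kitab": 0.1, "qaraa": 0.6}}   # "qaraa" ~ read (verb-like)
tags = viterbi(["qaraa", "kitab"], states, start_p, trans_p, emit_p)
```

A BP-LSTM-CRF replaces the hand-set transition and emission tables with learned LSTM features and CRF transition scores, but the decoding step is the same dynamic program.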
Procedia PDF Downloads 42