Search results for: 32-bit input
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2143

1033 Improving Axial-Attention Network via Cross-Channel Weight Sharing

Authors: Nazmul Shahadat, Anthony S. Maida

Abstract:

In recent years, hypercomplex-inspired neural networks have improved deep CNN architectures through their ability to share weights across input channels, thus improving the cohesiveness of representations within the layers. The work described herein studies the effect of replacing existing layers in an Axial Attention ResNet with quaternion variants that use cross-channel weight sharing, to assess the effect on image classification. We expect the quaternion enhancements to produce improved feature maps with more interlinked representations. We experiment with the stem of the network, the bottleneck layer, and the fully connected backend by replacing them with quaternion versions. These modifications lead to novel architectures which yield improved accuracy on the ImageNet300k classification dataset. Our baseline networks for comparison were the original real-valued ResNet, the original quaternion-valued ResNet, and the Axial Attention ResNet. Since improvement was observed regardless of which part of the network was modified, this technique shows promise as a generally useful means of improving classification accuracy for a large class of networks.
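As an illustrative sketch (not the authors' implementation), the cross-channel weight sharing of a quaternion linear layer can be expressed with the Hamilton product: four real weight matrices are reused across the four input channel groups, giving a quarter of the parameters of an equivalent real-valued layer.

```python
import numpy as np

def quaternion_linear(x, r, i, j, k):
    """Apply a quaternion linear layer via the Hamilton product.

    x: input of shape (4*n,), viewed as n quaternions stacked channel-wise.
    r, i, j, k: four (m, n) weight matrices shared across the 4 input channels.
    Returns an output of shape (4*m,).
    """
    n = x.shape[0] // 4
    xr, xi, xj, xk = x[:n], x[n:2*n], x[2*n:3*n], x[3*n:]
    # Hamilton product W * x, expanded into four real-valued components
    yr = r @ xr - i @ xi - j @ xj - k @ xk
    yi = r @ xi + i @ xr + j @ xk - k @ xj
    yj = r @ xj - i @ xk + j @ xr + k @ xi
    yk = r @ xk + i @ xj - j @ xi + k @ xr
    return np.concatenate([yr, yi, yj, yk])

n, m = 8, 4
rng = np.random.default_rng(0)
w = [rng.standard_normal((m, n)) for _ in range(4)]
y = quaternion_linear(rng.standard_normal(4 * n), *w)
# 4 shared (m, n) matrices replace one (4m, 4n) matrix: 4x fewer parameters
assert y.shape == (4 * m,)
```

The same weight-sharing pattern carries over to quaternion convolutions, where each of the four kernels is reused across the four channel groups in the same arrangement.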

Keywords: axial attention, representational networks, weight sharing, cross-channel correlations, quaternion-enhanced axial attention, deep networks

Procedia PDF Downloads 61
1032 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement

Authors: Rhadinia Tayag-Relanes, Felina C. Young

Abstract:

This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the PDCA approach and record review to gather data for the calendar year 2019, from August to October, on the noodle products miki, canton, and misua. Causal-comparative research was used, as it attempts to establish cause-effect relationships among the variables; descriptive statistics and correlation were used to analyze the data gathered. The study found that miki, canton, and misua production each have different cycle times, different production outputs in every set of the production process, and different amounts of wastage. The company has not yet established an allowable rejection/wastage rate; instead, this paper used a 1% wastage limit. The researcher recommended the following: machines used for each process of the noodle products must be consistently maintained and monitored; all production operators should be assessed by checking their performance statistically against output and machine performance; a root cause analysis must be conducted to find solutions; and the recording system for the input and output of the noodle production process should be improved to eliminate poor recording of data.

Keywords: production, continuous improvement, process, operations, PDCA

Procedia PDF Downloads 43
1031 Multi-Objective Optimization of Electric Discharge Machining for Inconel 718

Authors: Pushpendra S. Bharti, S. Maheshwari

Abstract:

Electric discharge machining (EDM) is one of the most widely used non-conventional manufacturing processes for shaping difficult-to-cut materials. The process yield of EDM, in terms of material removal rate, surface roughness, and tool wear rate, may be considerably improved by selecting the optimal combination(s) of process parameters. This paper employs the multi-response signal-to-noise (MRSN) ratio technique to find the optimal combination(s) of process parameters during EDM of Inconel 718. Three cases, viz. high cutting efficiency, high surface finish, and normal machining, have been considered, and the optimal combination of input parameters has been obtained for each case. Analysis of variance (ANOVA) has been employed to find the dominant parameter(s) in all three cases. The obtained results have also been verified experimentally. The MRSN ratio technique was found to be a simple and effective multi-objective optimization technique.
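As a minimal sketch of the approach (the measurement values and weights below are illustrative, not the paper's data), Taguchi signal-to-noise ratios can be computed per response and combined into a weighted multi-response score:

```python
import numpy as np

def sn_larger_better(y):
    # Taguchi S/N ratio for responses to maximize (e.g. material removal rate)
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_better(y):
    # Taguchi S/N ratio for responses to minimize (e.g. roughness, tool wear)
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def mrsn(sn_ratios, weights):
    # Multi-response S/N: weighted sum of the individual S/N ratios
    return float(np.dot(weights, sn_ratios))

# illustrative repeated measurements for one parameter combination
mrr = [2.1, 2.3, 2.2]   # material removal rate (to maximize)
ra = [3.2, 3.0, 3.1]    # surface roughness (to minimize)
score = mrsn([sn_larger_better(mrr), sn_smaller_better(ra)], [0.5, 0.5])
```

The parameter combination with the highest MRSN score would then be selected, with ANOVA used separately to rank parameter dominance.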

Keywords: electric discharge machining, material removal rate, surface roughness, tool wear rate, multi-response signal-to-noise ratio, optimization

Procedia PDF Downloads 340
1030 Exploring the Challenges to Usage of Building Construction Cost Indices in Ghana

Authors: Jerry Gyimah, Ernest Kissi, Safowaa Osei-Tutu, Charles Dela Adobor, Theophilus Adjei-Kumi, Ernest Osei-Tutu

Abstract:

Price fluctuation contracts are of paramount importance in the construction industry, as they provide adequate relief and cushioning for changes in the prices of input resources during construction. As a result, several methods have been devised to help arrive at fair recompense in the event of price changes. However, stakeholders often appear unsatisfied with the existing methods of fluctuation evaluation, ostensibly because of the challenges associated with them. The aim of this study was to identify the challenges to the usage of building construction cost indices in Ghana. Data was gathered from contractors and quantity surveying firms using a survey questionnaire. The data gathered was analyzed using the relative importance index (RII) to rank the problems associated with the existing methods. The findings revealed, among others, late release of data, inadequate recovery of costs, and the omission of work items of interest from the published indices as the main challenges of the existing methods. The findings provide useful lessons for policymakers and practitioners in decision making towards the usage and improvement of available indices.
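The relative importance index used for the ranking has a standard closed form, RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest possible rating, and N is the number of respondents. A minimal sketch (the ratings below are illustrative, not the study's data):

```python
def relative_importance_index(ratings, max_scale=5):
    """RII = sum(W) / (A * N): W are the Likert ratings given by respondents,
    A is the highest possible rating, N is the number of respondents.
    The result ranges from 0 to 1; higher means more important."""
    return sum(ratings) / (max_scale * len(ratings))

# illustrative 5-point Likert ratings for one challenge, e.g. "late release of data"
rii = relative_importance_index([5, 4, 5, 3, 4])
```

Computing the RII for each challenge and sorting in descending order yields the ranking reported in such studies.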

Keywords: building construction cost indices, challenges, usage, Ghana

Procedia PDF Downloads 133
1029 Comparison of Homogeneous and Micro-Mechanical Modelling Approach for Paper Honeycomb Materials

Authors: Yiğit Gürler, Berkay Türkcan İmrağ, Taylan Güçkıran, İbrahim Şimşek, Alper Taşdemirci

Abstract:

A paper honeycomb is a sandwich structure consisting of two liner faces and a paper honeycomb core. These materials are widely used in the packaging industry due to their low cost, low weight, good energy absorption capabilities, and easy recycling. However, to provide maximum protection to the products in cases such as a drop of the packaged product, the mechanical behavior of these materials should be well understood at the packaging design stage. In this study, the input parameters necessary for the modeling work were obtained by performing compression tests in the through-thickness and in-plane directions of paper-based honeycomb sandwich structures. With the obtained parameters, homogeneous and micro-mechanical numerical models were developed in the LS-DYNA environment. The material card used for the homogeneous model is MAT_MODIFIED_HONEYCOMB, and the material card used for the micro-mechanical model is MAT_PIECEWISE_LINEAR_PLASTICITY. The effectiveness of the homogeneous and micro-mechanical modeling approaches for the paper-based honeycomb sandwich structure is investigated using force-displacement curves, comparing the densification points and peak points on these curves.

Keywords: environmental packaging, mechanical characterization, LS-DYNA, sandwich structure

Procedia PDF Downloads 175
1028 Reconfigurable Consensus Achievement of Multi Agent Systems Subject to Actuator Faults in a Leaderless Architecture

Authors: F. Amirarfaei, K. Khorasani

Abstract:

In this paper, reconfigurable consensus achievement is considered for a team of agents with marginally stable linear dynamics and a single input channel. The control algorithm is based on a first-order linear protocol. After the occurrence of a loss-of-effectiveness (LOE) fault in one of the actuators, the control gain is redesigned, using the imperfect information on actuator effectiveness from the fault detection and identification module, in such a way that consensus is still reached. The idea is based on modeling the change in effectiveness as a change of the Laplacian matrix. Then, as special cases of this class of systems, teams of single integrators as well as double integrators are considered, and their behavior subject to an LOE fault is analyzed. The well-known relative-measurements consensus protocol is applied to a leaderless team of single-integrator as well as double-integrator systems, and the Gershgorin disk theorem is employed to determine whether the fault occurrence affects system stability and team consensus achievement. The analyses show that a loss-of-effectiveness fault in the actuator(s) of integrator systems affects neither system stability nor consensus achievement.
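The single-integrator claim can be illustrated with a small simulation (a generic sketch of the relative-measurements protocol, not the paper's setup): scaling one agent's row of the Laplacian by its reduced actuator effectiveness still drives all states to a common value.

```python
import numpy as np

# Graph Laplacian of an undirected path graph on 4 agents
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Loss-of-effectiveness fault: agent 2's actuator works at 40% effectiveness
E = np.diag([1.0, 1.0, 0.4, 1.0])

x = np.array([3.0, -1.0, 2.0, 5.0])   # initial states of the single integrators
dt = 0.01
for _ in range(20000):                 # Euler integration of x' = -E L x
    x = x - dt * (E @ L @ x)

spread = x.max() - x.min()             # agents should still agree
```

For a connected undirected graph and strictly positive effectiveness values, −EL retains a single zero eigenvalue, so the states still converge to a common (now effectiveness-weighted) consensus value.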

Keywords: multi-agent system, actuator fault, stability analysis, consensus achievement

Procedia PDF Downloads 318
1027 Energy Efficiency Analysis of Discharge Modes of an Adiabatic Compressed Air Energy Storage System

Authors: Shane D. Inder, Mehrdad Khamooshi

Abstract:

Efficient energy storage is a crucial factor in facilitating the uptake of renewable energy resources. Among the many options available for energy storage systems required to balance imbalanced supply and demand cycles, compressed air energy storage (CAES) is a proven technology in grid-scale applications. This paper reviews the current state of micro-scale CAES technology and describes a micro-scale advanced adiabatic CAES (A-CAES) system, where heat generated during compression is stored for use in the discharge phase. It also describes a thermodynamic model, developed in EES (Engineering Equation Solver), to evaluate the performance and critical parameters of the discharge phase of the proposed system. Three configurations are examined: a single turbine without preheater, two turbines with preheaters, and three turbines with preheaters. It is shown that the micro-scale A-CAES is highly dependent upon key parameters including regulator pressure, air pressure and volume, thermal energy storage temperature and flow rate, and the number of turbines. It was found that a micro-scale A-CAES, when optimized with an appropriate configuration, could deliver an energy input-to-output efficiency of up to 70%.

Keywords: CAES, adiabatic compressed air energy storage, expansion phase, micro generation, thermodynamic

Procedia PDF Downloads 297
1026 Risk Mitigation of Data Causality Analysis Requirements AI Act

Authors: Raphaël Weuts, Mykyta Petik, Anton Vedder

Abstract:

Artificial intelligence has the potential to create, and already creates, enormous value in healthcare. Prescriptive systems might make the use of healthcare capacity more efficient. Such systems might, however, entail interpretations that exclude the effect of confounders, which brings risks with it. Those risks might be mitigated by regulation that prevents systems entailing such risks from coming to market. One modality of regulation is legislation, and the European AI Act is an example of a regulatory instrument that might mitigate these risks. To assess the risk mitigation potential of the AI Act for those risks, this research focuses on a case study of a hypothetical application of medical device software that entails the aforementioned risks. The AI Act refers to the harmonised norms of already existing legislation, here the European medical device regulation. The issue at hand is a causal link between a confounder and the value the algorithm optimises for by proxy. The research identifies where the AI Act already addresses confounders (i.a. feedback loops in systems that continue to learn after being placed on the market). It also identifies where the current proposal by parliament leaves legal uncertainty on the necessity to check for confounders that do not influence the input of the system, when the system does not continue to learn after being placed on the market. The authors propose an amendment to article 15 of the AI Act that would require high-risk systems to be developed in such a way as to mitigate the risks from those aforementioned confounders.

Keywords: AI Act, healthcare, confounders, risks

Procedia PDF Downloads 240
1025 Creation of Greater Mekong Subregion Regional Competitiveness through Cluster Mapping

Authors: Danuvasin Charoen

Abstract:

This research investigates cluster development in the Greater Mekong Subregion (GMS), which consists of Thailand, the People's Republic of China (PRC, specifically Yunnan Province and the Guangxi Zhuang Autonomous Region), Myanmar, the Lao People's Democratic Republic (Lao PDR), Cambodia, and Vietnam. The study utilized Porter's competitiveness theory and the cluster mapping approach to analyze the competitiveness of the region. The data collection consisted of interviews, focus groups, and the analysis of secondary data. The findings identify some evidence of cluster development in the GMS; however, there is no clear indication of collaboration among the components in the clusters, which tend to be stand-alone. The clusters in Vietnam, Lao PDR, Myanmar, and Cambodia tend to be labor intensive, whereas the clusters in Thailand and the PRC (Yunnan) have the potential to develop successfully into innovative clusters. Collaboration and integration among the clusters in the GMS area are promising, though they could take a long time. The most likely relationship between the GMS countries could be, for example, that suppliers of low-end, labor-intensive products will be located in lower-income countries such as Myanmar, Lao PDR, and Cambodia, providing input materials for innovative clusters in middle-income countries such as Thailand and the PRC.

Keywords: cluster, GMS, competitiveness, development

Procedia PDF Downloads 248
1024 Improved Multilevel Inverter with Hybrid Power Selector and Solar Panel Cleaner in a Solar System

Authors: S. Oladoyinbo, A. A. Tijani

Abstract:

Multilevel inverters (MLI) are used in high-power applications. There are three main types of MLI: diode-clamped, flying-capacitor, and cascaded MLI. A cascaded MLI requires the fewest components to achieve a given number of voltage levels compared to the other types, while the flying-capacitor MLI has the minimum harmonic distortion. By combining the advantages of the cascaded H-bridge MLI and the flying-capacitor MLI, an improved MLI can be achieved with fewer components and better performance. In this paper, an improved MLI is presented by asymmetrically integrating a flying capacitor into a cascaded H-bridge MLI, and by integrating an auxiliary transformer with the main transformer, to decrease the total harmonic distortion (THD) with an increased number of output voltage levels. Furthermore, the system incorporates a hybrid time- and climate-based solar panel cleaner and power selector, which intelligently manages the input of the MLI and cleans the solar panel weekly, ensuring that environmental effects on the panel are reduced to a minimum.

Keywords: multilevel inverter, total harmonic distortion, cascaded H-bridge inverter, flying capacitor

Procedia PDF Downloads 348
1023 Determining Fire Resistance of Wooden Construction Elements through Experimental Studies and Artificial Neural Network

Authors: Sakir Tasdemir, Mustafa Altin, Gamze Fahriye Pehlivan, Sadiye Didem Boztepe Erkis, Ismail Saritas, Selma Tasdemir

Abstract:

Artificial intelligence applications are commonly used in many fields of industry, in parallel with developments in computer technology. In this study, a fire room was prepared for testing the fire resistance of wooden construction elements, and experiments on polished materials were carried out with this setup. Using the experimental data, an artificial neural network (ANN) was modeled to estimate the final cross-sections of the wooden samples remaining after the fire. In the system developed, the initial weight of the samples (ws, g), preliminary cross-section (pcs, mm²), fire time (ft, min), and fire temperature (t, °C) were taken as input parameters, and the final cross-section (fcs, mm²) as the output parameter. When the results obtained from the ANN and the experimental data were compared by statistical analyses, the two groups were determined to be coherent, with no meaningful difference between them. As a result, it is seen that ANNs can safely be used for determining the cross-sections of wooden materials after fire, avoiding many disadvantages of repeated experimental testing.
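The regression setup can be sketched with a small feed-forward network, four inputs to one output. The data below is synthetic and purely illustrative (the real relation comes from the fire-room experiments); the point is the shape of the model, not the numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the fire-room data: the final cross-section shrinks
# with fire time and temperature (an assumed relation, not the measured one)
X = rng.uniform(0.0, 1.0, size=(200, 4))          # ws, pcs, ft, t (normalized)
y = (X[:, 1] - 0.4 * X[:, 2] - 0.3 * X[:, 3]).reshape(-1, 1)   # fcs

# One-hidden-layer MLP: 4 inputs -> 8 tanh units -> 1 linear output
W1 = rng.standard_normal((4, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred = forward(X)
loss_start = float(np.mean((pred - y) ** 2))

lr = 0.1
for _ in range(5000):                              # plain batch gradient descent
    H, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)                  # dLoss/dpred
    gH = (g @ W2.T) * (1.0 - H ** 2)               # backprop through tanh
    W2 -= lr * (H.T @ g)
    b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gH)
    b1 -= lr * gH.sum(axis=0)

_, pred = forward(X)
loss_end = float(np.mean((pred - y) ** 2))
```

In practice, a framework such as one of the common ANN toolboxes would be used instead of hand-written backpropagation, but the input/output structure matches the one described in the abstract.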

Keywords: artificial neural network, final cross-section, fire retardant polishes, fire safety, wood resistance

Procedia PDF Downloads 366
1022 Robustness Conditions for the Establishment of Stationary Patterns of Drosophila Segmentation Gene Expression

Authors: Ekaterina M. Myasnikova, Andrey A. Makashov, Alexander V. Spirov

Abstract:

The first manifestation of a segmentation pattern in early Drosophila development is the formation of expression domains (along the main embryo axis) of genes belonging to the trunk gene class. The highly variable expression of genes from the gap family in the early Drosophila embryo is strongly reduced by the start of gastrulation due to gene cross-regulation. The dynamics of gene expression is described by a gene circuit model for a system of four gap genes. It is shown that for the model to form a steep and stationary border, there must exist a nucleus (modeling point) in which the gene expression level is constant in time and hence described by a stationary equation. All other genes expressed in this nucleus are in a dynamic equilibrium. The mechanism of border formation associated with the existence of a stationary nucleus is also confirmed by experiment. An important advantage of this approach is that the properties of the system in a stationary nucleus are described by algebraic equations and can easily be handled analytically. Thus we explicitly characterize the cross-regulation properties necessary for robustness and formulate the conditions providing this effect through the properties of the initial input data. It is shown that our formally derived conditions are satisfied by the previously published model solutions.

Keywords: Drosophila, gap genes, reaction-diffusion model, robustness

Procedia PDF Downloads 347
1021 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic

Authors: Diogen Babuc

Abstract:

The motivation for this work is the question of how many devote themselves to discovery in a world of science where much is discerned and revealed, but at the same time much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable and kept constant during the run of the algorithm. In correlation with the given key, the string is divided into several groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b+1, it returns to its initial value. The algorithm proceeds in this way until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings is achieved. The algorithm also works for a 100-character string. The character x is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than other methods in terms of execution time and storage space.
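One possible reading of the description can be sketched as follows (an interpretation, not the authors' code): the text is split into k-character substrings, each Caesar-shifted with a shift that starts at k, increments per substring, and resets to k once it exceeds b+1.

```python
import random
import string

LOWER = string.ascii_lowercase

def _shift(ch, k):
    # Caesar shift of a single lowercase letter; other characters pass through
    if ch in LOWER:
        return LOWER[(LOWER.index(ch) + k) % 26]
    return ch

def encipher(text, k, a):
    """Split text into substrings of length k and Caesar-shift each one,
    incrementing the shift by 1 per substring and resetting it to k once it
    exceeds b + 1, where b = a + 3 (as described in the abstract)."""
    b = a + 3
    shift, out = k, []
    for start in range(0, len(text), k):
        out.append(''.join(_shift(c, shift) for c in text[start:start + k]))
        shift += 1
        if shift > b + 1:
            shift = k
    return ''.join(out)

def decipher(text, k, a):
    # Mirror of encipher: apply the same shift schedule with negated shifts
    b = a + 3
    shift, out = k, []
    for start in range(0, len(text), k):
        out.append(''.join(_shift(c, -shift) for c in text[start:start + k]))
        shift += 1
        if shift > b + 1:
            shift = k
    return ''.join(out)

a = 2
k = random.randint(a, a + 3)          # key chosen at random between a and b
msg = "attackatdawn"
assert decipher(encipher(msg, k, a), k, a) == msg
```

The varying per-substring shift is what makes the scheme polyalphabetic rather than a plain Caesar cipher.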

Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison

Procedia PDF Downloads 89
1020 An Image Enhancement Method Based on Curvelet Transform for CBCT Images

Authors: Shahriar Farzam, Maryam Rastgarpour

Abstract:

Image denoising plays an extremely important role in digital image processing, and curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies the FDCT through the unequally spaced fast Fourier transform to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal-to-Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
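Curvelet codecs are not part of standard numerical libraries, so the coefficient-thresholding idea can be sketched with a plain 2-D FFT standing in for the FDCT-USFFT (a simplification: the FFT lacks the curvelet's orientation and scale localization, but the threshold-and-reconstruct pattern is the same).

```python
import numpy as np

def threshold_denoise(img, thresh):
    """Transform-domain hard thresholding: a 2-D FFT stands in here for the
    FDCT-USFFT, since curvelet codecs are not in standard libraries.
    Coefficients with magnitude below `thresh` are zeroed, then the image
    is reconstructed by the inverse transform."""
    coeffs = np.fft.fft2(img)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return np.real(np.fft.ifft2(coeffs))

# Smooth synthetic "image" plus noise: thresholding keeps the few large
# coefficients carrying the structure and discards the spread-out noise
n = 64
yy, xx = np.mgrid[0:n, 0:n]
clean = np.sin(2 * np.pi * xx / n) + np.cos(2 * np.pi * yy / n)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal((n, n))

denoised = threshold_denoise(noisy, thresh=100.0)
err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
```

With a curvelet transform in place of the FFT, the same thresholding step additionally preserves edges and curved features, which is the motivation for using it on CBCT images.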

Keywords: curvelet transform, CBCT, image enhancement, image denoising

Procedia PDF Downloads 279
1019 Implant Operation Guiding Device for Dental Surgeons

Authors: Daniel Hyun

Abstract:

Dental implants are one of the top three reasons dentists are sued for malpractice, typically over implant complications caused by the angle of the implant during surgery. At present, surgeons usually use a 3D-printed navigator that is customized for the patient's teeth; however, these cannot be reused for other patients and take time to produce. Therefore, a guiding device was made to assist the surgeon in implant operations. The surgeon inputs the objective of the operation, and the device constantly checks whether the surgery is heading towards the objective within the set range, informing the surgeon via an LED. The prototypes' consistency and accuracy were tested by examining the measurement graphs, the average standard deviation, and the average change of the calculated angles; the accuracy of performance was also acquired by running the device and checking the outputs. The first prototype used the accelerometer and gyroscope of an MPU6050 sensor on an Arduino, producing an unstable graph with a standard deviation of 0.0295, an average change of 0.25, and 66.6% performance accuracy. The second prototype used only the gyroscope and produced a stable graph with a standard deviation of 0.0062, an average change of 0.075, and 100% performance accuracy, indicating that the accelerometer degraded the functionality of the device. Using only the gyroscope allowed the orientations of separate axes to be measured without affecting each other, and increased the stability and accuracy of the measurements.
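The core check the device performs can be sketched as a simple tolerance comparison (names, tolerance, and LED colors are illustrative assumptions, not the device's firmware):

```python
def led_state(target_angle, measured_angle, tolerance=2.0):
    """Return the LED indication for the surgeon: 'green' while the handpiece
    orientation stays within `tolerance` degrees of the planned angle,
    'red' otherwise. Angles are in degrees; all names are illustrative."""
    return "green" if abs(measured_angle - target_angle) <= tolerance else "red"

# Simulated gyroscope readings drifting away from a 30-degree objective
readings = [30.1, 30.8, 31.9, 33.5]
states = [led_state(30.0, r) for r in readings]
```

On the actual device this comparison would run in the microcontroller loop against the integrated gyroscope angle, driving the LED pin directly.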

Keywords: implant, guide, accelerometer, gyroscope, handpiece

Procedia PDF Downloads 22
1018 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.

Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, Variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset

Procedia PDF Downloads 337
1017 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with direct clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community remains, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction together with clear interpretability that can be easily understood by medical professionals. The model uses a hierarchical attention structure that matches the medical history data structure naturally and reflects the member's encounter (date of service) sequence. The attention structure consists of three levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years of medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3, and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. For example: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests, and medications. The model predicts member A has a high risk of CKD3, with the following strongly contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low 'Glomerular filtration rate' kidney-function tests. Among the abnormal lab tests, more recent results contributed more to the prediction.
The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past three years and was predicted to have a low risk of CKD3, because the model did not identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its rawest format, without any further aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance for error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep learning model that uses a 3-level attention structure, sources both EMR and claims data covering all four types of medical data, applies to the entire Medicare population of a large insurance company, and, more importantly, directly generates model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
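The interpretability mechanism rests on attention weights. A generic single-level sketch (not the paper's 3-level architecture) shows where the per-event contributions come from: the softmax weights over a member's encounter embeddings are exactly what gets reported as evidence.

```python
import numpy as np

def attention_pool(E, w):
    """Soft attention over a sequence of embeddings E (steps x dim): scores
    come from a scoring vector w, a softmax turns them into weights, and the
    output is the weighted sum. The paper stacks three such levels (code
    types, encounters, codes); this is one generic level for illustration."""
    scores = E @ w
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ E

rng = np.random.default_rng(1)
E = rng.standard_normal((6, 16))   # e.g. 6 medical encounters, 16-dim each
w = rng.standard_normal(16)
weights, pooled = attention_pool(E, w)
# `weights` is the interpretability hook: it says which encounter mattered
```

Because the weights sum to one and attach to identifiable events, they can be presented alongside dates of service without any post-hoc explanation method.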

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 79
1016 Fire Characteristics of Commercial Flame Retardant Polycarbonate under Different Oxygen Concentrations: Ignition Time and Heat Blockage

Authors: Xuelin Zhang, Shouxiang Lu, Changhai Li

Abstract:

Commercial flame retardant polycarbonate samples of different thicknesses, used as the main interior carriage material of high-speed trains, were investigated in the Fire Propagation Apparatus under different external heat fluxes and oxygen concentrations from 12% to 40%, in order to study the fire characteristics and quantitatively analyze the ignition time, mass loss rate, and heat blockage. The additives in the flame retardant polycarbonate were intumescent, and the samples maintained a steady height before ignition when heated. The results showed that the transformed ignition time (1/t_ig)ⁿ increased linearly with external heat flux under each oxygen concentration after deducting the heat blockage due to pyrolysis products; that the mass loss rate varied linearly with external heat flux, with the slope of the fitted line decreasing as oxygen concentration increased; and that the heat blockage, which was independent of external heat flux, rose with increasing oxygen concentration. The acquired data, used as input to fire simulation models, are essential for evaluating the fire risk of commercial flame retardant polycarbonate.
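The reported linear relation between transformed ignition time and external heat flux can be sketched with an ordinary least-squares fit. The data below is illustrative (not the paper's measurements), and n = 0.5 is the commonly assumed exponent for thermally thick solids:

```python
import numpy as np

# Illustrative (not measured) data: ignition time versus external heat flux
flux = np.array([25.0, 35.0, 50.0, 60.0, 75.0])    # external heat flux, kW/m^2
t_ig = np.array([210.0, 120.0, 65.0, 48.0, 32.0])  # ignition time, s
n = 0.5                                            # assumed thermally thick

# Fit the transformed ignition time (1/t_ig)^n as a linear function of flux
y = (1.0 / t_ig) ** n
slope, intercept = np.polyfit(flux, y, 1)
predicted = slope * flux + intercept
r2 = 1.0 - np.sum((y - predicted) ** 2) / np.sum((y - y.mean()) ** 2)
```

In the paper's analysis, the heat blockage term is subtracted from the external flux before this fit, which is what straightens the line across oxygen concentrations.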

Keywords: ignition time, mass loss rate, heat blockage, fire characteristic

Procedia PDF Downloads 271
1015 Educational Experiences in Engineering in the COVID Era and Their Comparative Analysis, Spain, March to June 2020

Authors: Borja Bordel, Ramón Alcarria, Marina Pérez

Abstract:

In March 2020, an unexpected health crisis caused by COVID-19 was declared in Spain. All of a sudden, all degrees, classes, evaluation tests, and projects had to be transformed into online activities. However, the chaotic situation generated by such a complex operation, executed without any well-established procedure, led to very different experiences and, finally, results. In this paper, we describe three experiences at two different universities in Madrid: on the one hand, the Technical University of Madrid, a public university with little experience in online education; on the other hand, Alfonso X el Sabio University, a private university with more than five years of experience in online teaching. All analyzed subjects were related to computer engineering. Professors and students answered a survey, and personal interviews were also carried out. Besides, the professors' workload and the students' academic results were compared. From the comparative analysis of all these experiences, we extract the most successful strategies, methodologies, and activities. The recommendations in this paper will be useful for courses in the coming months, while the health situation still affects educational organizations, and will at the same time serve as input for the upcoming digitalization of higher education.

Keywords: educational experience, online education, higher education digitalization, COVID, Spain

Procedia PDF Downloads 122
1014 Urban Resilience and Its Prioritised Components: Analysis of Industrial Township Greater Noida

Authors: N. Mehrotra, V. Ahuja, N. Sridharan

Abstract:

Resilience is an all-hazard, proactive approach that requires multidisciplinary input into the interrelated variables of the city system. This research aims to identify and operationalize indicators for assessment in the domains of institutions, infrastructure, and knowledge, all three operating in task-oriented community networks. This paper gives a brief account of the methodology developed for the assessment of urban resilience and its prioritised components for a target population within a newly planned urban complex integrating Surajpur and Kasna villages as nodes. People's perception of urban resilience has been examined by conducting a questionnaire survey among the target population of Greater Noida. As defined by experts, the urban resilience of a place is considered to be both a product and a process of operation to regain normalcy after a disturbance of a certain level. Based on this methodology, six indicators are identified that contribute to the perception of urban resilience, both in the process of evolution and as an outcome. The relative significance of the six R's has also been identified. The dependency factors of the various resilience indicators are explored in this paper, which helps in generating new perspectives for future research in disaster management. Based on the stated factors, this methodology can be applied to assess the urban resilience requirements of a well-planned town, which is not an end in itself but calls for new beginnings.

Keywords: disaster, resilience, system, urban

Procedia PDF Downloads 438
1013 Theoretical Analysis of the Solid State and Optical Characteristics of Calcium Sulphide Thin Film

Authors: Emmanuel Ifeanyi Ugwu

Abstract:

Calcium sulphide, one of the chalcogenide group of thin films, is analyzed in this work using a theoretical approach in which a scalar wave is propagated through the thin-film medium deposited on a glass substrate, under the assumption that the dielectric medium has a homogeneous reference dielectric constant term and a perturbed dielectric function representing the deposited thin film on the surface of the glass substrate. These were substituted into a defined scalar wave equation, which was solved by first transforming it into a Volterra equation of the second kind and applying the method of separation of variables; subsequently, a Green's function technique was introduced to obtain a model equation for the wave propagating through the thin film. This model was then used to compute the propagated field for input wavelengths in the UV, visible, and near-infrared regions, considering the influence of the dielectric constants of the thin film on the propagating field. The results obtained were used in turn to compute the band gaps and the solid state and optical properties of the thin film.

Keywords: scalar wave, dielectric constant, calcium sulphide, solid state, optical properties

Procedia PDF Downloads 89
1012 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph

Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images, has been popularly applied to classification and prediction problems in various fields. In this study, we apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to apply a CNN as a binary classifier that predicts the stock market direction (up or down) using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at a chart and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into windows of 5, 10, 15, and 20 days. In step 2, it creates graphs for each interval. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, the hyperparameters of the trained model are optimized using the validation dataset. To validate our model, we will apply it to the prediction of the KOSPI200 over 1,986 days in eight years (from 2009 to 2016). The experimental dataset will include 14 technical indicators, such as CCI, Momentum, and ROC, as well as the daily closing price of the KOSPI200 of the Korean stock market.
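The first step of the proposed pipeline, slicing the price series into fixed-length windows and labelling each window by the direction of the next day's close, can be sketched as follows. The price series here is synthetic and the labelling rule is an assumption, since the abstract does not specify how up/down is defined:

```python
import numpy as np

# Hypothetical closing-price series standing in for the KOSPI200 data.
prices = np.linspace(100.0, 120.0, 60) + np.sin(np.arange(60))

def make_windows(series, length):
    """Slice the series into fixed-length windows; the label is whether
    the next day's close is above the window's last close (up=1, down=0)."""
    windows, labels = [], []
    for start in range(len(series) - length):
        w = series[start:start + length]
        windows.append(w)
        labels.append(int(series[start + length] > w[-1]))
    return np.array(windows), np.array(labels)

# The paper trains one classifier per interval length (5, 10, 15, 20 days).
for length in (5, 10, 15, 20):
    X, y = make_windows(prices, length)
```

Each window would then be rendered as a chart image (step 2) and fed to the CNN classifier (step 3).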

Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction

Procedia PDF Downloads 412
1011 Law and Its Implementation and Consequences in Pakistan

Authors: Amir Shafiq, Asif Shahzad, Shabbar Mehmood, Muhammad Saeed, Hamid Mustafa

Abstract:

Legislation comprises the laws or statutes enacted by a sovereign authority, which can generally be enforced by the courts of law from time to time to accomplish their objectives. Historically speaking, upon the emergence of Pakistan in 1947, the intact laws of the British Raj remained effective after being vetted against Islamic ideology. Thus, there was an intention to begin the statute book afresh for Pakistan's legal history. In consequence thereof, the process of developing detailed plans, procedures, and mechanisms to ensure that legislative and regulatory requirements are achieved began, keeping in view the cultural values and the local customs. This article is a contribution to the enduring discussion about implementing the rule of law in Pakistan, whereas the rule of law requires the harmony of laws, which mostly takes the form of codified state laws. Pakistan is a legally plural society, where completely different and independent systems of law, such as Mohammadan law, state law, and traditional law, coexist. The prevailing practiced law in Pakistan is actually the traditional law, though this law is not acknowledged by the state. This has caused the main problem of the rule of law: the difference between the state laws and the cultural values. These values, customs, and so-called traditional laws are the main obstacle to enforcing the state law in true letter and spirit, which has caused dissatisfaction among the masses and distrust of the judicial system of the country.

Keywords: consequences, implement, law, Pakistan

Procedia PDF Downloads 419
1010 Estimations of Spectral Dependence of Tropospheric Aerosol Single Scattering Albedo in Sukhothai, Thailand

Authors: Siriluk Ruangrungrote

Abstract:

Analyses of available data from MFR-7 measurements were performed and discussed in a study of tropospheric aerosol and its consequences in Thailand, since the aerosol single scattering albedo (ω) is one of the most important parameters for determining the aerosol effect on radiative forcing. Here, ω was estimated directly as the ratio of the aerosol scattering optical depth to the aerosol extinction optical depth (τ_scat/τ_ext), without using any aerosol computer code models. This has the benefit of eliminating the uncertainty caused by modeling assumptions and by the estimation of actual aerosol input data. Diurnal ω on five cloudless days in winter and early summer at five distinct wavelengths of 415, 500, 615, 673, and 870 nm was investigated, taking into consideration Rayleigh scattering and the atmospheric column NO2 and ozone contents. In addition, the tendency of the spectral dependence of ω representing the two seasons was observed. The character of the spectral results reveals that during wintertime, the atmosphere of the inland rural vicinity was, for the period of measurement, possibly dominated by a smaller amount of soil dust aerosol loading than in early summer. Hence, the major aerosol loading, particularly in summer, was a mixture of both soil dust and biomass burning aerosols.
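The model-free estimate described above is arithmetically simple: at each wavelength, divide the scattering optical depth by the extinction optical depth. A minimal sketch, with hypothetical optical depths (the real values come from the MFR-7 retrievals):

```python
import numpy as np

# The five MFR-7 channels (nm) used in the study.
wavelengths = np.array([415, 500, 615, 673, 870])

# Hypothetical optical depths standing in for the retrieved values.
tau_scat = np.array([0.21, 0.18, 0.14, 0.12, 0.08])  # aerosol scattering optical depth
tau_ext  = np.array([0.25, 0.21, 0.17, 0.15, 0.11])  # aerosol extinction optical depth

# Single scattering albedo as the direct ratio -- no aerosol model required.
omega = tau_scat / tau_ext
```

Since scattering can never exceed total extinction, every ω value must lie between 0 and 1; the spectral slope of ω across the five channels is what distinguishes dust-dominated from biomass-burning-dominated loadings.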

Keywords: aerosol scattering optical depth, aerosol extinction optical depth, biomass burning aerosol, soil dust aerosol

Procedia PDF Downloads 388
1009 Facility Data Model as Integration and Interoperability Platform

Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes

Abstract:

Emerging Semantic Web technologies can be seen as the next step in evolution of the intelligent facility management systems. Particularly, this considers increased usage of open source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, providing the comprehensive integration and interoperability platform in from of the facility data model is a prerequisite. In this paper, one of the possible modelling approaches to provide such integrative facility data model which was based on the ontology modelling concept was presented. Complete ontology development process, starting from the input data acquisition, ontology concepts definition and finally ontology concepts population, was described. At the beginning, the core facility ontology was developed representing the generic facility infrastructure comprised of the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, first extension and then population of the core facility ontology was performed. For the development of the full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way how these ontology models supported the integration and interoperability of the overall airport energy management system was analyzed as well.
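The three-stage process (core ontology, extension, population) can be illustrated with plain subject-predicate-object triples. All concept and instance names below are illustrative assumptions, not the paper's actual ontology vocabulary:

```python
# Core facility ontology: generic concepts common to any facility.
core = {
    ("Facility", "hasPart", "Zone"),
    ("Zone", "hasDevice", "Device"),
    ("Device", "hasProperty", "EnergyConsumption"),
}

# Extension step: specialise the core concepts for an airport infrastructure.
extension = {
    ("Airport", "subClassOf", "Facility"),
    ("Terminal", "subClassOf", "Zone"),
    ("HVACUnit", "subClassOf", "Device"),
}

# Population step: concrete instances of the extended concepts.
instances = {
    ("Malpensa", "instanceOf", "Airport"),
    ("Terminal1", "instanceOf", "Terminal"),
    ("AHU-07", "instanceOf", "HVACUnit"),
}

# The full facility data model is the union of all three layers.
ontology = core | extension | instances
```

In practice such a model would be expressed in a standard format such as RDF/OWL, so that heterogeneous energy management subsystems can query it through a shared vocabulary.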

Keywords: airport ontology, energy management, facility data model, ontology modeling

Procedia PDF Downloads 426
1008 Bacteriological Safety of Sachet Drinking Water Sold in Benin City, Nigeria

Authors: Stephen Olusanmi Akintayo

Abstract:

Access to safe drinking water remains a major challenge in Nigeria, and where water is available, its quality is often in doubt. An alternative to the inadequate supply of clean drinking water has been found in treated drinking water packaged in electrically heat-sealed nylon, commonly referred to as "sachet water". "Sachet water" is common in Nigeria, as the selling price is within the reach of members of the low socio-economic class, and setting up a production unit does not require huge capital input. The bacteriological quality of selected "sachet water" stored at room temperature over a period of 56 days was determined to evaluate the safety of the sachet drinking water. A test for the detection of coliform bacteria was performed, and the result showed no coliform bacteria, indicating the absence of fecal contamination throughout the 56 days. Heterotrophic plate counts (HPC) were done at 14-day intervals, and the samples showed HPC between 0 cfu/mL and 64 cfu/mL. The highest count was observed on day 1. The count decreased between days 1 and 28, while no growth was observed between days 42 and 56. The decrease in HPC suggested the presence of residual disinfectant in the water. The organisms isolated were identified as Staphylococcus epidermidis and S. aureus. The presence of these microorganisms in sachet water is indicative of contamination during processing and handling.

Keywords: coliform, heterotrophic plate count, sachet water, Staphyloccocus aureus, Staphyloccocus epidermidis

Procedia PDF Downloads 317
1007 The Use of Building Energy Simulation Software in Case Studies: A Literature Review

Authors: Arman Ameen, Mathias Cehlin

Abstract:

The use of Building Energy Simulation (BES) software has increased in the last two decades, in parallel with the development of increased computing power and easy-to-use software applications. This type of software is primarily used to simulate the energy use and the indoor environment of a building. The rapid development of these types of software has raised their level of user-friendliness, provided better parameter input options, and increased the possibilities for analysis, both for a single building component and for an entire building. This, in turn, has led many researchers to utilize BES software in their research to various degrees. The aim of this paper is to carry out a literature review concerning the use of the BES software IDA Indoor Climate and Energy (IDA ICE) in the scientific community. The focus of this paper is specifically the use of the software for whole-building energy simulation: the number and types of articles and their publication dates, the area of application, the types of parameters used, the location of the studied building, the type of building, the type of analysis, and the solution methodology. Another aspect examined, which is of great interest, is the method of validation of the simulation results. The results show that there is an upward trend in the use of IDA ICE and that researchers use the software in their research to various degrees, depending on the case and the aim of their research. The level of validation of the simulations carried out in these articles varies depending on the type of article and the type of analysis.

Keywords: building simulation, IDA ICE, literature review, validation

Procedia PDF Downloads 118
1006 Reformulation of Theory of Critical Distances to Predict the Strength of Notched Plain Concrete Beams under Quasi Static Loading

Authors: Radhika V., J. M. Chandra Kishen

Abstract:

The theory of critical distances (TCD), owing to its appealing characteristics, has been successfully used in the past to predict the strength of brittle as well as ductile materials weakened by the presence of stress risers, under both static and fatigue loading. Utilising most of the TCD's unique features, this paper summarises an attempt to reformulate the point method of the TCD to predict the strength of notched plain concrete beams under mode I quasi-static loading. A zone of microcracks, which is responsible for the non-linearity of concrete, is taken into account through the concept of an effective elastic crack. An attempt is also made to correlate the value of the material characteristic length required for the application of the TCD with the maximum aggregate size in the concrete mix, eliminating the need for extensive experimentation prior to the application of the TCD. The devised reformulation and the proposed power-law-based relationship are found to yield satisfactory predictions of the static strength of notched plain concrete beams, with the geometric dimensions of the beam, the tensile strength, and the maximum aggregate size of the concrete mix being the only required input parameters.
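The point method at the heart of the TCD can be sketched numerically: failure is predicted when the linear-elastic stress evaluated at a fixed distance L/2 ahead of the stress riser reaches the inherent material strength. The input values and the power-law constants tying L to the maximum aggregate size are placeholders, not the paper's calibrated values:

```python
import math

# Illustrative inputs (hypothetical, not the paper's data).
f_t = 3.5      # tensile strength of the concrete, MPa
d_max = 20.0   # maximum aggregate size, mm

# Assumed power law linking the characteristic length to aggregate size:
# L = a * d_max**b, with placeholder constants a and b.
a, b = 1.0, 1.0
L = a * d_max ** b  # characteristic length, mm

def stress_at(r, K_I):
    """Linear-elastic stress ahead of an effective elastic crack tip
    (leading singular term of the mode I field)."""
    return K_I / math.sqrt(2.0 * math.pi * r)

# Point method: failure occurs when the stress at r = L/2 equals the
# inherent strength, here taken as the tensile strength f_t.  Solving
# for the stress intensity factor at failure:
K_fail = f_t * math.sqrt(2.0 * math.pi * (L / 2.0))
```

Given the beam geometry, K_fail can then be converted back to a predicted failure load, which is how the method yields static strength predictions from only geometry, tensile strength, and aggregate size.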

Keywords: characteristic length, effective elastic crack, inherent material strength, mode I loading, theory of critical distances

Procedia PDF Downloads 84
1005 Image Segmentation Techniques: Review

Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo

Abstract:

Image segmentation is the process of dividing an image into several sections, such as the object's background and foreground. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has been devoted to algorithms for color images. Most image segmentation algorithms or techniques vary based on the input data and the application, and nearly all of them are unsuitable for noisy environments. Much of the work that has been done uses the Markov Random Field (MRF), which involves heavy computation but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as simplifying the processing of an image, interpreting the contents of an image, and easing the analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed over the past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, thresholding techniques, and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications of image segmentation and potential future developments. This review concludes that no technique is perfectly suitable for the segmentation of all the different types of images, but that the use of hybrid techniques yields more accurate and efficient results.
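Of the families surveyed above, thresholding is the simplest to demonstrate. A minimal sketch of Otsu-style thresholding (one of the classical thresholding techniques, chosen here for brevity) on a synthetic gray-scale image:

```python
import numpy as np

def otsu_threshold(img):
    """Thresholding-based segmentation: pick the gray level that maximises
    the between-class variance, splitting foreground from background."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic gray-scale image: dark background with a bright square object.
img = np.full((64, 64), 40, dtype=np.uint8)
img[20:44, 20:44] = 200
mask = img >= otsu_threshold(img)  # True = foreground pixels
```

As the review notes, a single global threshold fails on noisy or unevenly lit images, which is why hybrid approaches combining thresholding with region- or learning-based methods tend to perform better.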

Keywords: clustering-based, convolution-network, edge-based, region-growing

Procedia PDF Downloads 69
1004 Disaggregation of the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High-resolution rain data are very important for supplying the input of hydrological models. Among the models for high-resolution rainfall data generation, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly, and 10-minute) from daily data for a record period of around 20 years. The process was carried out with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rain datasets were evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), standard statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated rainfall is concentrated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passed the K-S test criterion, while for the hourly and 10-minute datasets the P-value had to be employed to show that their differences were reasonable. The results are encouraging, considering the overestimation in the generated high-resolution rainfall data.
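The key property of any such disaggregation, whether by random cascade or by method of fragments, is that the sub-daily values must sum back to the daily total. A minimal method-of-fragments sketch, with a synthetic fragment pool standing in for observed sub-daily patterns (the real DiMoN tool is considerably more elaborate):

```python
import random

def disaggregate(daily_mm, fragments):
    """Method-of-fragments sketch: distribute a daily total over six
    4-hour intervals using a fragment vector (relative weights) drawn
    at random from a pool of sub-daily patterns."""
    weights = random.choice(fragments)
    total = sum(weights)
    return [daily_mm * w / total for w in weights]

random.seed(1)

# Hypothetical fragment pool standing in for observed 4-hourly patterns.
pool = [
    [0, 0, 3, 5, 1, 0],   # afternoon storm
    [1, 1, 1, 1, 1, 1],   # uniform drizzle
    [0, 6, 2, 0, 0, 0],   # morning burst
]

series_4h = disaggregate(12.0, pool)
```

Mass conservation (the 4-hourly values summing to the daily 12.0 mm) mirrors what the paper checks on wet days; the statistical realism of the resulting intensities is then what the K-S test and exceedance probability evaluate.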

Keywords: DiMoN Tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall

Procedia PDF Downloads 189