Search results for: algorithms and data structure
28789 Multivariate Statistical Process Monitoring of Base Metal Flotation Plant Using Dissimilarity Scale-Based Singular Spectrum Analysis
Authors: Syamala Krishnannair
Abstract:
A multivariate statistical process monitoring methodology using dissimilarity scale-based singular spectrum analysis (SSA) is proposed for the detection and diagnosis of process faults in a base metal flotation plant. Process faults are detected based on the multi-level decomposition of process signals by SSA using the dissimilarity structure of the process data and the subsequent monitoring of the multiscale signals using a unified monitoring index that combines T² with SPE. Contribution plots are used to identify the root causes of the process faults. The overall results indicated that the proposed technique outperformed the conventional multivariate techniques in the detection and diagnosis of process faults in the flotation plant. Keywords: fault detection, fault diagnosis, process monitoring, dissimilarity scale
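The T² and SPE statistics combined in the unified index above are standard PCA-based monitoring quantities. A minimal sketch of how they are computed, assuming a plain PCA model fitted on normal operating data (not the dissimilarity scale-based SSA decomposition the paper actually proposes):

```python
import numpy as np

def fit_pca_monitor(X, n_components):
    """Fit a PCA monitoring model on normal operating data X (samples x variables)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    P = eigvecs[:, order[:n_components]]            # retained loadings
    lam = eigvals[order[:n_components]]             # retained variances
    return mu, P, lam

def t2_spe(x, mu, P, lam):
    """Hotelling's T^2 and SPE (Q) statistics for one new sample x."""
    xc = x - mu
    t = P.T @ xc                                    # scores in the model subspace
    t2 = float(np.sum(t**2 / lam))                  # T^2: variation inside the subspace
    resid = xc - P @ t                              # residual outside the subspace
    spe = float(resid @ resid)                      # SPE: squared prediction error
    return t2, spe
```

In practice, a fault alarm is raised when either statistic exceeds a control limit estimated from the normal data.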
Procedia PDF Downloads 212
28788 Enhancing Academic and Social Skills of Elementary School Students with Autism Spectrum Disorder by an Intensive and Comprehensive Teaching Program
Authors: Piyawan Srisuruk, Janya Boonmeeprasert, Romwarin Gamlunglert, Benjamaporn Choikhruea, Ornjira Jaraepram, Jarin Boonsuchat, Sakdadech Singkibud, Kusalaporn Chaiudomsom, Chanatiporn Chonprai, Pornchanaka Tana, Suchat Paholpak
Abstract:
Objective: To develop an intensive and comprehensive program (ICP) for the inclusive class teacher (ICPICT) to teach elementary students (ES) with ASD in order to enhance the students’ academic and social skills (ASS), and to study the effect of the teaching program. Methods: The purposive sample included 15 Khon Kaen inclusive class teachers and their 15 elementary students. All the students were diagnosed by a child and adolescent psychiatrist with DSM-5 level 1 ASD. The study tools included: 1) an ICP to teach teachers about ASD, a teaching method to enhance the academic and social skills of ES with ASD, and an assessment tool to assess the teachers’ knowledge before and after the ICP; 2) an ICPICT to teach ES with ASD to enhance their ASS. The program was taught in 10 sessions of 3 hours each, following a defined teaching structure. Teaching media included pictures, storytelling, songs, and plays. The authors taught and demonstrated to the participant teachers how to teach with the ICPICT until the participants could display the correct teaching method; the teachers then taught the ICPICT at school by themselves; 3) an assessment tool to assess the students’ ASS before and after the completion of the study. The ICP to teach the teachers, the ICPICT, and the relevant assessment tools were developed by the authors and adjusted until a consensus of 3 experts in curricula for teaching children with ASD agreed that they were appropriate for research. The data were analyzed by descriptive and analytic statistics via SPSS version 26. Results: After the training, the teachers’ mean score of knowledge of ASD and of how to teach ASS to ES with ASD increased, though not with statistical significance (p = 0.13).
Teaching ES with ASD with the ICPICT increased the mean scores of the students’ skills in learning and expressing social emotions, relationships with friends, transitioning, and academic function by 3.33, 2.27, 2.94, and 3.00 points (full scores were 18, 12, 15 and 12; paired t-test p = 0.007, 0.013, 0.028 and 0.003, respectively). Conclusion: A program that teaches academic and social skills simultaneously in an intensive and comprehensive structure could enhance both the academic and social skills of elementary students with ASD. Keywords: academic and social skills, students with autism, intensive and comprehensive, teaching program
Procedia PDF Downloads 67
28787 A Metaheuristic for the Layout and Scheduling Problem in a Job Shop Environment
Authors: Hernández Eva Selene, Reyna Mary Carmen, Rivera Héctor, Barragán Irving
Abstract:
We propose an approach that jointly addresses the layout of a facility and the scheduling of a sequence of jobs. In real production these two problems are interrelated, yet they are treated separately in the literature. Our approach is an extension of the job shop problem with transportation delay, where the location of each machine is selected among possible sites. The model minimizes the makespan using the shortest processing time rule with two algorithms: the first considers all permutations of machine locations, while the second uses a heuristic to select specific permutations, reducing computational time. Several instances are solved and compared with results from the literature. Keywords: layout problem, job shop scheduling problem, concurrent scheduling and layout problem, metaheuristic
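The shortest-processing-time dispatching rule mentioned above can be sketched as a simple non-delay scheduler. This is an illustrative simplification that ignores the paper's transportation delays and machine-location permutations:

```python
def spt_schedule(jobs):
    """Non-delay job shop schedule using the shortest-processing-time rule.

    jobs: list of jobs, each a list of (machine, processing_time) operations.
    Returns the makespan. A hypothetical simplification: no transportation
    delays, machine locations fixed.
    """
    next_op = [0] * len(jobs)           # index of each job's next operation
    job_ready = [0.0] * len(jobs)       # time each job's next op may start
    mach_ready = {}                     # time each machine becomes free
    makespan = 0.0
    remaining = sum(len(j) for j in jobs)
    while remaining:
        # Candidate operations: the next unscheduled op of every job
        candidates = []
        for j, job in enumerate(jobs):
            if next_op[j] < len(job):
                m, p = job[next_op[j]]
                start = max(job_ready[j], mach_ready.get(m, 0.0))
                candidates.append((start, p, j, m))
        # Among ops that can start earliest, break ties by shortest processing time
        t0 = min(c[0] for c in candidates)
        start, p, j, m = min((c for c in candidates if c[0] == t0),
                             key=lambda c: c[1])
        finish = start + p
        job_ready[j] = finish
        mach_ready[m] = finish
        next_op[j] += 1
        remaining -= 1
        makespan = max(makespan, finish)
    return makespan
```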
Procedia PDF Downloads 613
28786 Assessing the Ecological Status of the Moroccan Mediterranean Sea: An Ecopath Modeling Study
Authors: Salma Aboussalam, Karima Khalil, Khalid Elkalay
Abstract:
In order to understand the structure, functioning, and current state of the Moroccan Mediterranean Sea ecosystem, an Ecopath mass balance model was applied. The model was based on 31 functional groups, comprising 21 fish species, 7 invertebrates, 2 primary producers, and one detritus group. The trophic interactions between these groups were analyzed, and the system's average trophic transfer efficiency was found to be 23%. The ratio of total primary production to total respiration was greater than 1, indicating that the system produces more energy than it respires. The ecosystem was found to have high respiration and consumption flows, and indicators of stability and development showed low values for the Finn cycling index (13.97), system omnivory index (0.18), and average Finn path length (3.09), indicating that the ecosystem is disturbed and has a linear rather than web-like trophic structure. Keystone species were identified using the keystone index and mixed trophic impact analysis, with other demersal invertebrates, zooplankton, and cephalopods found to have a significant impact on other groups. Keywords: ecopath, food web, trophic flux, moroccan mediterranean sea
Procedia PDF Downloads 83
28785 A Survey of Grammar-Based Genetic Programming and Applications
Authors: Matthew T. Wilson
Abstract:
This paper covers a selection of research utilizing grammar-based genetic programming, and illustrates how context-free grammar can be used to constrain genetic programming. It focuses heavily on grammatical evolution, one of the most popular variants of grammar-based genetic programming, and the way its operators and terminals are specialized and modified from those in genetic programming. A variety of implementations of grammatical evolution for general use are covered, as well as research each focused on using grammatical evolution or grammar-based genetic programming on a single application, or to solve a specific problem, including some of the classically considered genetic programming problems, such as the Santa Fe Trail. Keywords: context-free grammar, genetic algorithms, genetic programming, grammatical evolution
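The grammatical evolution variant the survey focuses on can be illustrated with a minimal genotype-to-phenotype decoder: integer codons choose productions of a context-free grammar via the mod rule. The toy grammar and the wrapping limit below are illustrative, not from the survey:

```python
# Toy expression grammar: a non-terminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def map_genotype(codons, start="<expr>", max_wraps=2):
    """Decode a list of integer codons into a string via the mod rule."""
    seq = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in seq):
        if i == len(codons):               # ran out of codons: wrap around
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None                # mapping failed, non-terminals remain
        # Expand the leftmost non-terminal
        for pos, sym in enumerate(seq):
            if sym in GRAMMAR:
                rules = GRAMMAR[sym]
                choice = codons[i] % len(rules)    # the mod rule
                seq[pos:pos + 1] = rules[choice]
                i += 1
                break
    return "".join(seq)
```

For example, the codon list `[0, 1, 0, 2]` expands `<expr>` to `<expr><op><expr>`, then picks `x`, `+`, and `1`, yielding the phenotype `x+1`.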
Procedia PDF Downloads 191
28784 Enhancement in Seebeck Coefficient of MBE Grown Un-Doped ZnO by Thermal Annealing
Authors: M. Asghar, K. Mahmood, F. Malik, Lu Na, Y-H Xie, Yasin A. Raja, I. Ferguson
Abstract:
In this paper, we report an enhancement in the Seebeck coefficient of un-doped zinc oxide (ZnO) grown by molecular beam epitaxy (MBE) on a silicon (001) substrate, achieved by an annealing treatment. The grown ZnO thin films were annealed in an oxygen environment from 500°C to 800°C in steps of 100°C, for one hour at each temperature. Room-temperature Seebeck measurements showed that the Seebeck coefficient and power factor increased from 222 to 510 µV/K and from 8.8×10^-6 to 2.6×10^-4 Wm^-1K^-2, respectively, as the annealing temperature increased from 500°C to 800°C. To the best of our knowledge, this is the highest value of the Seebeck coefficient reported for un-doped MBE-grown ZnO. This observation was related to the improvement of the crystal structure of the grown films with annealing temperature. X-ray diffraction (XRD) results demonstrated that the full width at half maximum (FWHM) of the ZnO (002) plane decreased and the crystallite size increased as the annealing temperature increased. A photoluminescence study revealed that the intensity of the band edge emission increased and the defect emission decreased with annealing temperature, because the density of oxygen-vacancy-related donor defects decreased with annealing temperature. This argument was further supported by Hall measurements, which showed a decreasing trend of carrier concentration with annealing temperature. Keywords: ZnO, MBE, thermoelectric properties, annealing temperature, crystal structure
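The reported power factor follows from the standard thermoelectric relation PF = S²σ. A small sketch under that assumption; note the electrical conductivity is not given in the abstract, so it is back-calculated here purely for illustration:

```python
def power_factor(seebeck_V_per_K, conductivity_S_per_m):
    """Thermoelectric power factor PF = S^2 * sigma, in W m^-1 K^-2."""
    return seebeck_V_per_K**2 * conductivity_S_per_m

# Illustration with the abstract's 800°C values: S = 510 uV/K, PF = 2.6e-4.
# The conductivity below is an inferred value, not a reported measurement.
sigma = 2.6e-4 / (510e-6)**2    # ~1.0e3 S/m
```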
Procedia PDF Downloads 448
28783 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel
Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury
Abstract:
Austenitic stainless steels are widely used and give a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α’) or the hexagonal close-packed (ε) martensite may occur, leading to an enhancement in mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase to form the product HCP and BCC martensite phases separately, obeying their respective orientation relationships. The automated reconstruction of the parent phase orientation from the EBSD data of the product phase orientation was done using MATLAB and the TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to the proper reconstruction of the pre-transformation phase orientation data and thus to its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α’ martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures and ODFs were compared. Similar results were observed for all samples. The comparison gives an idea for estimating the correct sequence of the transformation, i.e. γ → ε → α’ or γ → α’, during deformation of AISI 304 austenitic stainless steel. Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation
Procedia PDF Downloads 493
28782 Prediction of Solanum Lycopersicum Genome Encoded microRNAs Targeting Tomato Spotted Wilt Virus
Authors: Muhammad Shahzad Iqbal, Zobia Sarwar, Salah-ud-Din
Abstract:
Tomato spotted wilt virus (TSWV) belongs to the genus Tospovirus (family Bunyaviridae). It is one of the most devastating pathogens of tomato (Solanum lycopersicum) and heavily damages crop yield each year around the globe. In this study, we retrieved 329 mature miRNA sequences from two microRNA databases (miRBase and miRSoldb) and checked their putative target sites in the downloaded genome sequence of TSWV. A consensus of three miRNA target prediction tools (RNA22, miRanda and psRNATarget) was used to screen out false-positive microRNA targeting sites in the TSWV genome. These tools identify target sites by calculating minimum free energy (mfe), site complementarity, minimum folding energy and other microRNA-mRNA binding factors. The R language was used to plot the predicted target-site data. All the genes having possible target sites for different miRNAs were screened by building a consensus table. Out of the 329 mature miRNAs assessed by the three algorithms, only eight miRNAs met all the criteria/threshold specifications. MC-Fold and MC-Sym were used to predict three-dimensional structures of the miRNAs, which were further analyzed in UCSF Chimera to visualize the structural and conformational changes before and after microRNA-mRNA interaction. The results of the current study show that the predicted eight miRNAs could be evaluated by in vitro experiments to develop TSWV-resistant transgenic tomato plants in the future. Keywords: tomato spotted wilt virus (TSWV), Solanum lycopersicum, plant virus, miRNAs, microRNA target prediction, mRNA
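The consensus filtering across the three prediction tools can be sketched as a simple vote over (miRNA, target) pairs. The tool outputs below are invented placeholders, not the study's predictions:

```python
from collections import Counter

def consensus_targets(predictions, min_tools=3):
    """Keep miRNA-target pairs reported by at least `min_tools` tools.

    predictions: dict mapping tool name -> set of (miRNA, target) pairs.
    A set-intersection sketch of the consensus step; requiring all three
    tools (min_tools=3) mirrors the filtering described in the abstract.
    """
    counts = Counter(pair for pairs in predictions.values() for pair in pairs)
    return {pair for pair, n in counts.items() if n >= min_tools}
```

For example, a pair reported by all three of RNA22, miRanda and psRNATarget survives, while a pair reported by only two of them is discarded.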
Procedia PDF Downloads 162
28781 Exploring Counting Methods for the Vertices of Certain Polyhedra with Uncertainties
Authors: Sammani Danwawu Abdullahi
Abstract:
Vertex enumeration algorithms explore the methods and procedures for generating the vertices of general polyhedra formed by systems of equations or inequalities. These problems of enumerating the extreme points (vertices) of general polyhedra are shown to be NP-hard. This led to exploring how to count the vertices of general polyhedra without listing them, which is shown to be #P-complete. Some fully polynomial randomized approximation schemes (fpras) for counting the vertices of some special classes of polyhedra associated with down-sets, independent sets, 2-knapsack problems and 2 x n transportation problems are presented, together with some discovered open problems. Keywords: counting with uncertainties, mathematical programming, optimization, vertex enumeration
Procedia PDF Downloads 362
28780 The Different Ways to Describe Regular Languages by Using Finite Automata and the Changing Algorithm Implementation
Authors: Abdulmajid Mukhtar Afat
Abstract:
This paper introduces finite automata theory and the different ways to describe regular languages, and presents a program implementing the subset construction algorithm to convert a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads the FA 5-tuple from a text file and classifies the automaton as either a DFA or an NFA. For a DFA, the program reads a string w and decides whether it is acceptable or not; if accepted, the program saves and displays the tracking path. When the automaton is an NFA, the program first converts it to a DFA so that the string is easy to track and the program can decide whether w belongs to the regular language or not. Keywords: finite automata, subset construction, DFA, NFA
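The subset construction the program implements can be sketched as follows (in Python rather than the paper's C++, and omitting epsilon-transitions for brevity): each DFA state is the set of NFA states reachable on the input so far.

```python
from collections import deque

def nfa_to_dfa(nfa, start, alphabet):
    """Subset construction: convert an NFA given as
    {state: {symbol: set_of_states}} into a DFA whose states are
    frozensets of NFA states. Epsilon-transitions are omitted."""
    start_set = frozenset([start])
    dfa = {}
    queue = deque([start_set])
    while queue:
        S = queue.popleft()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            # Union of all NFA moves from the states in S on symbol a
            T = frozenset(t for s in S for t in nfa.get(s, {}).get(a, set()))
            dfa[S][a] = T
            if T not in dfa:
                queue.append(T)
    return dfa

def dfa_accepts(dfa, start_set, accepting, w):
    """Run the DFA on string w; accept if any NFA state in the final
    subset is an accepting state."""
    S = start_set
    for a in w:
        S = dfa[S][a]
    return any(s in accepting for s in S)
```

As a usage example, the NFA for strings over {a, b} ending in "ab" (q0 loops on both symbols, guesses the final "a", then reads "b" into the accepting state q2) converts to a three-live-state DFA.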
Procedia PDF Downloads 430
28779 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD; our models enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data, and they can be applied in any clinical setting, including settings where laboratory data are difficult to obtain. Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
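A hedged sketch of the non-laboratory-only idea: a Random Forest trained on age, sex, BMI and waist circumference alone. The data below are synthetic; the study's actual cohort, follow-up design and XGBoost counterpart are not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n)
bmi = rng.normal(25, 4, n)
waist = bmi * 2.8 + rng.normal(0, 5, n)          # loosely tied to BMI
X = np.column_stack([age, sex, bmi, waist])
# Synthetic label: risk rises with age and BMI (illustrative only)
p = 1 / (1 + np.exp(-(0.06 * (age - 55) + 0.1 * (bmi - 27))))
y = (rng.uniform(size=n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                      # held-out accuracy
```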
Procedia PDF Downloads 111
28778 Multi-Walled Carbon Nanotubes as Nucleating Agents
Authors: Rabindranath Jana, Plabani Basu, Keka Rana
Abstract:
Nucleating agents are widely used to modify the properties of various polymers. The rate of crystallization and the size of the crystals have a strong impact on the mechanical and optical properties of a polymer. The addition of nucleating agents to semi-crystalline polymers provides a surface on which crystal growth can start easily. As a consequence, fast crystal formation results in many small crystal domains, so that the cycle times for injection molding may be reduced. Moreover, mechanical properties such as modulus, tensile strength, heat distortion temperature and hardness may increase. In the present work, multi-walled carbon nanotubes (MWNTs) were used as nucleating agents for the crystallization of poly(e-caprolactone)diol (PCL). Nanocomposites of PCL filled with MWNTs were prepared by solution blending. Differential scanning calorimetry (DSC) tests were carried out to study the effect of CNTs on the non-isothermal crystallization of PCL. Polarizing optical microscopy (POM) and wide-angle X-ray diffraction (WAXD) were used to study the morphology and crystal structure of PCL and its nanocomposites. It is found that MWNTs act as effective nucleating agents that significantly shorten the induction period of crystallization but decrease the crystallization rate of PCL, exhibiting a remarkable decrease in the Avrami exponent n, surface folding energy σe and crystallization activation energy ΔE. The carbon-based fillers act as templates for the hard block chains of PCL to form an ordered structure on the surface of the nanoparticles during the induction period, bringing about some increase in the equilibrium temperature.
The melting process of PCL and its nanocomposites is also studied; the nanocomposites exhibit two melting peaks at higher crystallization temperatures, which mainly refer to the melting of crystals of different sizes, whereas PCL shows only one melting temperature. Keywords: poly(e-caprolactone)diol, multiwalled carbon nanotubes, composite materials, nonisothermal crystallization, crystal structure, nucleation
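The Avrami exponent n mentioned above is conventionally estimated from the double-log linearization of the Avrami equation X(t) = 1 - exp(-k t^n), i.e. ln(-ln(1 - X)) = ln k + n ln t. A small sketch under that standard assumption (not the paper's specific DSC analysis):

```python
import numpy as np

def avrami_fit(t, X):
    """Estimate the Avrami exponent n and rate constant k from the relative
    crystallinity X(t) = 1 - exp(-k * t**n), using the linearization
    ln(-ln(1 - X)) = ln k + n * ln t and a least-squares line fit.
    Valid for 0 < X < 1."""
    y = np.log(-np.log(1.0 - np.asarray(X, dtype=float)))
    x = np.log(np.asarray(t, dtype=float))
    n, lnk = np.polyfit(x, y, 1)      # slope = n, intercept = ln k
    return float(n), float(np.exp(lnk))
```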
Procedia PDF Downloads 498
28777 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines
Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma
Abstract:
Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines and multinomial goodness-of-fit. The entire datasets were imported from the traffic department of the UK with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn avoid unnecessary memory lapses. Since the data are expected to grow continuously over a period of time, this work primarily proposes a new framework model which can be trained on and adapt itself to new data and make accurate predictions. This work also throws some light on the use of the SVM methodology for text classification of the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology, appropriate for this kind of research work. Keywords: support vector machines (SVM), machine learning (ML), department for transport (DfT)
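A minimal sketch of an SVM text classifier in the spirit described, using scikit-learn's TF-IDF vectorizer and a linear SVM. The tiny corpus and severity labels below are invented placeholders, not the UK traffic data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented accident narratives with invented severity labels
texts = [
    "vehicle skidded on wet rural road at night",
    "minor collision in urban car park",
    "head-on crash on motorway in fog",
    "cyclist clipped at low speed at junction",
]
labels = ["serious", "slight", "serious", "slight"]

# TF-IDF features feed a linear support vector classifier
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
pred = model.predict(["crash on wet motorway at night"])
```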
Procedia PDF Downloads 280
28776 Growth of SWNTs from Alloy Catalyst Nanoparticles
Authors: S. Forel, F. Bouanis, L. Catala, I. Florea, V. Huc, F. Fossard, A. Loiseau, C. Cojocaru
Abstract:
Single-wall carbon nanotubes (SWNTs) are seen as excellent candidates for application in nanoelectronic devices because of their remarkable electronic and mechanical properties. These unique properties are highly dependent on their chiral structure and diameter. Therefore, structure-controlled growth of SWNTs, especially directly on the final device's substrate surface, is highly desired for the fabrication of SWNT-based electronics. In this work, we present a new approach to control the diameter of SWNTs and eventually their chirality. Because of their potential to control the SWNT chirality, bimetallic nanoparticles are used to prepare alloy nanoclusters with a specific structure. The catalyst nanoparticles are pre-formed following a previously described process. Briefly, the oxide surface is first covered with a SAM (self-assembled monolayer) of a pyridine-functionalized silane. Then, bimetallic (Fe-Ru, Co-Ru and Ni-Ru) complexes are assembled by coordination bonds on the pre-formed organic SAM. The resultant alloy nanoclusters were then used to catalyze SWNT growth on SiO2/Si substrates via CH4/H2 double hot-filament chemical vapor deposition (d-HFCVD). The microscopy and spectroscopy analyses demonstrate the high quality of the SWNTs, which were furthermore integrated into high-quality SWNT-FETs. Keywords: nanotube, CVD, device, transistor
Procedia PDF Downloads 319
28775 Increment of Panel Flutter Margin Using Adaptive Stiffeners
Authors: S. Raja, K. M. Parammasivam, V. Aghilesh
Abstract:
Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction, so the present research focuses on this phenomenon. First, the free vibration of the panel is studied; it is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm and a thickness of 2 mm. The effects of stiffener cross-sectional area and stiffener location are then studied for the same panel. The stiffener spacing is varied along both the chordwise and spanwise directions, and for the resulting optimal location the ideal stiffener length is identified. The effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity has also been investigated. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity the deformation grows, while the stiffness of the system tries to dampen the excitation and maintain equilibrium; beyond a critical velocity, the system damping suddenly becomes ineffective and equilibrium is lost. This is estimated in NASTRAN using the PK method. The first 10 modal frequencies of the simple panel and the stiffened panel are estimated numerically and validated against the open literature. A grid independence study is also carried out, and the modal frequency values remain the same for element lengths below 20 mm. The current investigation concludes that span-wise stiffener placement is more effective than chord-wise placement.
The maximum flutter velocity achieved for chord-wise placement is 204 m/s, while for a span-wise arrangement it is augmented to 963 m/s for stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord from either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed for stiffener lengths varying from 50% to 60% of the span; a maximum flutter velocity above Mach 3 is achieved. It is also observed that for a stiffened panel, the full effect of the stiffener can be achieved only when the stiffener end is clamped. Stiffeners with a Z cross-section incremented the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, which is 2.3 times that of the simple panel. Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-sectional shape
Procedia PDF Downloads 299
28774 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslide is a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution, but its abrupt nature and the associated catastrophic forces can have undesirable socio-economic impacts: substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution linear imaging self-scanning (LISS IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated from the ratio of the landslide density per factor class (Si/Ni) to the overall landslide density for the parameter (S/N).
Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the FR method used for formulating if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that, in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one. Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
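The Info Val calculation described above can be sketched per factor class. Following the common convention for the information value method, the natural log of the density ratio is taken; the tiny rasters in the example are illustrative, not the study's data:

```python
import numpy as np

def information_value(factor_classes, landslide_mask):
    """Information value per factor class, IV_i = ln( (Si/Ni) / (S/N) ):
    the log of the landslide density within class i relative to the overall
    density. Inputs are a per-pixel class-ID array and a boolean landslide
    mask of the same shape. Classes with no landslide pixels return None."""
    S = landslide_mask.sum()          # total landslide pixels
    N = landslide_mask.size           # total pixels
    iv = {}
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        Ni = in_class.sum()           # pixels in this class
        Si = (landslide_mask & in_class).sum()  # landslide pixels in this class
        iv[int(c)] = float(np.log((Si / Ni) / (S / N))) if Si > 0 else None
    return iv
```

Classes with positive IV are more landslide-prone than average; summing the reclassified IV layers over all parameters yields the LSI.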
Procedia PDF Downloads 132
28773 A Relational Data Base for Radiation Therapy
Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez
Abstract:
As far as we know, no commercial solution is yet available that allows managing, openly and configurably according to user needs, the huge amount of data generated in a modern Radiation Oncology Department. Currently available information management systems are mainly focused on Record & Verify and clinical data, and only to a small extent on physical data, which results in a partial and limited use of the available information. In the present work we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both information generated during patient planning and treatment, and information of general interest for the whole department (i.e. treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus obtain quantitative evaluations of our treatments, which would allow us to improve our workflow and protocols. To this end we have implemented a relational database which allows us to use all the available information in a practical and efficient way. As always, we only use license-free software. Keywords: information management system, radiation oncology, medical physics, free software
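A toy sketch of the kind of relational schema involved, using Python's built-in sqlite3. The department's actual stack and table design are not given in the abstract; all table and column names below are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical schema: patients, their treatment plans, and QA checks
con.executescript("""
CREATE TABLE patient  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE plan     (id INTEGER PRIMARY KEY,
                       patient_id INTEGER REFERENCES patient(id),
                       technique TEXT, total_dose_gy REAL, fractions INTEGER);
CREATE TABLE qa_check (id INTEGER PRIMARY KEY,
                       plan_id INTEGER REFERENCES plan(id),
                       protocol TEXT, passed INTEGER);
""")
con.execute("INSERT INTO patient VALUES (1, 'anon')")
con.execute("INSERT INTO plan VALUES (1, 1, 'VMAT', 60.0, 30)")
con.execute("INSERT INTO qa_check VALUES (1, 1, 'gamma analysis', 1)")

# A quantitative evaluation across treatments is then a join away:
# mean dose per fraction for QA-passed plans, grouped by technique
rows = con.execute("""
    SELECT p.technique, AVG(p.total_dose_gy / p.fractions)
    FROM plan p JOIN qa_check q ON q.plan_id = p.id AND q.passed = 1
    GROUP BY p.technique
""").fetchall()
```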
Procedia PDF Downloads 247
28772 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University
Authors: Komol Phaisarn, Natcha Wattanaprapa
Abstract:
This is a survey research study with the objective of studying the safety of the data storage devices used by graduate students of academic year 2013 at Suan Sunandha Rajabhat University. Data were collected by a questionnaire on the safety of data storage devices according to the CIA principle (confidentiality, integrity, availability). A sample of 81 was drawn from the population by the purposive sampling method. The results show that most of the graduate students of academic year 2013 at Suan Sunandha Rajabhat University use a handy drive to store their data, and the safety level of the devices is at a good level. Keywords: security, safety, storage devices, graduate students
Procedia PDF Downloads 356
28771 Kinetic and Thermodynamic Modified Pectin with Chitosan by Forming Polyelectrolyte Complex Adsorbent to Remediate of Pb(II)
Authors: Budi Hastuti, Mudasir, Dwi Siswanta, Triyono
Abstract:
Biosorbents such as pectin and chitosan are usually produced with low physical stability, so the materials need to be modified. In this research, the physical stability of the adsorbent was increased by grafting chitosan with acetate to form carboxymethyl chitosan (CC). CC and pectin (Pec) were then crosslinked using the cross-linking agent BADGE (bisphenol A diglycidyl ether) to obtain the CC-Pec-BADGE (CPB) adsorbent. The cross-linking process aims to form a stable structure that is resistant to acidic media. Furthermore, in order to increase the adsorption capacity for removal of Pb(II), NaCl was added to the adsorbent to form a macroporous adsorbent named CC-Pec-BADGE-Na (CPB-Na). The physical and chemical characteristics of the porogenic adsorbent structure were characterized by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). The adsorption parameters of CPB-Na for Pb(II) ion were determined, and the kinetics and thermodynamics of the batch sorption of Pb(II) on the CPB-Na adsorbent, using chitosan and pectin as comparisons, were also studied. The results showed that the CPB-Na biosorbent was stable in acidic media, had a rough and porous surface with increased surface area, and gave a higher sorption capacity for removal of Pb(II) ion. The CPB-Na 1/1 and 1/3 adsorbents adsorbed Pb(II) with adsorption capacities of 45.48 mg/g and 45.97 mg/g respectively, whereas those of pectin and chitosan were 39.20 mg/g and 24.67 mg/g respectively. Keywords: porogen, pectin, carboxymethyl chitosan (CC), CC-Pec-BADGE-Na
Procedia PDF Downloads 161
28770 Model Updating-Based Approach for Damage Prognosis in Frames via Modal Residual Force
Authors: Gholamreza Ghodrati Amiri, Mojtaba Jafarian Abyaneh, Ali Zare Hosseinzadeh
Abstract:
This paper presents an effective model updating strategy for damage localization and quantification in frames by casting the damage detection problem as an optimization problem. A generalized version of the Modal Residual Force (MRF) is employed to define a new damage-sensitive cost function. The Grey Wolf Optimization (GWO) algorithm is then utilized to solve the suggested inverse problem, and the global extrema are reported as the damage detection results. The applicability of the presented method is investigated by studying different damage patterns on the IASC-ASCE benchmark problem, as well as on a planar shear frame structure. The obtained results emphasize the good performance of the method not only in noise-free cases but also when the input data are contaminated with different levels of noise.
Keywords: frame, grey wolf optimization algorithm, modal residual force, structural damage detection
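The Grey Wolf Optimizer at the heart of this approach can be sketched in a few lines. This is an illustration only, not the authors' implementation: the MRF-based damage-sensitive cost function is replaced here by a toy sphere function, and population size and iteration count are assumptions.

```python
import numpy as np

def gwo(cost, dim, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer: the three best wolves (alpha, beta,
    delta) guide the rest of the pack toward the cost-function minimum."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([cost(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / n_iter  # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                new += leader - A * D
            X[i] = np.clip(new / 3, lo, hi)  # average of the three pulls
    fitness = np.array([cost(x) for x in X])
    return X[np.argmin(fitness)], float(fitness.min())

# Toy stand-in for the MRF-based cost: the sphere function, minimum at 0.
best, val = gwo(lambda x: float(np.sum(x**2)), dim=5, bounds=(-10, 10))
```

In the paper's setting, the cost would instead measure the modal residual force of a candidate damage pattern, and the returned minimizer would encode damage locations and severities.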
Procedia PDF Downloads 396
28769 Investigation of Building Loads Effect on the Stability of Slope
Authors: Hadj Brahim Mounia, Belhamel Farid, Souici Messoud
Abstract:
In big cities, construction on sloping land is becoming increasingly prevalent due to the unavailability of flat land. This creates a major challenge for structural engineers in terms of structure design, due to the difficulties encountered during the implementation of projects, both for the structure and for the soil. This paper analyses the effect of the number of floors of a building, founded on isolated footings, on the stability of the slope, using the finite element code PLAXIS 2D v8.2. The isolated footings of the building were anchored in the soil so that the line connecting the edges of the nearest successive footings has a maximum slope of three (base) for two (height), in accordance with the Algerian building code DTR-BC 2.331: Shallow Foundations. The results show that embedding the foundations into the soil reduces the safety factor, because the foundations change the stress state of the soil. The number of floors of the building also influences the safety factor. In this case study, no risk of slope collapse was observed for inclinations between 5° and 8°, whereas for slope inclinations greater than 10° urbanization should be prohibited.
Keywords: isolated footings, multi-storeys building, PLAXIS 2D, slope
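The sensitivity of the safety factor to slope angle can be illustrated with the classical infinite-slope model. This is a hand-calculation sketch with assumed soil parameters, not the PLAXIS finite element analysis used in the paper, which also accounts for the footing loads.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Textbook infinite-slope factor of safety for a c-phi soil.
    c: cohesion (kPa), phi_deg: friction angle (deg), gamma: unit
    weight (kN/m^3), z: slip-plane depth (m), beta_deg: slope angle
    (deg), u: pore pressure on the slip plane (kPa)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2 - u  # effective normal stress
    shear = gamma * z * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c + normal * math.tan(phi)) / shear

# Assumed soil: c = 5 kPa, phi = 25 deg, gamma = 18 kN/m^3, z = 3 m.
# Mild slopes (5-8 deg) give comfortable safety factors; steeper slopes erode them.
fs_gentle = infinite_slope_fs(c=5.0, phi_deg=25.0, gamma=18.0, z=3.0, beta_deg=8.0)
fs_steep = infinite_slope_fs(c=5.0, phi_deg=25.0, gamma=18.0, z=3.0, beta_deg=20.0)
```

The same qualitative trend appears in the paper's results: increasing inclination (and the surcharge from more floors) drives the safety factor toward the collapse threshold.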
Procedia PDF Downloads 255
28768 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment
Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah
Abstract:
Data grid technology has brought with it a number of new challenges, such as the heterogeneity and geographical distribution of resources, fast data access, minimizing latency, and fault tolerance. Researchers in this field address problems common to such systems, including task scheduling, load balancing, and replication. Replication is an effective way to achieve good performance in terms of data access cost, grid resource usage, and data availability. In a system with replication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas so as to minimize the cost of responding to read or write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model that depends on several factors, such as bandwidth, data size, and storage nodes.
Keywords: response time, query, consistency, bandwidth, storage capacity, CERN
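The flavor of such a cost model can be sketched with a toy read/write trade-off. All parameters and the cost formula below are hypothetical illustrations, not the paper's model: reads are assumed to spread over the replicas, while every write must be propagated to all copies to keep them coherent.

```python
def response_cost(data_size_mb, bandwidth_mbps, read_rate, write_rate, n_replicas):
    """Toy replication cost model (hypothetical): total response cost per
    unit time as a function of the number of replicas."""
    transfer = data_size_mb * 8 / bandwidth_mbps      # seconds per transfer
    read_cost = read_rate * transfer / n_replicas     # reads spread over replicas
    write_cost = write_rate * transfer * n_replicas   # writes hit every replica
    return read_cost + write_cost

# For a read-heavy workload, adding replicas lowers the total cost until
# write propagation starts to dominate.
costs = {k: response_cost(100, 100, read_rate=50, write_rate=2, n_replicas=k)
         for k in range(1, 9)}
best_k = min(costs, key=costs.get)
```

In the actual system, the optimizer would additionally weigh per-node bandwidth, storage capacity, and the topology between storage nodes when choosing placements.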
Procedia PDF Downloads 276
28767 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries and show significant application potential in the data analysis domain in particular. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Prompt engineering techniques can instead rapidly adapt these models to new domains without modifying their internal parameters. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach covers requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
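A minimal sketch of the kind of prompt template such a strategy might use: the analysis requirement is stated in natural language alongside the data schema and the constraints the generated code must satisfy. The template wording, field names, and example data are assumptions for illustration; the paper's actual prompts are not reproduced here, and the model call itself is left abstract so any LLM client could be plugged in.

```python
# Hypothetical prompt template for LLM-driven data analysis code generation.
PROMPT_TEMPLATE = """You are a data analysis assistant. Write Python (pandas) code only.

## Data schema
{schema}

## Requirement (natural language)
{requirement}

## Constraints
- Return a single runnable code block, no explanations.
- Load the data from the path given in the schema; do not invent columns.
- Print the final result so it can be checked immediately.
"""

def build_prompt(schema: str, requirement: str) -> str:
    """Render the template; the result is what would be sent to the LLM."""
    return PROMPT_TEMPLATE.format(schema=schema, requirement=requirement)

prompt = build_prompt(
    schema="sales.csv: columns = [date, region, units, unit_price]",
    requirement="Compute monthly revenue per region and report the top region overall.",
)
```

The "immediate feedback and adjustment" mechanism the abstract mentions would then execute the returned code, capture errors or empty results, and append them to a follow-up prompt for correction.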
Procedia PDF Downloads 48
28766 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece
Authors: N. Samarinas, C. Evangelides, C. Vrekos
Abstract:
The aim of this paper is to compare three different methods for producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and consist of 20-year time series of average monthly rainfall depth. Each of the three methods was used to express these data as a fuzzy relation, which was then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations: stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one station with data from another. Given the complexity of the calculations, it is important to determine which of the methods is computationally simpler and needs fewer compositions to give reliable results.
Keywords: classification, fuzzy logic, tolerance relations, rainfall data
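The cosine amplitude method and the max-min transitive closure can be sketched as follows. The rainfall matrix here is random toy data standing in for the seven stations' monthly series, and the lambda-cut value is an assumption; the construction itself follows the standard fuzzy-logic definitions.

```python
import numpy as np

def cosine_amplitude(X):
    """Fuzzy tolerance relation via the cosine amplitude method:
    r_ij = |x_i . x_j| / (|x_i| |x_j|), with one station per row of X."""
    G = X @ X.T
    norms = np.sqrt(np.diag(G))
    return np.abs(G) / np.outer(norms, norms)

def maxmin_closure(R, max_rounds=50):
    """Repeated max-min self-composition until a fixed point: turns a
    tolerance relation into a fuzzy equivalence relation."""
    for _ in range(max_rounds):
        # S[i, k] = max_j min(R[i, j], R[j, k])
        S = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(S, R):
            return R
        R = np.maximum(R, S)
    return R

rng = np.random.default_rng(1)
X = rng.random((5, 12))              # 5 stations x 12 monthly averages (toy data)
R = cosine_amplitude(X)              # reflexive and symmetric by construction
E = maxmin_closure(R)                # also max-min transitive
groups = E >= 0.95                   # lambda-cut: stations classified at confidence 0.95
```

Classification at different lambda-cuts then yields the nested station groupings the paper compares across the three relation-building methods.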
Procedia PDF Downloads 318
28765 Customer Satisfaction and Effective HRM Policies: Customer and Employee Satisfaction
Authors: S. Anastasiou, C. Nathanailides
Abstract:
The purpose of this study is to examine the possible link between employee and customer satisfaction. The service provided by employees helps to build a good relationship with customers and can increase their loyalty. Published data on job satisfaction and indicators of customer service were gathered from relevant published works, covering five different countries. The reviewed data indicate a significant correlation between indicators of customer and employee satisfaction in the banking sector (Pearson correlation, R² = 0.52, P < 0.05), providing practical evidence of a link between these two parameters.
Keywords: job satisfaction, job performance, customer service, banks, human resources management
Procedia PDF Downloads 326
28764 A Cooperative, Autonomous, and Continuously Operating Drone System Offered to Railway and Bridge Industry: The Business Model Behind
Authors: Paolo Guzzini, Emad Samuel M. Ebeid
Abstract:
Bridges and railways are critical infrastructures. Ensuring the safety of transport using such assets is a primary goal, as it directly impacts people's lives. However, improving safety may require increased investment in O&M, so optimizing the resources used for asset maintenance becomes crucial. Drones4Safety (D4S), a European project funded under the H2020 Research and Innovation Action (RIA) program, aims to increase the safety of European civil transport by building a system that relies on three main pillars:
• drones operating autonomously in swarm mode;
• drones able to recharge themselves from the inductive fields produced by transmission lines near the bridge and railway assets to be inspected;
• acquired data analyzed with AI-empowered algorithms for defect detection.
This paper describes the business model behind this disruptive project. The business model is structured in two parts:
• the first part designs the business model Canvas, to explain the value provided by the Drones4Safety project;
• the second part defines a detailed financial analysis, with the target of calculating the IRR (Internal Rate of Return) and the NPV (Net Present Value) of the investment over a 7-year plan (2 years to run the project + 5 years post-implementation).
For the financial analysis, two points of view are assumed: that of the Drones4Safety company in charge of designing, producing, and selling the new system, and that of the utility company that will adopt the new system in its O&M practices. Assuming the point of view of the Drones4Safety company, three scenarios were considered:
• selling the drones, with revenues produced by drone sales;
• renting the drones, with revenues produced by rental of the drones (a time-based model);
• selling the data acquisition service, with revenues produced by sales of the pictures acquired by the drones.
Assuming the point of view of a utility adopting the D4S system, a fourth scenario was analyzed, taking into account the cost reductions from changed operation and maintenance practices. The paper shows, for both companies, which key parameters most affect the business model and which scenarios are sustainable.
Keywords: a swarm of drones, AI, bridges, railways, drones4safety company, utility companies
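The NPV and IRR computations underlying such a financial analysis can be sketched as follows. The cashflows below are hypothetical placeholders shaped like the 7-year plan (two years of investment, five years of returns), not the project's actual figures, and the 8% discount rate is an assumption.

```python
def npv(rate, cashflows):
    """Net present value: discount one cashflow per year, starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return by bisection: the rate at which NPV = 0.
    Valid here because the cashflows change sign once, so NPV is monotone."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical 7-year plan: 2 years of project investment, then 5 years of revenue.
cashflows = [-400_000, -400_000, 250_000, 300_000, 350_000, 350_000, 350_000]
project_npv = npv(0.08, cashflows)   # positive NPV at an assumed 8% discount rate
project_irr = irr(cashflows)
```

An investment is attractive when its NPV at the chosen discount rate is positive, equivalently when its IRR exceeds that rate; the scenario comparison in the paper amounts to repeating this calculation with each scenario's cashflow profile.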
Procedia PDF Downloads 146
28763 The Univalence Principle: Equivalent Mathematical Structures Are Indistinguishable
Authors: Michael Shulman, Paige North, Benedikt Ahrens, Dimitris Tsementzis
Abstract:
The Univalence Principle is the statement that equivalent mathematical structures are indistinguishable. We prove a general version of this principle that applies to all set-based, categorical, and higher-categorical structures defined in a non-algebraic and space-based style, as well as models of higher-order theories such as topological spaces. In particular, we formulate a general definition of indiscernibility for objects of any such structure, and a corresponding univalence condition that generalizes Rezk’s completeness condition for Segal spaces and ensures that all equivalences of structures are levelwise equivalences. Our work builds on Makkai’s First-Order Logic with Dependent Sorts, but is expressed in Voevodsky’s Univalent Foundations (UF), extending previous work on the Structure Identity Principle and univalent categories in UF. This enables indistinguishability to be expressed simply as identification, and yields a formal theory that is interpretable in classical homotopy theory, but also in other higher topos models. It follows that Univalent Foundations is a fully equivalence-invariant foundation for higher-categorical mathematics, as intended by Voevodsky.
Keywords: category theory, higher structures, inverse category, univalence
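For readers unfamiliar with the statement, Voevodsky's univalence axiom, the type-theoretic seed of this principle, can be stated in one line; the paper's contribution is to generalize the identification-equals-equivalence pattern from bare types to structured objects such as categories and higher categories:

```latex
% Univalence: identifications of types coincide with equivalences of types.
\[
  \mathsf{ua} \;:\; (A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
  \qquad \text{for all } A, B : \mathcal{U}.
\]
```

Under this axiom, any property expressible in the theory automatically respects equivalence, which is the formal sense in which equivalent structures are "indistinguishable".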
Procedia PDF Downloads 157
28762 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, due to their capability to propagate over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the frequency bandwidth reliable for communication is first extracted experimentally from dispersion curves. An experimental platform for wireless communication using Lamb waves is then described and built. The coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying (ASK), On-Off Keying (OOK), and Binary Phase Shift Keying (BPSK) modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit, and the bit rate are optimized. Experimental results are compared based on the average Bit Error Rate (BER). The results show high sensitivity to threshold selection for the ASK and OOK techniques, resulting in a bit-rate decrease, whereas BPSK shows the highest stability and data rate among all tested modulation techniques.
Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate
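The BPSK coherent demodulation chain can be sketched as a baseband simulation. The carrier frequency, sampling rate, cycles per bit, and AWGN channel below are assumptions for illustration, not the experimental ultrasonic setup; the point is the structure of the receiver: correlate each bit interval with a local carrier replica and decide on the sign.

```python
import numpy as np

def bpsk_coherent_demod(bits, fc=100e3, fs=1e6, cycles_per_bit=20,
                        snr_db=5.0, seed=0):
    """BPSK over a simulated AWGN channel with coherent demodulation:
    each bit is a carrier burst of phase 0 or pi; the receiver
    correlates each bit interval against the reference carrier."""
    rng = np.random.default_rng(seed)
    spb = int(round(cycles_per_bit * fs / fc))   # samples per bit
    t = np.arange(spb) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    symbols = 2 * np.asarray(bits) - 1           # map 0/1 -> -1/+1 (phase 0/pi)
    tx = (symbols[:, None] * carrier).ravel()
    noise_power = 0.5 / 10 ** (snr_db / 10)      # carrier power is 0.5
    rx = tx + rng.normal(0.0, np.sqrt(noise_power), tx.size)
    corr = rx.reshape(-1, spb) @ carrier         # coherent correlation per bit
    return (corr > 0).astype(int)                # sign decision, no threshold tuning

rng = np.random.default_rng(42)
tx_bits = rng.integers(0, 2, 64)
rx_bits = bpsk_coherent_demod(tx_bits, snr_db=5.0)
ber = np.mean(rx_bits != tx_bits)
```

The sign decision is why BPSK is less threshold-sensitive than ASK/OOK in the paper's results: the ASK/OOK receiver must compare the correlation magnitude against a tuned amplitude threshold, while BPSK only needs its sign.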
Procedia PDF Downloads 267
28761 Confinement and Storage of Cyanate in the Nano Scale via Nanolayered Structures
Authors: Osama Saber
Abstract:
Cyanate is an anion produced during protein poisoning in the body; it has been studied extensively in biochemistry because of its toxicity. The present work aims at the confinement and storage of cyanate at the nano scale, achieved through the intercalation of cyanate anions into nanolayered structures of Ni-Al layered double hydroxide (LDH). The effect of aging time on the intercalation of cyanate was clarified using X-ray diffraction and scanning electron microscopy. Furthermore, the effect of the cations on the affinity for intercalating cyanate anions into the LDH structure was studied by replacing the trivalent cation Al3+ with the tetravalent cation Ti4+ during the preparation of the LDH. X-ray diffraction patterns of the Ni-Ti LDH showed an interlayer spacing of 0.73 nm, smaller than that of Ni-Al LDH, suggesting that the interlayer anions in Ni-Ti LDH differ from those in Ni-Al LDH. Thermal analyses (TG, DTG, and DTA) and infrared spectra revealed the presence of only cyanate anions in Ni-Ti LDH, while in Ni-Al LDH both cyanate and carbonate anions were observed. SEM images showed plate-like morphology for both Ni-Ti and Ni-Al LDHs, although the shapes of their plates are not similar. Our results suggest that LDH structures containing titanium cations have a higher affinity for cyanate anions than those containing aluminum cations. This preference for cyanate in the interlayer spacing widens the applicability of such structures for bioresearch into the effect of confinement on the toxicity of cyanate.
Keywords: nanolayered structures, Ni-Al LDH, Ni-Ti LDH, intercalation of cyanate anions, urea hydrolysis
Procedia PDF Downloads 521
28760 Evaluation of Australian Open Banking Regulation: Balancing Customer Data Privacy and Innovation
Authors: Suman Podder
Abstract:
As Australian ‘Open Banking’ allows customers to share their financial data with accredited Third-Party Providers (‘TPPs’), it is necessary to evaluate whether the regulators have achieved the balance between protecting customer data privacy and promoting data-related innovation. Recognising the need to increase customers’ influence over their own data, and the benefits of data-related innovation, the Australian Government introduced the ‘Consumer Data Right’ (‘CDR’) to the banking sector through the Open Banking regulation. Under Open Banking, TPPs can access customers’ banking data, which allows the TPPs to tailor their products and services to meet customer needs at a more competitive price. This facilitated access and use of customer data will promote innovation by providing opportunities for new products and business models to emerge and grow. However, the success of Open Banking depends on the willingness of customers to share their data, so the regulators have augmented the protection of data by introducing new privacy safeguards to instill confidence and trust in the system. The dilemma in policymaking is that, on the one hand, lenient data privacy laws help the flow of information at the risk of individuals’ loss of privacy; on the other hand, stringent laws that adequately protect privacy may dissuade innovation. Using theoretical and doctrinal methods, this paper examines whether the privacy safeguards under Open Banking will add to the compliance burden of the participating financial institutions, with the undesirable effect of stifling other policy objectives such as innovation. The contribution of this research is three-fold. First, in the emerging field of customer data sharing, this research is one of the few academic studies on the objectives and impact of Open Banking in the Australian context. Second, Open Banking is still in the early stages of implementation, so this research traces the evolution of Open Banking through the policy debates regarding the desirability of customer data sharing. Finally, the research not only focuses on customers’ data privacy and juxtaposes it with the objective of promoting innovation, but also highlights the critical issues facing the data-sharing regime. This paper argues that while it is challenging to develop a regulatory framework that protects data privacy without impeding innovation and jeopardising as-yet unknown opportunities, data privacy and innovation promote different aspects of customer welfare. The paper concludes that if the regulation is appropriately designed and implemented, the benefits of data sharing will outweigh the cost of compliance with the CDR.
Keywords: consumer data right, innovation, open banking, privacy safeguards
Procedia PDF Downloads 144