Search results for: feature selection methods
2935 Optimizing Resource Allocation and Indoor Location Using Bluetooth Low Energy
Authors: Néstor Álvarez-Díaz, Pino Caballero-Gil, Héctor Reboso-Morales, Francisco Martín-Fernández
Abstract:
The "Internet of Things" (IoT) has developed rapidly in recent years, giving rise to innovative communication methods among multiple devices. The appearance of Bluetooth Low Energy (BLE) has pushed the IoT toward smartphones. A set of new applications related to topics such as entertainment and advertising has begun to be developed, but little has been done so far to take advantage of the potential that these technologies can offer in many business areas and in everyday tasks. In the present work, the application of BLE technology and smartphones is proposed for business areas related to the optimization of resource allocation in large facilities such as airports. An indoor location system has been developed using triangulation methods based on BLE beacons. The described system can be used to locate all employees inside the building so that any task can be automatically assigned to a group of employees. It should be noted that this system can not only link needs with employees according to distance, but it also takes into account other factors such as occupation level or category. In addition, it has been endowed with a security system to manage sensitive business and personnel data. The efficiency of communications is another essential characteristic that has been taken into account in this work.
Keywords: Bluetooth Low Energy, indoor location, resource assignment, smartphones.
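As an illustration of the triangulation step described above, the sketch below estimates an indoor position from BLE beacon RSSI readings by combining a log-distance path-loss model with a least-squares fit. The beacon coordinates, calibration constants and RSSI values are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: tx_power is the assumed RSSI at 1 m,
    n the path-loss exponent; returns an estimated range in metres."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Least-squares position estimate from beacon coordinates and ranges."""
    beacons = np.asarray(beacons, dtype=float)
    distances = np.asarray(distances, dtype=float)

    def residuals(p):
        return np.linalg.norm(beacons - p, axis=1) - distances

    x0 = beacons.mean(axis=0)  # start the search from the beacon centroid
    return least_squares(residuals, x0).x

# Hypothetical beacon layout (metres) and RSSI readings from an employee's phone.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
rssi = [-65.0, -72.0, -70.0]
position = trilaterate(beacons, [rssi_to_distance(r) for r in rssi])
print("Estimated position:", position)
```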
2934 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption
Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif
Abstract:
Maintaining the factory-default battery endurance over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While trying to meet customers' unlimited expectations, developers are barely aware of how efficiently the application itself uses energy. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications early in the design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect the power consumption data of mobile applications, which were then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code and method lines have a direct relationship with the power consumption of mobile applications.
Keywords: Battery endurance, software metrics, mobile application, power consumption.
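The relationship reported above can be examined with a simple correlation analysis. The sketch below uses hypothetical per-application measurements (the metric values and power figures are invented for illustration, not taken from the study) and computes Pearson correlations between selected metrics and measured power.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-application measurements: three of the metrics named above
# plus a power figure as it might be reported by a profiler such as Trepn.
data = pd.DataFrame({
    "cyclomatic_complexity": [12, 45, 7, 30, 22],
    "total_lines_of_code":   [800, 4200, 350, 2600, 1500],
    "number_of_methods":     [25, 140, 10, 90, 60],
    "power_mW":              [310, 920, 150, 640, 480],
})

for metric in ["cyclomatic_complexity", "total_lines_of_code", "number_of_methods"]:
    r, p = pearsonr(data[metric], data["power_mW"])
    print(f"{metric}: r = {r:.2f}, p = {p:.3f}")
```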
2933 Large Strain Compression-Tension Behavior of AZ31B Rolled Sheet in the Rolling Direction
Authors: A. Yazdanmehr, H. Jahed
Abstract:
Made from the lightest commercially available structural metal, magnesium (Mg) alloys are of interest for lightweighting. Expanding their application to different material processing methods requires Mg properties at large strains. Several room-temperature processes such as shot and laser peening and hole cold expansion require compressive large-strain data. Two methods have been proposed in the literature to obtain the stress-strain curve at high strains: 1) anti-buckling guides and 2) small cubic samples. In this paper, an anti-buckling fixture is used together with digital image correlation (DIC) to obtain the compression-tension (C-T) response of AZ31B-H24 rolled sheet at large strain values of up to 10.5%. The effect of the anti-buckling fixture on the stress-strain curves is evaluated experimentally by comparing the results with those of compression tests on cubic samples. A new fixture has been designed to increase the accuracy of testing cubic samples with DIC strain measurements. Results show a negligible effect of the anti-buckling fixture on the stress-strain curves, specifically at high strain values.
Keywords: Large strain, compression-tension, loading-unloading, Mg alloys.
2932 Actionable Rules: Issues and New Directions
Authors: Harleen Kaur
Abstract:
Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown, hidden and interesting patterns from a huge amount of data stored in databases. Data mining is a stage of the KDD process that aims at selecting and applying a particular data mining algorithm to extract interesting and useful knowledge. Data mining methods are expected to find, from databases, patterns that are interesting according to some measures. It is of vital importance to define good measures of interestingness that would allow the system to discover only the useful patterns. Measures of interestingness are divided into objective and subjective measures. Objective measures are those that depend only on the structure of a pattern and can be quantified by using statistical methods. Subjective measures, by contrast, depend on the subjectivity and understanding of the user who examines the patterns. These subjective measures are further divided into actionable, unexpected and novel. A key issue facing the data mining community is how to take actions on the basis of discovered knowledge. For a pattern to be actionable, the user's subjectivity is captured by providing his/her background knowledge about the domain. Here, we consider the actionability of the discovered knowledge as a measure of interestingness and raise important issues which need to be addressed to discover actionable knowledge.
Keywords: Data Mining Community, Knowledge Discovery in Databases (KDD), Interestingness, Subjective Measures, Actionability.
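To make the distinction between objective and subjective measures concrete, the sketch below computes three standard objective interestingness measures (support, confidence and lift) for an association rule over hypothetical market-basket data; the actionable and unexpected measures discussed above would, in contrast, require the user's domain knowledge.

```python
def interestingness(transactions, antecedent, consequent):
    """Objective measures for the rule antecedent -> consequent."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)             # antecedent support count
    c = sum(consequent <= t for t in transactions)             # consequent support count
    both = sum((antecedent | consequent) <= t for t in transactions)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift

# Hypothetical market-basket transactions.
baskets = [{"bread", "milk"}, {"bread", "butter"}, {"milk"}, {"bread", "milk", "butter"}]
print(interestingness(baskets, {"bread"}, {"milk"}))
```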
2931 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The motivation for this work is the question of how many devote themselves to discovering something in the world of science, where much has been discerned and revealed but, at the same time, much remains unknown. The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming a specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a + 3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. In correlation with the given key, the string is divided into several groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is based on the Caesar algorithm, i.e., shifting by k characters. However, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b + 1, it returns to its initial value. The algorithm proceeds in the same way until the last substring in the list has been traversed. Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The character x is reserved for the case in which the number of characters in a substring does not match the expected length. The algorithm is simple to implement, but whether it outperforms other methods in terms of execution time and storage space remains an open question.
Keywords: Ciphering and deciphering, Authentic Algorithm, Polyalphabetic Cipher, Random Key, methods comparison.
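The sketch below is one possible reading of the procedure described above, written for illustration only; the padding with x, the lowercase-only alphabet and the exact wrap-around rule are assumptions, not the author's reference implementation.

```python
import random
import string

ALPHABET = string.ascii_lowercase

def caesar(text, shift):
    """Shift each lowercase letter; any other character passes through unchanged."""
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + shift) % 26] if ch in ALPHABET else ch
        for ch in text
    )

def encipher(plaintext, a=3):
    """Split the text into k-character groups and Caesar-shift each group by a key
    that grows by one per group and wraps back to its initial value past b + 1."""
    b = a + 3
    k = random.randint(a, b)                    # random key between a and b = a + 3
    groups = [plaintext[i:i + k] for i in range(0, len(plaintext), k)]
    if groups and len(groups[-1]) < k:
        groups[-1] = groups[-1].ljust(k, "x")   # assumed: pad an incomplete group with x
    shift, out = k, []
    for g in groups:
        out.append(caesar(g, shift))
        shift += 1
        if shift > b + 1:                       # wrap back to the initial key value
            shift = k
    return k, "".join(out)

key, ciphertext = encipher("latindjokovic")
print(key, ciphertext)
```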
2930 Seed Treatment during Germination in Linseed to Overcome Salt and Drought Stresses (Linum usitatissimum L.)
Authors: Kadkhodaie A., Bagheri M.
Abstract:
Evaluating the resistance of crop plants to environmental stresses, especially at the germination stage, is a critical factor in their selection for different cultivation conditions. A procedure carried out under controlled conditions can therefore help to evaluate plant reaction to stress quickly and precisely. To study the germination characteristics of flax under water and salinity stress, two laboratory experiments were conducted, each in a completely randomized design with four replicates. The treatments were three concentrations of NaCl (0, 40 and 80 mM) for salinity stress and three osmotic potentials of PEG (0, -2 and -4 bar) for water stress, respectively. Germination percentage and rate, as well as radicle and plumule length, dry weight and the plumule/radicle ratio, were measured. All characteristics decreased under water stress conditions. Salinity stress significantly reduced the germination rate and the radicle and plumule length of flax seeds. Hydropriming and osmopriming significantly increased the germination rate, plumule length and plumule/radicle ratio of flax seeds, whereas germination percentage and radicle and plumule dry weight increased significantly only under the hydropriming treatment. Hydropriming and osmopriming could not be used to improve germination under saline and drought stress, but they conferred greater tolerance to salinity and drought in flax through a smaller reduction in radicle and plumule length under these stresses.
Keywords: linseed, salt stress, water stress, seed treatment, germination
2929 Phelipanche ramosa (L. - Pomel) Control in Field Tomato Crop
Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Carriero F., Cibelli F., Raimondo M. L., Tarantino E.
Abstract:
The tomato is a very important crop, whose cultivation in the Mediterranean basin is severely affected by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main areas where this parasitic weed is established, causing heavy infestation, as it is able to produce high numbers of seeds (up to 500,000 per plant) which remain viable for extended periods (more than 20 years). This paper reports the results of eleven treatments for controlling this parasitic weed, including chemical, agronomic, biological and biotechnological methods, compared with an untreated control under two plowing depths (30 and 50 cm). A split-plot design with 3 replicates was adopted. In 2014 a trial was performed in Foggia province (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5 on a clay-loam soil. During the growing cycle of the tomato crop, at 56-78 and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At tomato harvest, on August 18, the major quantity-quality yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids and fruit color). All data were subjected to analysis of variance (ANOVA) and the means were compared by Tukey's test. None of the treatments studied provided complete control of Phelipanche ramosa. However, some of the methods tested, namely Fusarium, glyphosate, Radicon biostimulant and the Red Setter tomato cv (an improved genotype obtained by TILLING technology), under deeper plowing (50 cm depth) proved to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's seed bank in the soil.
Keywords: Control methods, Phelipanche ramosa, tomato crop.
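The statistical comparison described above (one-way ANOVA followed by Tukey's test) can be reproduced as in the sketch below; the yield figures and the three treatment labels are hypothetical and stand in for the full split-plot data set.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical marketable-yield data (t/ha) for three of the treatments.
df = pd.DataFrame({
    "treatment": ["control"] * 3 + ["glyphosate"] * 3 + ["Fusarium"] * 3,
    "yield_t_ha": [52.1, 49.8, 51.0, 60.3, 58.7, 61.2, 57.9, 59.4, 58.1],
})

model = ols("yield_t_ha ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                       # one-way ANOVA table
print(pairwise_tukeyhsd(df["yield_t_ha"], df["treatment"]))  # Tukey HSD pairwise test
```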
2928 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings
Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank
Abstract:
Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
Keywords: data mining, protein secondary structure prediction, parallelization.
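The Q3 score quoted above is simply the per-residue accuracy over the three secondary-structure states. The sketch below computes it for a hypothetical 20-residue prediction (H = helix, E = strand, C = coil).

```python
def q3_accuracy(predicted, observed):
    """Q3: fraction of residues whose predicted state (H, E or C) matches the
    observed secondary-structure state."""
    if len(predicted) != len(observed):
        raise ValueError("sequences must be the same length")
    matches = sum(p == o for p, o in zip(predicted, observed))
    return matches / len(observed)

# Hypothetical 20-residue assignment.
pred = "HHHHCCCEEEECCHHHHCCC"
obs  = "HHHHCCCEEEECCHHHCCCC"
print(f"Q3 = {q3_accuracy(pred, obs):.1%}")
```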
2927 A Medical Images Based Retrieval System using Soft Computing Techniques
Authors: Pardeep Singh, Sanjay Sharma
Abstract:
Content-Based Image Retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. Several articles have proposed content-based access to medical images to support clinical decision making, which would ease the management of clinical data, and scenarios for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques and defines new research directions that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment.
Keywords: CBIR, GA, Rough sets, CBMIR
2926 Comparative in silico and in vitro Study of N-(1-Methyl-2-Oxo-2-N-Methyl Anilino-Ethyl) Benzene Sulfonamide and Its Analogues as an Anticancer Agent
Authors: Pamita Awasthi, Kirna, Shilpa Dogra, Manu Vatsal, Ritu Barthwal
Abstract:
Doxorubicin, also known as Adriamycin, is an anthracycline-class drug used in cancer chemotherapy. It is used in the treatment of non-Hodgkin's lymphoma, multiple myeloma, acute leukemia, breast cancer, lung cancer, endometrial cancer and ovarian cancer. It functions by intercalating DNA and ultimately killing cancer cells. The major side effects of doxorubicin are hair loss, myelosuppression, nausea and vomiting, oesophagitis, diarrhea, heart damage and liver dysfunction. The fact that minor modifications in the structure of a compound can produce large variations in biological activity prompted us to carry out the synthesis of sulfonamide derivatives. The sulfonamide group is an important feature with a broad spectrum of biological activity, including antiviral, antifungal, diuretic, anti-inflammatory, antibacterial and anticancer activities. The structure of the synthesized compound N-(1-methyl-2-oxo-2-N-methyl anilino-ethyl) benzene sulfonamide was confirmed by proton nuclear magnetic resonance (1H NMR), 13C NMR, mass and FTIR spectroscopy to establish the position of all protons and hence the stereochemistry of the molecule. Further, we report the binding potential of the synthesized sulfonamide analogues in comparison to the drug doxorubicin using Auto Dock 4.2 software. The computational binding energy (B.E.) and inhibitory constant (Ki) have been evaluated for the synthesized compound in comparison with doxorubicin against Poly (dA-dT).Poly (dA-dT) and Poly (dG-dC).Poly (dG-dC) sequences. The in vitro cytotoxicity study against human breast cancer cell lines confirms the better anticancer activity of the synthesized compound over the currently used anticancer drug doxorubicin. The IC50 value of the synthesized compound is 7.12 μM, whereas that of doxorubicin is 7.2 μM.
Keywords: Anticancer, Auto Dock, Doxorubicin, Sulfonamide.
2925 Technical, Environmental, and Financial Assessment for the Optimal Sizing of a Run-of-River Small Hydropower Project: A Case Study in Colombia
Authors: David Calderón Villegas, Thomas Kalitzky
Abstract:
Run-of-river (RoR) hydropower projects represent a viable, clean, and cost-effective alternative to dam-based plants and provide decentralized power production. However, the cost-effectiveness of RoR schemes depends on the proper selection of site and design flow, which is a challenging task because it requires multivariate analysis. In this respect, this study presents the development of an investment decision support tool for assessing the optimal size of an RoR scheme considering technical, environmental, and cost constraints. The net present value (NPV) from a project perspective is used as the objective function for supporting the investment decision. The tool has been tested by applying it to an actual RoR project recently proposed in Colombia. The obtained results show that the optimum point in financial terms does not match the flow that maximizes energy generation from the river's available flow. For the case study, the flow that maximizes energy corresponds to a value of 5.1 m3/s, whereas a flow of 2.1 m3/s maximizes the investor's NPV. Finally, a sensitivity analysis is performed to determine the NPV as a function of changes in the debt rate, electricity prices, and CapEx. Even in the worst-case scenario, the optimal size represents a positive business case with an NPV of USD 2.2 million and an internal rate of return (IRR) 1.5 times higher than the discount rate.
Keywords: small hydropower, renewable energy, RoR schemes, optimal sizing, financial analysis
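The idea of using NPV as the sizing objective can be illustrated with a rough screening model such as the one below; the hydrology, head, price, CapEx and discount-rate figures are hypothetical placeholders, not values from the case study.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    years = np.arange(len(cash_flows))
    return np.sum(np.asarray(cash_flows) / (1 + rate) ** years)

def project_npv(design_flow, daily_flows, head=120.0, price=60.0,
                capex_per_m3s=2.0e6, opex_ratio=0.03, life=30, rate=0.10):
    """Rough NPV of an RoR scheme as a function of the design flow (m3/s)."""
    usable = np.minimum(daily_flows, design_flow)        # river flow above design flow spills
    energy_mwh = 9.81 * 0.9 * head * usable.mean() * 8760 / 1000.0
    capex = capex_per_m3s * design_flow
    yearly = energy_mwh * price - opex_ratio * capex
    return npv([-capex] + [yearly] * life, rate)

flows = np.random.default_rng(1).gamma(2.0, 2.0, 365)    # synthetic daily hydrology
candidates = np.linspace(1.0, 8.0, 15)
best = max(candidates, key=lambda q: project_npv(q, flows))
print(f"NPV-optimal design flow is roughly {best:.1f} m3/s")
```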
2924 Changes in the Research of Crisis
Authors: M. Mikusova
Abstract:
Owing to the interdisciplinary nature of crises, the position of researchers in this field is rather difficult. Very often the traditional methods of research cannot be applied there. The article is aimed at the changes in crisis research. It describes the substance of the individual changes and emphasizes the shift in research approaches to the crisis.
Keywords: crisis, change, research
2923 Effect of Strain and Storage Period on Some Qualitative and Quantitative Traits of Table Eggs
Authors: Hani N. Hermiz, Sukar H. Ali
Abstract:
This study examines the effect of strain, storage period and their interaction on some quantitative and qualitative traits and on the percentages of egg components in eggs collected at the start of production (at 24 weeks of age). Eggs were divided into three storage periods (1, 7 and 14 days) at refrigerator temperature (5-7 °C). Fifty-seven eggs were obtained randomly from each strain, ISA Brown and Lohman White. The General Linear Model within the SAS program was used to analyze the collected data, and correlations between the studied traits were calculated for each strain. Average egg weight (EW), Haugh unit (HU), yolk index (YI), yolk % (YP), albumin % (AP) and yolk-to-albumin ratio (YAR) were 56.629 g, 87.968%, 0.493, 22.13%, 67.74% and 32.76, respectively. Eggs produced by ISA Brown surpassed those produced by Lohman White significantly (P<0.01) in EW (59.337 vs. 53.921 g) and AP (68.46 vs. 67.02%), while Lohman White surpassed ISA Brown significantly (P<0.01) in HU (91.998 vs. 83.939%), YI (0.498 vs. 0.487), YP (22.83 vs. 21.44%) and YAR (34.12 vs. 31.40). Storage period did not have any significant effect on EW and YI. Increasing the storage period caused a significant (P<0.01) decrease in HU. A non-significant increase in YP and a significant decrease in AP with increasing storage period caused a significant increase in YAR. The interaction between strain and storage period affected EW, HU and YI significantly (P<0.01), while its effect on YP, AP and YAR was not significant. The highest significant (P<0.01) correlation was recorded between YP and YAR (0.99) in both strains, while the lowest values, -0.97 and -0.95, were between AP and YAR in ISA Brown and Lohman White, respectively. In conclusion, increasing the storage period caused only a slight decrease in egg weight, which enables the consumer to store eggs without damage. Because albumin is used in many food industries, it is important to focus on its weight. The correlations between some of the studied traits were significant, which means that selection for any one trait will improve other traits.
Keywords: Quality, Quantity, Storage period, Strain, Table egg
2922 Numerical Modeling of Determination of in situ Rock Mass Deformation Modulus Using the Plate Load Test
Authors: A. Khodabakhshi, A. Mortazavi
Abstract:
Accurate determination of the rock mass deformation modulus, an important design parameter, is one of the most controversial issues in many engineering projects. A 3D numerical model of the standard plate load test (PLT) using the FLAC3D code was developed to investigate the mechanism governing the test process. Five objectives were the focus of this study. The first goal was to employ 3D modeling in the interpretation of the PLT conducted at the Bazoft dam site, Iran. The second objective was to investigate the effect of the depth at which displacements are measured below the loading plates on the calculated moduli. The magnitude of the rock mass deformation modulus calculated from the PLT depends on anchor depth, and in practice this may be a cause of error in the selection of a realistic deformation modulus for the rock mass. The third goal of the study was to investigate the effect of the testing plate diameter on the calculated modulus. Another objective was to compare the moduli calculated from the ISRM formula, from numerical modeling, and from the actual PLT carried out at the right abutment of the Bazoft dam site. Finally, the effect of plastic strains on the calculated moduli in each of the loading-unloading cycles for three loading plates was investigated. The geometry, material properties, and boundary conditions of the constructed 3D model were selected based on the in-situ conditions of the PLT at the Bazoft dam site. A good agreement was achieved between the numerical model results and the field test results.
Keywords: Deformation modulus, numerical model, plate loading test, rock mass.
2921 Sharing Tourism Experience through Social Media: Consumer's Behavioral Intention for Destination Choice
Authors: Mohammad Tipu Sultan, Farzana Sharmin, Ke Xue
Abstract:
Social media create a better opportunity for travelers to search for travel information, select destinations and share their personal travel experiences. This study proposes a framework which describes the relationships between social media and the impact of positive or negative tourism experience sharing on destination choice. To identify new trends in travelers' behavioral intention, we propose an extended theoretical model based on the Theory of Reasoned Action (TRA). We conducted a survey to analyze the influence of three external factors, subjective norms, and positive and negative experience sharing on travel destination choice. Structural questionnaire analysis was employed to test the proposed research hypotheses on the relationship between consumer influences and the experience shared on social media. The results of the study confirm that sharing positive experiences has a positive effect on destination choice, while negative experiences decrease the likelihood of a destination being selected. The results also indicate that attitudes and subjective norms are passively influenced by shared experience. Moreover, we find that sharing live pictures of travel experiences through social media helps to reduce negative perceptions of the destination brand. This research contributes a new determining factor to the field, and the findings could be used by destination management organizations (DMOs) to enhance their tourism promotion through social media.
Keywords: Destination choice, tourism experience sharing, Theory of Reasoned Action, social media.
2920 Health Monitoring of Power Transformers by Dissolved Gas Analysis using Regression Method and Study the Effect of Filtration on Oil
Authors: Anjali Chatterjee, Nirmal Kumar Roy
Abstract:
Economically, transformers constitute one of the largest investments in a power system. For this reason, transformer condition assessment and management is a high-priority task. If a transformer fails, it has a significant negative impact on revenue and service reliability. Monitoring the state of health of power transformers has traditionally been carried out using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and is a universal practice today, which started somewhere in the 1960s. Failure can occur in a transformer for different reasons. Some failures can be limited or prevented by maintenance. Oil filtration is one of the methods to remove the dissolved gases and prevent the deterioration of the oil. In this paper, we analyze the DGA data by regression methods and predict the future gas concentrations in the oil. We present a comparative study of different traditional regression methods and the errors generated by their predictions. With the help of these data, we can deduce the health of the transformer by identifying the type of fault that has occurred or will occur in the future. Additionally, the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.
Keywords: Power Transformers, Dissolved Gas Analysis, Regression method, Filtration, oil.
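The regression-and-extrapolation idea described above can be sketched very simply; the gas history below is hypothetical, uses a single gas and a linear trend, whereas real DGA assessment combines several gases and standard limit values.

```python
import numpy as np

# Hypothetical dissolved-gas history: sampling day vs. acetylene (C2H2) in ppm.
days = np.array([0, 90, 180, 270, 360, 450])
c2h2 = np.array([1.0, 1.4, 2.1, 2.9, 3.8, 5.0])

# Fit a simple linear trend and extrapolate one year beyond the last sample.
slope, intercept = np.polyfit(days, c2h2, 1)
future_day = 450 + 365
print(f"Predicted C2H2 at day {future_day}: {slope * future_day + intercept:.1f} ppm")
```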
2919 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider
Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón
Abstract:
The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a certain number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject the incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0 and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks. They are realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which integrates into the global control system of the ALICE experiment.
Keywords: AD0, ALICE, DCS, LHC.
2918 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation
Authors: P. Luangpaiboon, S. Boonhao
Abstract:
This study formulates a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem for a grease position process in the electronics industry under a noisy environment. The proposed model attempts to maximize dual process responses, the mean of parts between failures on the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator from the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and its advantages are also discussed. Numerical results demonstrate that the hybridization is superior to the use of the conventional method alone. In this study, the mean of parts between failures on the left and right lines improved by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered to be confidential.
Keywords: Grease Position Process, Multi-response Surfaces, Modified Simplex Method, Hunting Search Method, Desirability Function Approach.
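The combination of dual responses into a single objective is typically handled with a desirability function, as the keywords above indicate. The sketch below maximizes the overall desirability of two invented response surfaces with a plain Nelder-Mead simplex search; the hunting-search hybrid would, roughly speaking, restart such simplex searches from stochastically generated points. All response functions and bounds are hypothetical.

```python
from scipy.optimize import minimize

def mean_parts_between_failures(x):
    """Hypothetical response surfaces for the left and right lines."""
    left = 100 - (x[0] - 2.0) ** 2 - 0.5 * (x[1] - 1.0) ** 2
    right = 95 - 0.8 * (x[0] - 1.5) ** 2 - (x[1] - 1.5) ** 2
    return left, right

def overall_desirability(x, lo=60.0, hi=100.0):
    """Geometric mean of larger-the-better desirabilities for the two responses."""
    ds = [min(max((y - lo) / (hi - lo), 0.0), 1.0) for y in mean_parts_between_failures(x)]
    return (ds[0] * ds[1]) ** 0.5

# Plain simplex (Nelder-Mead) search on the negated overall desirability.
res = minimize(lambda x: -overall_desirability(x), x0=[0.0, 0.0], method="Nelder-Mead")
print("Best parameter levels:", res.x, "D =", overall_desirability(res.x))
```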
2917 Optimization and GIS-Based Intelligent Decision Support System for Urban Transportation Systems Analysis
Authors: Mohamad K. Hasan, Hameed Al-Qaheri
Abstract:
Optimization plays an important role in most real-world applications, supporting decision makers in taking the right decisions regarding the strategic directions and operations of the systems they manage. Traffic management and traffic congestion are major problems for which most decision-making authorities of cities around the world are seeking solutions. This review paper gives a full description of the traffic problem as part of the transportation planning process and presents a framework for urban transportation system analysis whose core is a transportation network equilibrium model based on optimization techniques, which can also be used to evaluate an alternative solution, or a combination of alternative solutions, to traffic congestion. Different transportation network equilibrium models are reviewed, from the sequential approach to the multiclass model combining trip generation, trip distribution, modal split, trip assignment and departure time. A GIS-based intelligent decision support system framework for urban transportation system analysis is suggested for implementation, in which the selection of optimized alternative solutions, single or in packages, is based on an intelligent agent rather than a human being; this would reduce time and cost and eliminate the difficulty a human being faces in finding the best solution to the traffic congestion problem.
Keywords: Multiclass simultaneous transportation equilibrium models, transportation planning, urban transportation systems analysis, intelligent decision support system.
2916 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches
Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez
Abstract:
Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. Simulation methods are computing-intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) could be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, fitting of the resulting moments of the LSF to a probability density function (PDF) is needed. In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option to determine reliability levels when no closed form of the LSF is available, or if numerical schemes, the FEM or computational mechanics are employed.
Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.
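A minimal sketch of the moment-based part of such an approach is given below: Rosenblueth's 2^n point estimate supplies the mean and standard deviation of the limit state function, which are then fitted to a normal distribution to obtain a reliability index, in the spirit of FORM. The simple resistance-minus-load limit state and its statistics are hypothetical stand-ins for the FEM-based bridge response.

```python
import numpy as np
from itertools import product
from scipy.stats import norm

def limit_state(r, s):
    """Illustrative limit state: resistance minus load effect (g < 0 means failure).
    In the bridge application this would be replaced by a numerical (FEM) response."""
    return r - s

def rosenblueth_moments(means, stds, g):
    """Rosenblueth's 2^n point estimate of the mean and standard deviation of g(X),
    assuming uncorrelated, symmetric random variables."""
    evaluations = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        x = [m + s * sign for m, s, sign in zip(means, stds, signs)]
        evaluations.append(g(*x))
    evaluations = np.asarray(evaluations)
    return evaluations.mean(), evaluations.std(ddof=0)

mu_g, sigma_g = rosenblueth_moments(means=[600.0, 400.0], stds=[60.0, 80.0], g=limit_state)
beta = mu_g / sigma_g                       # reliability index under the normal fit
print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.2e}")
```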
2915 Generating State-Based Testing Models for Object-Oriented Framework Interface Classes
Authors: Jehad Al Dallal, Paul Sorenson
Abstract:
An application framework provides a reusable design and implementation for a family of software systems. Application developers extend the framework to build their particular applications using hooks. Hooks are the places identified to show how to use and customize the framework. Hooks define the Framework Interface Classes (FICs) and the specifications of their methods. As part of the development life cycle, it is required to test the implementations of the FICs. Building a testing model to express the behavior of a class is an essential step for the generation of class-based test cases. The testing model has to be consistent with the specifications provided for the hooks. State-based models consisting of states and transitions are testing models well suited to object-oriented software. Typically, hand-construction of a state-based model of a class's behavior is expensive and error-prone, and may result in a model that is inconsistent with the specifications of the class methods, which misleads verification results. In this paper, a technique is introduced to automatically synthesize a state-based testing model for FICs using the specifications provided for the hooks. A tool that supports the proposed technique is introduced.
Keywords: Framework interface classes, hooks, state-based testing, testing model.
2914 Basic Research on Applying Temporary Work Engineering at the Design Phase
Authors: Jin Woong Lee, Kyuman Cho, Taehoon Kim
Abstract:
The application of constructability is increasingly required not only in the construction phase but throughout the whole project. In particular, the proper application of construction experience and knowledge during the design phase makes it possible to minimize inefficiencies such as design changes and to improve constructability during the construction phase. In order to apply this knowledge effectively, engineering efforts should be implemented as the design progresses. Among many engineering technologies, engineering for temporary works, including facilities, equipment, and other related construction methods, is important for improving constructability. Therefore, as basic research, this study investigates the applicability of temporary work engineering during the design phase in the building construction industry. The results show that the application of temporary work engineering has a greater impact on construction cost reduction and constructability improvement. In contrast to the existing design-bid-build method, the turn-key and CM (construction management) procurement methods currently being implemented in Korea are expected to have a significant impact on the direction of temporary work engineering. To introduce temporary work engineering, training by expert and professional organizations is first required, and the lack of client awareness should be addressed as a priority. The results of this study are expected to be useful as reference material for the development of more effective temporary work engineering tasks and work processes in the future.
Keywords: Temporary work engineering, design phase, constructability, building construction.
2913 Spectral Investigation for Boundary Layer Flow over a Permeable Wall in the Presence of Transverse Magnetic Field
Authors: Saeed Sarabadan, Mehran Nikarya, Kouroah Parand
Abstract:
The magnetohydrodynamic (MHD) Falkner-Skan equations appear in the study of laminar boundary-layer flow over a wedge in the presence of a transverse magnetic field. The partial differential equations of boundary-layer problems in the presence of a transverse magnetic field are reduced to the MHD Falkner-Skan equation, a nonlinear ordinary differential equation, by similarity solution methods. In this paper, we solve this equation via a spectral collocation method based on Bessel functions of the first kind. In this approach, we reduce the solution of the nonlinear MHD Falkner-Skan equation to the solution of a system of nonlinear algebraic equations. The resulting system is then solved by Newton's method. We discuss the obtained solution by studying the behavior of the boundary-layer flow in terms of skin friction and velocity for various magnetic field strengths and wedge angles. Finally, the results are compared with other methods mentioned in the literature. We can conclude that the presented method has better accuracy than the others.
Keywords: MHD Falkner-Skan, nonlinear ODE, spectral collocation method, Bessel functions, skin friction, velocity.
2912 Factory Virtual Environment Development for Augmented and Virtual Reality
Authors: M. Gregor, J. Polcar, P. Horejsi, M. Simon
Abstract:
Machine visualization is an area of interest with fast and progressive development. We present a method of machine visualization which will be applicable in real industrial conditions according to current needs and demands. Real factory data were obtained in a newly built research plant. The methods described in this paper were validated on a case study. Input data were processed and the virtual environment was created. The environment contains information about dimensions, structure, disposition, and function. The hardware was enhanced by modular machines, prototypes, and accessories. We added functionalities and machines to the virtual environment. The user is able to interact with objects such as testing and cutting machines and can operate and move them. The proposed design consists of an environment with two degrees of freedom of movement. Users are in touch with items in the virtual world which are embedded into the real surroundings. This paper describes the development of the virtual environment. We compared and tested various options for factory layout virtualization and visualization. We analyzed the possibilities of using a 3D scanner in the layout-acquisition process, and we also analyzed various virtual reality hardware visualization methods, such as stereoscopic (CAVE) projection, head-mounted display (HMD) and augmented reality (AR) projection provided by see-through glasses.
Keywords: Augmented reality, spatial scanner, virtual environment, virtual reality.
2911 Real-time Performance Study of EPA Periodic Data Transmission
Authors: Liu Ning, Zhong Chongquan, Teng Hongfei
Abstract:
EPA (Ethernet for Plant Automation) resolves the non-determinism of standard Ethernet and accomplishes real-time communication by means of a micro-segment topology and a deterministic scheduling mechanism. This paper studies the real-time performance of EPA periodic data transmission from theoretical and experimental perspectives. By analyzing the information transmission characteristics and the EPA deterministic scheduling mechanism, five indicators that can be used to specify the real-time performance of EPA periodic data transmission are presented and investigated: delivery time, time synchronization accuracy, data-sending time offset accuracy, utilization percentage of the configured timeslice, and non-RTE bandwidth. On this basis, the test principles and test methods for these indicators are studied and some formulas for the real-time performance of an EPA system are derived. Furthermore, an experimental platform is developed to test the indicators of EPA periodic data transmission in a micro-segment. Based on the analysis and the experiment, methods to improve the real-time performance of EPA periodic data transmission are proposed, including optimizing the network structure, studying a self-adaptive timeslice adjustment method, and providing data-sending time offset accuracy for configuration.
Keywords: EPA system, Industrial Ethernet, Periodic data, Real-time performance
2910 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models
Authors: Ramin Vafadary, Maryam Khanbaghi
Abstract:
Forecasting electricity load is important for various purposes such as planning, operation and control. Forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, LSTM networks, Fbprophet and TensorFlow Probability. The performance of each method is evaluated using the forecasting accuracy criteria, namely the Mean Absolute Error and the Root Mean Square Error. The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust. The obtained results show that by bagging multiple time-series forecasts we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
Keywords: Bagging, Fbprophet, Holt-Winters, LSTM, Load Forecast, SARIMA, tensorflow probability, time series.
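A minimal version of the SARIMA workflow described above is sketched below; it trains on synthetic hourly load with a daily cycle (standing in for the NREL data), holds out the last 24 hours, and reports MAE and RMSE. The model orders and the data are assumptions chosen for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic hourly load with a daily cycle stands in for the NREL data set.
rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-01", periods=24 * 60, freq="h")
load = 2.0 + 0.8 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 0.1, len(hours))
series = pd.Series(load, index=hours)

train, test = series[:-24], series[-24:]                 # hold out the last 24 hours
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24)).fit(disp=False)
forecast = model.forecast(steps=24)

mae = mean_absolute_error(test, forecast)
rmse = np.sqrt(mean_squared_error(test, forecast))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```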
2909 Analysis and Application of Indirect Minimum Jerk Method for Higher Order Differential Equation in Dynamics Optimization Systems
Authors: V. Tawiwat, T. Amornthep, P. Pnop
Abstract:
Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, many researchers focus solely on either minimum energy consumption or the minimum jerk trajectory. This paper considers the indirect minimum jerk method for higher-order differential equations in dynamics optimization and proposes a simple yet very interesting indirect jerk approach for designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of indirect jerks are found using dynamic optimization methods together with numerical approximation. The case considered here involves the linear equations of a simple system consisting of masses, springs and damping; the system uses two masses connected by springs. The boundary conditions are defined by a fixed end time and end point. The higher-order differential equation is solved by Galerkin's weighted residual method. As a result, the sixth-order differential formulation shows the faster solving time.
Keywords: Optimization, Dynamic, Linear Systems, Jerks.
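To make the jerk cost functional concrete, the sketch below evaluates the classical rest-to-rest minimum-jerk profile between two points with a fixed end time and integrates its squared jerk; this standard single-mass example illustrates the cost being minimized, not the paper's two-mass Galerkin formulation.

```python
import numpy as np

def min_jerk_trajectory(x0, xf, T, n=200):
    """Classical minimum-jerk rest-to-rest profile:
    x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T."""
    t = np.linspace(0.0, T, n)
    s = t / T
    x = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    jerk = (xf - x0) * (60 - 360 * s + 360 * s**2) / T**3
    cost = np.sum(jerk**2) * (t[1] - t[0])   # approximate integral of squared jerk
    return t, x, cost

t, x, cost = min_jerk_trajectory(0.0, 1.0, 2.0)
print(f"Jerk cost over the move: {cost:.3f}")
```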
2908 DWT-SATS Based Detection of Image Region Cloning
Authors: Michael Zimba
Abstract:
A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, inclusive of those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency subband of the DWT of the suspicious image, thereby leaving valuable information in the other three subbands, the proposed algorithm simultaneously extracts features from all four subbands. The extracted features are not only a more accurate representation of image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared to the related CMIF detection algorithms. The experimental results show high detection rates.
Keywords: Affine Transformation, Discrete Wavelet Transform, Radix Sort, SATS.
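The feature-extraction and sorting stages can be illustrated with the rough sketch below: a one-level Haar DWT, a simple four-subband energy feature per block (a stand-in for the paper's feature vector and PCA-EVD step), and a lexicographic sort that places matching blocks next to each other. The image, block size and feature definition are assumptions made for the example, and the PyWavelets package is assumed to be available.

```python
import numpy as np
import pywt

def block_features(image, block=8):
    """One-level DWT; each block position is described by the mean absolute value
    of its LL, LH, HL and HH subband coefficients (a simplified feature vector)."""
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), "haar")
    feats, positions = [], []
    rows, cols = ll.shape
    for i in range(rows - block + 1):
        for j in range(cols - block + 1):
            sub = [band[i:i + block, j:j + block] for band in (ll, lh, hl, hh)]
            feats.append([np.mean(np.abs(b)) for b in sub])
            positions.append((i, j))
    return np.array(feats), positions

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
img[40:56, 40:56] = img[8:24, 8:24]            # plant a duplicated 16x16 region
feats, pos = block_features(img)

order = np.lexsort(feats.T[::-1])              # sort blocks by their feature vectors
gaps = np.linalg.norm(np.diff(feats[order], axis=0), axis=1)
hit = int(np.argmin(gaps))                     # most similar pair of adjacent blocks
print("Candidate duplicated blocks at:", pos[order[hit]], pos[order[hit + 1]])
```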
2907 Comparison of Methods for the Detection of Biofilm Formation in Yeast and Lactic Acid Bacteria Species Isolated from Dairy Products
Authors: Goksen Arik, Mihriban Korukluoglu
Abstract:
Lactic acid bacteria (LAB) and some yeast species are common microorganisms found in dairy products, and most of them are responsible for the fermentation of foods. Such cultures are isolated and used as starter cultures in the food industry because they provide standardisation of the final product during food processing. The choice of starter culture is the most important step in the production of fermented food. Isolated LAB and yeast cultures which have the ability to create a biofilm layer can be preferred as starters in the food industry. Biofilm formation could be beneficial for extending the usage period of microorganisms as starters. On the other hand, it is an undesirable property in pathogens, since the biofilm structure allows a microorganism to become more resistant to stress conditions such as the presence of antibiotics. It is thought that this resistance mechanism could be turned into an advantage by promoting the effective microorganisms which are used in the food industry as starter cultures and which also have the potential to stimulate the gastrointestinal system. The development of a biofilm layer is observed in some LAB and yeast strains. This resistance could make LAB and yeast strains dominant in the microflora of the human gastrointestinal system; thus, competition against pathogenic microorganisms can be achieved more easily. Based on this, in the study, 10 LAB and 10 yeast strains were isolated from various dairy products, such as cheese, yoghurt, kefir, and cream. Samples were obtained from farmer markets and bazaars in Bursa, Turkey. As part of this research, all isolated strains were identified, and their ability to form biofilms was detected with two different methods and compared. The first goal of this research was to determine whether the isolates have the potential for biofilm production, and the second was to compare the validity of two different methods, known as the "tube method" and the "96-well plate-based method". This study may offer insight into developing a point of view about biofilm formation and its beneficial properties in LAB and yeast cultures used as starters in the food industry.
Keywords: Biofilm, dairy products, lactic acid bacteria, yeast.
2906 Teachers' Continuance Intention Towards Using Madrasati Platform: A Conceptual Framework
Authors: Fiasal Assiri, Joanna Wincenciak, David Morrison-Love
Abstract:
With the rapid spread of the COVID-19 pandemic, the Saudi government suspended students from attending school to combat the outbreak. As e-learning had not previously been applied in schools, online teaching and learning were introduced in Saudi Arabia through a new platform called 'Madrasati'. The Decomposed Theory of Planned Behaviour (DTPB) is used to examine individuals' behavioural intention in many fields. Nevertheless, the factors that affect teachers' continuance intention towards the Madrasati platform have not yet been investigated. The purpose of this paper is to present a conceptual model in light of the DTPB. To enhance the predictability of the model, the study incorporates other variables, including learning content quality and interactivity as sub-factors under perceived usefulness, student and government influences under subjective norms, and technical support and prior e-learning experience under perceived behavioural control. The model will be further validated using a mixed-methods approach. Such findings would help administrators and stakeholders to understand teachers' needs and develop new methods that might encourage teachers to continue using Madrasati effectively in their teaching.
Keywords: Madrasati, Decomposed Theory of Planned Behaviour, continuance intention, attitude, subjective norms, perceived behavioural control.