Search results for: Taguchi techniques and engineering application
13198 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations
Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri
Abstract:
Distributed denial of service (DDoS) attack is currently one of the top-rated cyber threats. It exhausts victim server resources, such as bandwidth and buffer size, preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack; we discuss its relevance to features such as the inter-arrival time or rate of arrival of the attacking clients accessing the server. We further analyse the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to build clusters of similar features. Finally, the approach applies mathematical correlation and the normal probability distribution to the clusters and analyses their behaviour to detect a DDoS attack. These systems not only interconnect small devices exchanging personal data, but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.
Keywords: application layer attack, bandwidth, buffer size, correlation, DDoS, flooding, intrusion prevention, normal probability distribution
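The abstract's core idea can be illustrated with a minimal sketch (not the authors' exact model): compute request inter-arrival times, then flag a possible flood when they collapse toward a uniform, high-rate pattern. The thresholds and timestamps below are hypothetical, and a plain Pearson correlation stands in for the paper's correlation step.

```python
# Illustrative sketch, not the authors' model: flooding bots tend to arrive
# at a near-constant, high rate, so their inter-arrival gaps have a tiny
# mean and variance compared with legitimate clients.
import statistics

def inter_arrival_times(timestamps):
    """Differences between consecutive request timestamps (seconds)."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def looks_like_flood(gaps, max_mean=0.2, max_stdev=0.05):
    """Hypothetical thresholds: short AND very regular gaps suggest a flood."""
    return statistics.mean(gaps) < max_mean and statistics.pstdev(gaps) < max_stdev

# Legitimate clients arrive irregularly; attack requests arrive every 0.1 s.
normal = inter_arrival_times([0.0, 1.2, 1.9, 4.0, 4.4, 6.1])
attack = inter_arrival_times([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
```

In the paper's pipeline the clustering step (self-organizing map) would group similar arrival profiles before such statistics are applied per cluster.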
Procedia PDF Downloads 228
13197 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks
Authors: Ashkan Ebadi, Adam Krzyzak
Abstract:
Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find their required products or services (e.g. books, music) by analyzing and aggregating other users’ activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate users’ decision-making processes. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users’ reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely users clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested extensively, and the results confirm the high performance of the system.
Keywords: tourism, hotel recommender system, hybrid, implicit features
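The matrix-factorization sub-system mentioned above can be sketched minimally: factor a small user-by-hotel rating matrix R ≈ P·Qᵀ with stochastic gradient descent, fitting only the observed entries. The ratings, latent dimension, learning rate, and regularization below are illustrative, not the paper's settings.

```python
# Minimal matrix-factorization sketch (illustrative parameters only).
import random

R = [[5, 3, 0, 1],
     [4, 0, 0, 1],
     [1, 1, 0, 5],
     [0, 1, 5, 4]]          # rows: users, cols: hotels, 0 = unknown rating

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    P = [[rng.random() for _ in range(k)] for _ in range(n)]
    Q = [[rng.random() for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if R[i][j] == 0:
                    continue                     # only fit observed ratings
                err = R[i][j] - sum(P[i][f] * Q[j][f] for f in range(k))
                for f in range(k):
                    P[i][f] += lr * (err * Q[j][f] - reg * P[i][f])
                    Q[j][f] += lr * (err * P[i][f] - reg * Q[j][f])
    return P, Q

P, Q = factorize(R)

def predict(i, j):
    """Predicted rating of hotel j for user i."""
    return sum(P[i][f] * Q[j][f] for f in range(2))
```

The zero entries, once predicted, become candidate recommendations; the hybrid system in the paper would combine such scores with review-derived implicit features.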
Procedia PDF Downloads 277
13196 A Qualitative Study of Experienced Early Childhood Teachers Resolving Workplace Challenges with Character Strengths
Authors: Michael J. Haslip
Abstract:
Character strength application improves performance and well-being in adults across industries, but the potential impact of character strength training among early childhood educators is mostly unknown. To explore how character strengths are applied by early childhood educators at work, a qualitative study was completed alongside professional development provided to a group of in-service teachers of children ages 0-5 in Philadelphia, Pennsylvania, United States. Study participants (n=17) were all female. The majority of participants were non-white, in full-time lead or assistant teacher roles, had at least ten years of experience and a bachelor’s degree. Teachers were attending professional development weekly for 2 hours over a 10-week period on the topic of social and emotional learning and child guidance. Related to this training were modules and sessions on identifying a teacher’s character strength profile using the Values in Action classification of 24 strengths (e.g., humility, perseverance) that have a scientific basis. Teachers were then asked to apply their character strengths to help resolve current workplace challenges. This study identifies which character strengths the teachers reported using most frequently and the nature of the workplace challenges being resolved in this context. The study also reports how difficult these challenges were to the teachers and their success rate at resolving workplace challenges using a character strength application plan. The study also documents how teachers’ own use of character strengths relates to their modeling of these same traits (e.g., kindness, teamwork) for children, especially when the nature of the workplace challenge directly involves the children, such as when addressing issues of classroom management and behavior. 
Data were collected from action plans (reflective templates) in which teachers described the workplace challenge they were facing, the character strengths they used to address it, their plan for applying those strengths, and the subsequent results. Content analysis and thematic analysis were used to investigate the research questions, using approaches that included classifying, connecting, describing, and interpreting the data reported by educators. Findings reveal that teachers most frequently use kindness, leadership, fairness, hope, and love to address a range of workplace challenges, ranging from low to high difficulty, involving children, coworkers, parents, and self-management. Teachers reported a 71% success rate at fully or mostly resolving workplace challenges using the action plan method introduced during professional development. Teachers matched character strengths to challenges in different ways, with certain strengths used mostly when the challenge involved children (love, forgiveness), others mostly with adults (bravery, teamwork), and others universally (leadership, kindness). Furthermore, teachers' application of character strengths at work involved directly modeling character for children in 31% of reported cases. The application of character strengths among early childhood educators may play a significant role in improving teacher well-being, reducing job stress, and improving efforts to model character for young children.
Keywords: character strengths, positive psychology, professional development, social-emotional learning
Procedia PDF Downloads 113
13195 [Keynote Talk]: sEMG Interface Design for Locomotion Identification
Authors: Rohit Gupta, Ravinder Agarwal
Abstract:
Surface electromyographic (sEMG) signals have the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. Further, the sEMG signal is recorded from two lower limb muscles for three locomotions, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work demonstrates the suitability of the proposed feature selection algorithm for locomotion recognition compared to other existing feature vectors. The SVM classifier is found to be the best-performing classifier among those compared, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface together with the proposed feature selection algorithm.
Keywords: classifiers, feature selection, locomotion, sEMG
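One common way to score features per class, in the spirit of the class-dependent statistical approach described above (though not necessarily the authors' exact method), is a Fisher-style ratio of between-class to within-class variance. The toy feature values below are illustrative, not recorded sEMG data.

```python
# Hedged sketch of class-dependent feature scoring: a Fisher-style ratio.
import statistics

def fisher_score(feature_values, labels):
    """Between-class over within-class variance ratio for one feature."""
    classes = sorted(set(labels))
    grand_mean = statistics.mean(feature_values)
    between, within = 0.0, 0.0
    for c in classes:
        vals = [v for v, l in zip(feature_values, labels) if l == c]
        class_mean = statistics.mean(vals)
        between += len(vals) * (class_mean - grand_mean) ** 2
        within += sum((v - class_mean) ** 2 for v in vals)
    return between / within if within else float("inf")

# Two toy features over the three locomotion classes (PW, SA, SD):
labels = ["PW", "PW", "SA", "SA", "SD", "SD"]
f_discriminative = [0.1, 0.2, 1.0, 1.1, 2.0, 2.1]  # separates the classes
f_noisy = [0.9, 2.0, 0.2, 1.9, 1.0, 0.3]           # mixed across classes

scores = {"discriminative": fisher_score(f_discriminative, labels),
          "noisy": fisher_score(f_noisy, labels)}
```

Ranking features by such a score and keeping the top few is one way a feature-selection stage can feed a classifier such as the SVM reported here.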
Procedia PDF Downloads 296
13194 Comparative Study of Pixel and Object-Based Image Classification Techniques for Extraction of Land Use/Land Cover Information
Authors: Mahesh Kumar Jat, Manisha Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use/land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling Earth surface-atmosphere interaction through techniques like surface energy balance (SEB) modelling, have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contribution of LULC change to changes in these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, aligned with the respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum temperature decreases at 0.007 °C/year, while the minimum temperature increases at 0.02 °C/year. The average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux correlate positively with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and is a plausible driver of the observed climatic change. SEB modelling is therefore helpful for understanding LULC change and its impact on climate.
Keywords: remote sensing, GIS, object based, classification
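The per-year trends quoted above (e.g. precipitation rising at 20 mm/year) are typically estimated as an ordinary least-squares slope over annual values. A minimal sketch, on a synthetic series constructed to carry an exact 20 mm/year trend:

```python
# OLS trend estimation sketch; the precipitation series is synthetic,
# chosen only to illustrate the fit, not IMD data.
def ols_slope(years, values):
    """Least-squares trend: slope = cov(x, y) / var(x)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(years, values))
    var = sum((x - mx) ** 2 for x in years)
    return cov / var

years = list(range(2001, 2021))                    # the study period
precip = [600 + 20 * (y - 2001) for y in years]    # exact 20 mm/yr trend
trend = ols_slope(years, precip)                   # mm per year
```

The same slope computation applies to the temperature series; the correlation between flux changes and climatic variables is then taken over these annual values.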
Procedia PDF Downloads 137
13193 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low costs. Many scientists around the world use the advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques that are used in genome-wide gene expression and genome mutation analysis help scientists and physicians in understanding the pathophysiological mechanisms, in diagnoses and prognoses, and in choosing treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. The performance evaluation of the existing algorithms is carried out to determine the best approach.
In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
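Of the algorithms surveyed, K-means is the simplest to illustrate. A minimal 1-D sketch on toy "expression" values (real microarray data would be high-dimensional vectors; values and k are illustrative):

```python
# Minimal 1-D k-means sketch on toy expression values.
def kmeans_1d(values, k=2, iters=50):
    # Seed centroids by taking evenly spaced sorted values.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each value joins its nearest centroid.
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

expression = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]   # two obvious groups
centroids, clusters = kmeans_1d(expression)
```

The hybrid approach the abstract proposes would replace or augment this plain update loop with an optimization step to improve convergence behavior.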
Procedia PDF Downloads 329
13192 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to obtain enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, yet there is neither a reference method nor an estimation of the impact of these different techniques on the colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly their size distribution and their elementary composition. Ultrafiltration methods (frontal, tangential and centrifugal) have been considered, since they are widely used for the extraction of colloids in natural waters. To compare these methods, a ‘synthetic groundwater’ was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Total Organic Carbon analysis (TOC)) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more effectively, while others are easier or faster to use.
The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. Looking forward, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
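Two simple figures of merit are commonly used to compare pre-concentration methods quantitatively: the concentration factor (initial over final volume) and the recovery (colloidal mass retained over mass fed). The volumes and concentrations below are hypothetical examples, not the study's measurements.

```python
# Figures of merit for comparing pre-concentration methods (example values).
def concentration_factor(v_initial_ml, v_final_ml):
    """How many-fold the sample volume was reduced."""
    return v_initial_ml / v_final_ml

def recovery_pct(c_in_mg_l, v_in_ml, c_out_mg_l, v_out_ml):
    """Percent of colloidal mass retained in the concentrate."""
    return 100.0 * (c_out_mg_l * v_out_ml) / (c_in_mg_l * v_in_ml)

# e.g. 500 mL of synthetic groundwater ultrafiltered down to 10 mL:
cf = concentration_factor(500, 10)       # 50-fold concentration
rec = recovery_pct(2.0, 500, 90.0, 10)   # percent of colloidal mass retained
```

A high concentration factor with poor recovery (or a shifted FFF size distribution) would indicate that the method alters the colloidal phase it is meant to preserve.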
Procedia PDF Downloads 220
13191 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks
Abstract:
Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches for the prediction and compensation of springback in cold stamping include running Finite Element (FE) simulations and conducting experiments, which require forming process expertise and can be time-consuming and expensive for the design of cold stamping tools. Machine learning technologies have been proven and successfully applied in learning complex system behaviours from representative samples. These technologies show promising potential as supporting design tools for metal forming technologies. This study, for the first time, presents a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created based on the U-shape cold bending geometries and the corresponding FE simulation results. The dataset is then used to train the CNN surrogate model. The results show that the surrogate model can achieve near-indistinguishable full-field predictions in real time when compared with the FE simulation results. The application of CNNs for efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
Keywords: springback, cold stamping, convolutional neural networks, machine learning
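Training the CNN surrogate itself requires a deep-learning framework, but its core building block is just a discrete convolution. A minimal 2-D "valid" convolution in plain Python, with a hypothetical kernel and input (the real model would operate on full springback fields):

```python
# Minimal 2-D valid convolution, the core operation inside a CNN layer.
def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

edge_kernel = [[1, -1]]        # horizontal difference filter (illustrative)
field = [[0, 0, 1, 1],
         [0, 0, 1, 1]]         # toy "field" with a vertical edge
response = conv2d_valid(field, edge_kernel)
```

Stacking many such learned kernels, with nonlinearities between layers, is what lets a CNN map a bending geometry to a full springback field in one fast forward pass.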
Procedia PDF Downloads 154
13190 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances
Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim
Abstract:
This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical models and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are used for field waveform de-noising and feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges. The ranges, parameters, and weights are updated according to the field waveforms obtained. Current waveforms follow the same process as voltages to obtain waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform, owing to the different patterns of current variation they draw. In conclusion, PQ disturbances in voltage and current waveforms show different patterns of variation, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained under the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further study.
Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering
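The symmetrical-components idea the paper builds on is, in its classic phasor form, the Fortescue transform (the paper adapts it to the time domain): a balanced three-phase set has only a positive-sequence part, so nonzero negative- or zero-sequence components signal a disturbance. A sketch with complex phasors:

```python
# Fortescue symmetrical components of a three-phase set (phasor form).
import cmath

a = cmath.exp(2j * cmath.pi / 3)        # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    v0 = (va + vb + vc) / 3                 # zero sequence
    v1 = (va + a * vb + a * a * vc) / 3     # positive sequence
    v2 = (va + a * a * vb + a * vc) / 3     # negative sequence
    return v0, v1, v2

# Balanced positive-sequence set: Vb lags Va by 120 degrees, Vc by 240.
va = cmath.rect(1.0, 0.0)
vb = cmath.rect(1.0, -2 * cmath.pi / 3)
vc = cmath.rect(1.0, -4 * cmath.pi / 3)
v0, v1, v2 = symmetrical_components(va, vb, vc)
```

For this balanced input the zero- and negative-sequence magnitudes vanish and the positive sequence equals the phase magnitude; unbalance from a sag or fault shows up directly in |v2| and |v0|.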
Procedia PDF Downloads 191
13189 Hydrogen Sulfide Releasing Ibuprofen Derivative Can Protect Heart After Ischemia-Reperfusion
Authors: Virag Vass, Ilona Bereczki, Erzsebet Szabo, Nora Debreczeni, Aniko Borbas, Pal Herczegh, Arpad Tosaki
Abstract:
Hydrogen sulfide (H₂S) is a toxic gas, but it is produced by certain tissues in small quantities. According to earlier studies, ibuprofen and H₂S have a protective effect against heart tissue damage caused by ischemia-reperfusion. Recently, we have been investigating the effect of a new water-soluble H₂S-releasing ibuprofen molecule, administered after artificially generated ischemia-reperfusion, on isolated rat hearts. The H₂S-releasing property of the new ibuprofen derivative was investigated in vitro, in medium derived from heart endothelial cell isolation, at two concentrations. The ex vivo examinations were carried out on rat hearts. Rats were anesthetized with an intraperitoneal injection of ketamine, xylazine, and heparin. After thoracotomy, hearts were excised and placed into ice-cold perfusion buffer. Perfusion of hearts was conducted in Langendorff mode via the cannulated aorta. In our experiments, we studied the dose-effect of the H₂S-releasing molecule in Langendorff-perfused hearts by applying gradually increasing concentrations of the compound (0-20 µM). The H₂S-releasing ibuprofen derivative was applied before the ischemia for 10 minutes. H₂S concentration was measured from the coronary effluent solution with an H₂S-detecting electrochemical sensor. The 10 µM concentration was chosen for further experiments, in which treatment with this solution occurred after the ischemia. The release of H₂S is mediated by hydrolyzing enzymes present in heart endothelial cells. The protective effect of the new H₂S-releasing ibuprofen molecule can be confirmed from the infarct sizes of hearts using the triphenyl-tetrazolium chloride (TTC) staining method. Furthermore, we aimed to define the effect of the H₂S-releasing ibuprofen derivative on autophagic and apoptotic processes in damaged hearts by investigating the molecular markers of these events with western blotting and immunohistochemistry techniques.
Our further studies will include the examination of LC3-I/II, p62, Beclin-1, caspase-3, and other apoptotic molecules. We hope that confirming the protective effect of the new H₂S-releasing ibuprofen molecule will open a new possibility for the development of more effective cardioprotective agents with fewer side effects. Acknowledgment: This study was supported by the grants of NKFIH-K-124719 and by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of GINOP-2.3.2-15-2016-00043.
Keywords: autophagy, hydrogen sulfide, ibuprofen, ischemia, reperfusion
Procedia PDF Downloads 146
13188 A Survey of Novel Opportunistic Routing Protocols in Mobile Ad Hoc Networks
Authors: R. Poonkuzhali, M. Y. Sanavullah, M. R. Gurupriya
Abstract:
Opportunistic routing is used where the network exhibits features such as dynamic topology changes and intermittent network connectivity. In Delay Tolerant Networks and Disruption Tolerant Networks, opportunistic forwarding techniques are widely used. The key idea of opportunistic routing is selecting forwarding nodes to forward data, with coordination among these nodes to avoid duplicate transmissions. This paper analyses the pros and cons of various opportunistic routing techniques used in MANETs.
Keywords: ETX, opportunistic routing, PSR, throughput
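The ETX metric in the keywords is the expected transmission count: for a link with forward delivery ratio df and reverse delivery ratio dr, ETX = 1/(df·dr), and a path's ETX is the sum over its links. A sketch with illustrative delivery ratios:

```python
# ETX (expected transmission count) path metric sketch.
def link_etx(df, dr):
    """Expected transmissions to deliver and acknowledge one packet."""
    return 1.0 / (df * dr)

def path_etx(links):
    """links: list of (df, dr) delivery-ratio pairs, one per hop."""
    return sum(link_etx(df, dr) for df, dr in links)

# Two good hops can beat one very lossy hop (illustrative values):
two_hop = path_etx([(0.9, 0.9), (0.9, 0.9)])
one_hop = path_etx([(0.5, 0.5)])
```

Opportunistic protocols use metrics like this to rank candidate forwarders: a neighbor whose path ETX to the destination is lower is a better relay for the overheard packet.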
Procedia PDF Downloads 498
13187 Normalized Enterprises Architectures: Portugal's Public Procurement System Application
Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso
Abstract:
The Normalized Systems Theory, which is designed to be applied to software architectures, provides a set of theorems, elements and rules with the purpose of enabling evolution in Information Systems, as well as ensuring that they are ready for change. In order to make that possible, this work’s solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. This application is achieved through the adaptation of the elements of the theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory’s encapsulation rules into architectural rules. This way, it is possible to create normalized enterprise architectures, thus fulfilling the needs and requirements of the business. This solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities. The aim is for every economic operator to have access to all public tenders, which are published on any of the six existing platforms, independently of where they are registered. In order to make this possible, we applied our solution to the construction of two different architectures, which are capable of fulfilling the requirements of the Portuguese government. One of those architectures, TO-BE A, has a Message Broker that performs the communication between the platforms. The other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also represent the AS-IS architecture that demonstrates the current behavior of the Public Procurement System.
Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures, regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.
Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms
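The trade-off between the two TO-BE variants can be made concrete with a standard counting argument (a sketch, not part of the paper's evaluation): with n platforms, direct pairwise integration needs n(n-1)/2 communication channels, while a message broker needs only n, one per platform.

```python
# Channel counts for direct (TO-BE B) vs broker-mediated (TO-BE A) topologies.
def direct_channels(n):
    """Every pair of platforms integrated point-to-point."""
    return n * (n - 1) // 2

def broker_channels(n):
    """One connection per platform to the central broker."""
    return n

n_platforms = 6  # the six Portuguese public procurement platforms
```

This is one reason broker-style architectures tend to score better on evolvability criteria: adding a seventh platform adds one channel instead of six.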
Procedia PDF Downloads 362
13186 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration
Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. 
Key results were the successful pre-aggregation of fines through the selection and use of high-molecular-weight anionic polyacrylamide flocculants, and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within the prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data on the biasing of geochemical signatures due to particle size distributions are presented, showing that, depending on the solids control and dewatering techniques used, dewatering can have an unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling are presented, contributing towards the advancement of coiled-tubing-based greenfields mineral exploration.
Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control
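The reported power-law particle size distribution can be used to estimate the fines load a separator must handle. A sketch assuming a Gates-Gaudin-Schuhmann form, F(d) = (d/d_max)^n, for the cumulative mass fraction passing size d; the exponent and top size below are hypothetical, not the study's measured fit:

```python
# Power-law (Gates-Gaudin-Schuhmann) PSD sketch with hypothetical parameters.
def fraction_passing(d_um, d_max_um, n):
    """Cumulative mass fraction finer than size d (microns)."""
    return (d_um / d_max_um) ** n

# e.g. assumed top size 2000 microns and distribution exponent 0.8:
fines = fraction_passing(40, 2000, 0.8)   # mass fraction finer than 40 microns
```

Under these assumed parameters, a few percent of the cuttings mass falls below the 40-micron cut point, which is the fraction the flocculation and centrifugation stages must capture to keep recycled fluid within the 400 ppm limit.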
Procedia PDF Downloads 253
13185 Compost Bioremediation of Oil Refinery Sludge by Using Different Manures in a Laboratory Condition
Authors: O. Ubani, H. I. Atagana, M. S. Thantsha
Abstract:
This study was conducted to measure the reduction in polycyclic aromatic hydrocarbon (PAH) content in oil sludge by co-composting the sludge with pig, cow, horse and poultry manures under laboratory conditions. Four kilograms of soil spiked with 800 g of oil sludge was co-composted separately with each manure in a ratio of 2:1 (w/w) spiked soil:manure, and with wood-chips in a ratio of 2:1 (w/v) spiked soil:wood-chips. A control was set up in the same way but without manure. Mixtures were incubated for 10 months at room temperature. Compost piles were turned weekly and the moisture level was maintained between 50% and 70%. Moisture level, pH, temperature, CO2 evolution and oxygen consumption were measured monthly, and the ash content at the end of experimentation. Bacteria capable of utilizing PAHs were isolated, purified and characterized by molecular techniques using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and amplification of the 16S rDNA gene using specific primers (16S-P1 and 16S-P2), and the amplicons were sequenced. The extent of reduction of PAHs was measured using an automated Soxhlet extractor with dichloromethane as the extraction solvent, coupled with gas chromatography/mass spectrometry (GC/MS). Temperature did not exceed 27.5 °C in any compost heap, pH ranged from 5.5 to 7.8, and CO2 evolution was highest in poultry manure at 18.78 µg/dwt/day. Microbial growth and activities were enhanced. Bacteria identified were Bacillus, Arthrobacter and Staphylococcus species. Results from PAH measurements showed reductions of between 77 and 99%. The results from the control experiment may be explained by its invasion by fungi. Co-composting of spiked soils with animal manures enhanced the reduction in PAHs.
Interestingly, all bacteria isolated and identified in this study were present in all treatments, including the control.
Keywords: bioremediation, co-composting, oil refinery sludge, PAHs, bacteria spp., animal manures, molecular techniques
Procedia PDF Downloads 480
13184 Exploration of Correlation between Design Principles and Elements with the Visual Aesthetic in Residential Interiors
Authors: Ikra Khan, Reenu Singh
Abstract:
Composition is essential when designing the interiors of residential spaces; the ability to adopt a unique style in the use of design principles and design elements is equally important. This research report explores how the visual aesthetic within a space is achieved through the use of design principles and design elements while maintaining a signature style. It also observes the relationship between design styles and the compositions that result from implementing the principles. Information collected from books and the internet helped in understanding how a composition can be achieved in residential interiors by using design principles and design elements as tools for an aesthetic composition. It also helped determine the results of authentic representation of design ideas and how they make one’s work exceptional. A questionnaire survey was also conducted to understand the impact of a visually aesthetic residential interior in a signature style on the lifestyle of the individuals residing there. The findings reveal a pattern in the application of design principles and design elements: individual principles and elements, or combinations of them, are used to achieve an aesthetically pleasing composition. This was supported by creating CAD illustrations of two residential projects with differing approaches and design styles. These illustrations include mood boards, 3D models, and sectional elevations as rendered views to communicate the concept design and its translation via these mediums. A direct relation is observed between the application of design principles and design elements and the achievement of visually aesthetic residential interiors suited to an individual’s taste. These practices can also be applied when designing bespoke commercial and industrial interiors suited to specific aesthetic and functional needs.
Keywords: composition, design principles, elements, interiors, residential spaces
Procedia PDF Downloads 109
13183 Experimental Evaluation of Electrocoagulation for Hardness Removal of Bore Well Water
Authors: Pooja Kumbhare
Abstract:
Water is an important resource for the survival of life. The inadequate availability of surface water makes people depend on ground water to fulfil their needs. However, ground water is generally too hard to satisfy the requirements of domestic as well as industrial applications. Hardness removal involves various techniques such as the lime-soda process, ion exchange, reverse osmosis, nano-filtration, distillation, and evaporation. Each of these techniques has its own problems, such as high annual operating cost, sediment formation on membranes, or sludge disposal. Electrocoagulation (EC) is being explored as a modern, cost-effective technology to cope with the growing demand for high water quality at the consumer end. In general, earlier studies on electrocoagulation for hardness removal have deployed batch processes. As batch processes are ill-suited to treating large volumes of water, it is essential to develop a continuous-flow EC process. In the present study, therefore, an attempt is made to investigate a continuous-flow EC process for reducing the excessive hardness of bore-well water. The experimental study was conducted using 12 aluminum electrodes (25 cm × 10 cm, 1 cm thick) in an EC reactor with a volume of 8 L. A bore-well water sample, collected locally (at Vishrambag, Sangli, Maharashtra) with an average initial hardness of 680 mg/l (range: 650–700 mg/l), was used for the study. Continuous-flow electrocoagulation experiments were carried out by varying the operating parameters, specifically reaction time (10–60 min), voltage (5–20 V), and current (1–5 A). Based on the experimental study, it is found that hardness removal to the desired extent can be achieved even in a continuous-flow EC reactor, so its use is found promising.
Keywords: hardness, continuous flow EC process, aluminum electrode, optimal operating parameters
Procedia PDF Downloads 182
13182 ESP: Peculiarities of Teaching Psychology in English to Russian Students
Authors: Ekaterina A. Redkina
Abstract:
The necessity and importance of teaching professionally oriented content in English need no proof nowadays. Consequently, sharing personal ESP teaching experience seems of great importance. This paper draws on eight years of ESP and EFL teaching experience at the Moscow State Linguistic University, Moscow, Russia, and presents a theoretical analysis of the specifics, possible problems, and perspectives of teaching psychology in English to Russian psychology students. The paper concerns issues that are common to different ESP classrooms and familiar to different teachers, among them: designing an ESP curriculum (for psychologists in this case), finding the balance between content and language in the classroom, the main teaching principles (the 4 C's), and the choice of assessment techniques and teaching material. The main objective of teaching psychology in English to Russian psychology students is developing the knowledge and skills essential for professional psychologists. Belonging to the international professional community presupposes high-level content-specific knowledge and skills, a high level of linguistic skill and cross-cultural linguistic ability, and, finally, a high level of professional etiquette. Thus, teaching psychology in English pursues three main outcomes: content, language, and professional skills. The paper explains each of these outcomes, and examples are given. Particular attention is paid to lesson structure, lesson objectives, and the difference between a typical EFL lesson and an ESP lesson. An attempt is also made to find commonalities between teaching ESP and CLIL. One view holds that CLIL is more common in schools, while ESP is more common in higher education; the paper argues that CLIL methodology can be used successfully in ESP teaching and that many CLIL activities adapt well to professional purposes.
The paper provides insights into the process of teaching psychologists in Russia, real teaching experience, and teaching techniques that have proved efficient over time.
Keywords: ESP, CLIL, content, language, psychology in English, Russian students
Procedia PDF Downloads 617
13181 Investigation of the Morphology of SiO2 Nano-Particles Using Different Synthesis Techniques
Authors: E. Gandomkar, S. Sabbaghi
Abstract:
In this paper, the effects of different synthesis methods on the morphology and size of silica nanostructures, via modified sol-gel and precipitation routes, have been investigated. The resulting products have been characterized by particle size analysis, scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared (FT-IR) spectroscopy. As a result, the SiO2 particles obtained by the sol-gel and precipitation methods were spherical, whereas the modified sol-gel method yielded a nanolayer structure.
Keywords: modified sol-gel, precipitation, nanolayer, Na2SiO3, nanoparticle
Procedia PDF Downloads 296
13180 On Grammatical Metaphors: A Corpus-Based Reflection on the Academic Texts Written in the Field of Environmental Management
Authors: Masoomeh Estaji, Ahdie Tahamtani
Abstract:
Considering the necessity of conducting research and publishing academic papers during Master's and Ph.D. programs, graduate students are in dire need of improving their writing skills, whether through writing courses or self-study. One key feature that can make academic papers look more sophisticated is the application of grammatical metaphors (GMs). These metaphors represent 'non-congruent' and 'implicit' ways of encoding meaning, in which one grammatical category is replaced by another, more implied counterpart, which can alter the reader's understanding of the text. Although a number of studies have been conducted on the application of GMs across various disciplines, almost none has been devoted to the field of environmental management, and the scope of previous studies has been relatively limited compared to the present work. The current study analyzes the different types of GMs used in academic papers published in top-tier journals in the field of environmental management and compiles a list of the most frequently used GMs based on their functions in this discipline, so as to make the teaching of academic writing courses more explicit and the composition of academic texts more well-structured. To fulfill these purposes, a corpus-based analysis based on the two theoretical models of Martin et al. (1997) and Liardet (2014) was run. Through two stages of manual analysis and concordancing, ten recent academic articles totalling 132,490 words, published in two prestigious journals, were precisely scrutinized. The results showed that, across the IMRaD sections of the articles, material processes were the most frequent type of ideational GM, with the relational and mental categories ranking second and third, respectively. Regarding interpersonal GMs, objective expanding metaphors were the most numerous.
In contrast, subjective interpersonal metaphors, whether expanding or contracting, were the least frequent. This suggests that scholars in the field of environmental management tend to focus on the main procedures and explain technical phenomena in detail rather than compare and contrast other statements and subjective beliefs. Moreover, since no instances of verbal ideational metaphors were detected, it can be deduced that the act of 'saying or articulating' something may be against the standards of the academic genre. Another assumption is that the application of ideational GMs is context-embedded and that the more technical they are, the less frequent they become. For further study, it is suggested that the employment of GMs be examined in a wider scope and in other disciplines, and that the third type of GM, known as 'textual' metaphors, be included as well.
Keywords: English for specific purposes, grammatical metaphor, academic texts, corpus-based analysis
Procedia PDF Downloads 171
13179 Application of Griddization Management to Construction Hazard Management
Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu
Abstract:
Hazard management, which can prevent fatal accidents and property losses, is a fundamental process during the construction stage of buildings. However, due to a lack of safety supervision resources and operational pressures, the conduct of hazard management is poor and ineffective in China. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to make the hazard management process efficient and effective. After examining the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, a griddization computing infrastructure for construction hazard management is designed that comprises five layers: a resource entity layer, an information management layer, a task management layer, a knowledge transformation layer, and an application layer. This infrastructure provides the technical support for realizing grid management. Second, the study divides construction hazards into grids at the city level, district level, and construction site level, according to grid principles. Third, a griddization management process is developed, covering hazard identification, assessment, and control; all stakeholders of construction safety management, such as owners, contractors, supervision organizations, and government departments, take the corresponding responsibilities in this process. Finally, a case study based on actual construction hazard identification, assessment, and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the model is that it realizes information sharing and cooperative management between the various safety management departments.
Keywords: construction hazard, griddization computing, grid management, process
Procedia PDF Downloads 281
13178 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D), or for further examination, exploration, research, and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; these are also time consuming, labor intensive, and less precise, with limited data. In comparison, the advanced techniques save manpower and provide more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used: a UAV flies over an area, captures geocoded images, and a three-dimensional model (3-D model) is built. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software. As output, we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. In conclusion, the processed data were compared with exact measurements taken on site; errors are accepted if they do not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
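The flight-planning parameters mentioned above are tied together by the standard ground sampling distance relation. A minimal sketch (generic formula with illustrative camera values, not the authors' specific sensor or flight plan):

```python
# Ground sampling distance (GSD): the ground footprint of one image pixel.
# Standard relation: GSD = (flight altitude * pixel size) / focal length.

def ground_sampling_distance(altitude_m, pixel_size_um, focal_length_mm):
    """Return GSD in centimetres per pixel for a nadir-looking camera."""
    pixel_size_m = pixel_size_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    gsd_m = altitude_m * pixel_size_m / focal_length_m
    return gsd_m * 100.0  # cm/pixel

# Example: 100 m flight altitude, 2.4 um pixels, 8.8 mm focal length.
print(round(ground_sampling_distance(100.0, 2.4, 8.8), 2))  # 2.73 cm/pixel
```

Doubling the flight altitude doubles the GSD (coarser detail), which is why the altitude is chosen from the accuracy limit the map must meet.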
Procedia PDF Downloads 37
13177 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques
Authors: Melese Wondatir
Abstract:
Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. The objective of this study was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research by acquiring essential information about the flood, including its scope and the susceptible areas. The use of geospatial data made it possible to identify locations of severe flood damage and efficient mitigation techniques. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were used throughout the study to determine vulnerability to flood damage. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank these factors according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) obtained is 0.000866 (< 0.1), which signifies acceptance of the derived weights. Furthermore, 10.84 m², 83,331.14 m², 476,987.15 m², 24,247.29 m², and 15.83 m² of the region show very low, low, medium, high, and very high vulnerability to flooding, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage. Furthermore, the AUC of the ROC curve demonstrates that the classified vulnerability map achieves an accuracy of 91.0% based on 117 sample points.
By putting into practice strategies that address the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. The research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making
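The AHP weighting and consistency check used above can be sketched as follows. The 3×3 pairwise-comparison matrix below is a toy example (distance-to-river vs. wetness index vs. rainfall), not the study's actual ten-criterion matrix; Saaty's random-index values are standard.

```python
import math

def ahp_weights_and_cr(matrix):
    """Criterion weights and consistency ratio from a pairwise matrix."""
    n = len(matrix)
    # Approximate the principal eigenvector by row geometric means.
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    weights = [g / sum(geo) for g in geo]
    # Estimate lambda_max from A.w, then the consistency index and ratio.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return weights, ci / ri

# Toy judgments: distance-to-river twice as important as wetness index,
# four times as important as rainfall.
matrix = [[1.0, 2.0, 4.0],
          [0.5, 1.0, 2.0],
          [0.25, 0.5, 1.0]]
weights, cr = ahp_weights_and_cr(matrix)
print([round(w, 3) for w in weights])  # weights sum to 1
print(cr < 0.1)                        # CR below 0.1 means the judgments are accepted
```

The study's reported CR of 0.000866 comes from exactly this kind of check applied to its own, larger comparison matrix.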
Procedia PDF Downloads 76
13176 Nonhomogeneous Linear Second Order Differential Equations and Resonance through GeoGebra Program
Authors: F. Maass, P. Martin, J. Olivares
Abstract:
The aim of this work is to apply the program GeoGebra in teaching the study of nonhomogeneous linear second-order differential equations with constant coefficients. Different kinds of functions or forcing terms are considered on the right-hand side of the differential equations; in particular, emphasis is placed on the case of trigonometric functions producing the resonance phenomenon. To obtain this, the frequencies of the trigonometric functions are changed. Once the resonances appear, they are correlated with the roots of the second-order algebraic equation determined by the coefficients of the differential equation. In this way, physics and engineering students can understand resonance effects and their consequences in the simplest way. A large variety of examples is shown, using different kinds of functions for the nonhomogeneous part of the differential equations.
Keywords: education, GeoGebra, ordinary differential equations, resonance
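The resonance mechanism described above can be written out for the standard undamped oscillator (a generic textbook case, not one of the paper's specific GeoGebra examples):

```latex
% Undamped forced oscillator; the characteristic equation r^2 + \omega_0^2 = 0
% has roots r = \pm i\omega_0.
y'' + \omega_0^2\, y = F_0 \cos(\omega t)

% Non-resonant case (\omega \neq \omega_0): a bounded particular solution
y_p(t) = \frac{F_0}{\omega_0^2 - \omega^2}\,\cos(\omega t)

% Resonant case (\omega = \omega_0): the forcing frequency coincides with a
% characteristic root, and the amplitude grows linearly in time
y_p(t) = \frac{F_0}{2\omega_0}\, t \sin(\omega_0 t)
```

Sweeping the forcing frequency ω toward ω₀ in GeoGebra makes the amplitude factor F₀/(ω₀² − ω²) blow up, which is exactly the correlation with the characteristic roots that the students are meant to observe.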
Procedia PDF Downloads 247
13175 Contribution in Fatigue Life Prediction of Composite Material
Authors: Mostefa Bendouba, Djebli Abdelkader, Abdelkrim Aid, Mohamed Benguediab
Abstract:
The damage evolution mechanism is one of the important focuses in investigating the fatigue behaviour of composite materials, and it is also the foundation for predicting the fatigue life of composite structures in engineering applications. This paper is dedicated to a damage investigation of a composite material under two-block loading fatigue conditions. The loading sequence effect and the influence of the cycle ratio of the first stage on the cumulative fatigue life were studied. Two loading sequences, i.e., the high-to-low and low-to-high cases, are considered. The proposed damage indicator is connected cycle by cycle to the S-N curve, and the experimental results are in agreement with the model's expectations. Several experimental studies are used to validate this proposition.
Keywords: fatigue, damage accumulation, composite, evolution
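As a baseline for the cumulative-damage idea, the classical linear (Palmgren-Miner) rule for a two-block loading can be sketched as follows; the S-N constants here are illustrative, not fitted to the tested material, and the paper's own cycle-by-cycle indicator is a refinement of this rule:

```python
def sn_life(stress, C=1e12, m=3.0):
    """Cycles to failure from an assumed S-N curve N = C * S**(-m)."""
    return C * stress ** (-m)

def remaining_cycles(n1, s1, s2):
    """Damage n1/N1 accumulated at stress s1, then the cycles left at the
    second stress level s2 under the linear rule n1/N1 + n2/N2 = 1."""
    d1 = n1 / sn_life(s1)
    return (1.0 - d1) * sn_life(s2)

# High-to-low sequence: spend 40 % of the life at 200 MPa, finish at 100 MPa.
n1 = 0.4 * sn_life(200.0)
print(round(remaining_cycles(n1, 200.0, 100.0)))  # 600000 cycles (60 % of N2)
```

Note that the linear rule predicts no sequence effect at all (high-to-low and low-to-high give the same total damage); the sequence effect reported experimentally is precisely what a nonlinear, cycle-by-cycle indicator has to capture beyond this baseline.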
Procedia PDF Downloads 506
13174 Effect of Pre Harvest Application of Amino Acids on Fruit Development of Sub-Tropical Peach
Authors: Manjot Kaur, Harminder Singh, S. K. Jawandha
Abstract:
The present investigations were carried out at the Fruit Research Farm, Department of Fruit Science, Punjab Agricultural University, Ludhiana, during 2016 and 2017, with the aim of assessing the effect of amino acids on the fruit development, shoot growth, and yield of peach. Six-year-old peach trees of cv. Florida Prince were sprayed with 0.25% and 0.50% concentrations of amino acids (Peptone P1 023) 7 and 14 days after full bloom, and the sprays were repeated after 15 and 30 days. The experimental findings showed that all amino acid treatments increased fruit growth, shoot growth, fruit retention, and yield, and decreased fruit drop as compared to the control during both years. Maximum fruit retention (89.29%) and minimum fruit drop (10.71%) were observed in T8 (2 sprays @ 0.50%). The highest mean shoot growth (113.89 cm) was recorded in T12 (3 sprays @ 0.50%), while the minimum was in the control plants (88.23 cm). Fruit yield was also found to be maximum (53.92 kg/tree) under the double-spray treatment T8 (2 sprays @ 0.50%) and minimum in plants given the triple spray of amino acids. Fruit maturity was advanced by 3-4 days by the double-spray treatments as compared to the control. In brief, the double spray of amino acids @ 0.50% (applied 14 days after full bloom and 15 days later) was found to be the best treatment for improving the fruit growth, fruit retention, and yield of Florida Prince peach under Punjab conditions.
Keywords: amino acids, fruit growth, maturity, peach, shoot growth
Procedia PDF Downloads 191
13173 Occupational Safety and Health in the Wake of Drones
Authors: Hoda Rahmani, Gary Weckman
Abstract:
The body of research examining the integration of drones into various industries is expanding rapidly. Despite progress in addressing the cybersecurity concerns of commercial drones, knowledge deficits remain in determining the potential occupational hazards and risks of drone use to employees' well-being and health in the workplace. This makes it difficult to identify key approaches to risk mitigation and reflects the need to raise awareness among employers, safety professionals, and policymakers about workplace drone-related accidents. The purpose of this study is to investigate the prevalence of, and possible risk factors for, drone-related mishaps by comparing the application of drones in the construction and manufacturing industries. The chief reason for considering these specific sectors is to ascertain whether there is any significant difference between indoor and outdoor flights, since most construction sites use drones outdoors and vice versa. The current research therefore seeks to examine the causes and patterns of workplace drone-related mishaps and to suggest possible ergonomic interventions informed by the collected data. Potential ergonomic practices to mitigate hazards associated with flying drones could include providing operators with professional training, conducting risk analyses, and promoting the use of personal protective equipment. For the data analysis, two data mining techniques, the random forest and association rule mining algorithms, will be applied to find meaningful associations and trends in the data, as well as influential features that affect the occurrence of drone-related accidents in the construction and manufacturing sectors. In addition, Spearman's correlation and chi-square tests will be used to measure possible correlations between variables.
Indeed, by recognizing risks and hazards, occupational safety stakeholders will be able to pursue data-driven, evidence-based policy change with the aim of reducing drone mishaps, increasing productivity, creating a safer work environment, and extending human performance in safe and fulfilling ways. This research study was supported by the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant #T42OH008432.
Keywords: commercial drones, ergonomic interventions, occupational safety, pattern recognition
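The association-rule side of the analysis rests on support and confidence measures, which can be sketched on a toy incident table; the attribute names and records here are hypothetical, as the study's accident dataset is not reproduced:

```python
# Each record is the set of attributes observed for one drone mishap
# (hypothetical data for illustration only).
incidents = [
    {"outdoor", "wind", "collision"},
    {"outdoor", "wind", "collision"},
    {"outdoor", "battery_low", "flyaway"},
    {"indoor", "signal_loss", "collision"},
    {"outdoor", "wind", "flyaway"},
]

def support(itemset):
    """Fraction of incident records containing every item in the set."""
    return sum(itemset <= record for record in incidents) / len(incidents)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(round(support({"outdoor", "wind"}), 2))         # 0.6
print(round(confidence({"wind"}, {"collision"}), 2))  # 0.67
```

Algorithms such as Apriori simply enumerate itemsets whose support and confidence clear chosen thresholds; a rule like {wind} → {collision} with high confidence would point to an outdoor-specific risk factor of the kind the study is after.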
Procedia PDF Downloads 217
13172 Algorithms of ABS-Plastic Extrusion
Authors: Dmitrii Starikov, Evgeny Rybakov, Denis Zhuravlev
Abstract:
Plastic filament is an essential consumable for 3D printers, and its production is a technological process that requires the application of different control algorithms. Possible algorithms for holding the plastic fiber at a set diameter are proposed and described in the article. The results of the research were validated on an existing filament production unit.
Keywords: ABS-plastic, automation, control system, extruder, filament, PID-algorithm
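One of the control algorithms named in the keywords, the PID loop, can be sketched for diameter regulation as follows. The gains and the first-order "plant" model below are illustrative assumptions, not the authors' tuned production unit:

```python
class PID:
    """Minimal discrete PID controller for the puller-speed loop."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = measured - self.setpoint   # positive when filament too thick
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy extruder model: a higher puller speed thins the filament, with a
# first-order lag between the speed command and the measured diameter.
pid = PID(kp=20.0, ki=10.0, kd=0.5, setpoint=1.75)   # target 1.75 mm filament
diameter, dt = 2.2, 0.1
for _ in range(600):
    speed = max(0.0, pid.update(diameter, dt))        # puller speed command
    diameter += dt * ((2.2 - 0.05 * speed) - diameter)
print(round(diameter, 3))  # settles at the 1.75 mm setpoint
```

The integral term is what removes the steady-state error here: at equilibrium the proportional term is zero, so the integrator alone holds the puller speed that keeps the diameter on target.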
Procedia PDF Downloads 408
13171 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain
Authors: Engy S. El-Kayal, Mohamed M. S. Arafa
Abstract:
There are various causes of ankle pain (AP), both traumatic and non-traumatic, and various imaging techniques are available for its assessment. MRI is considered the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence the ability to visualize the small, complex anatomical structures around the ankle. However, the high cost, the relatively limited availability of MRI systems, and the relatively long duration of the examination are all disadvantages of MRI. There is therefore a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. HRU has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low cost, and readily available. Ultrasound can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with that of MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients underwent real-time HRU and MRI of the affected ankle, and the results of both techniques were compared with surgical and arthroscopic findings. All patients were examined according to a defined protocol that covers tendon tears or tendinitis; muscle tears, masses, or fluid collections; ligament sprains or tears; inflammation or fluid effusion within the joint or bursa; bone and cartilage lesions; and erosions and osteophytes. Analysis of the results showed that the mean age of the patients was 38 years; the study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting causes of AP was 85%, while the accuracy of MRI was 87.5%.
In conclusion, HRU and MRI are two complementary tools of investigation, with the former used as a primary tool and the latter used to confirm the diagnosis and the extent of the lesion, especially when surgical interference is planned.
Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)
Procedia PDF Downloads 194
13170 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (vis-à-vis, GPS) shared by the entire swarm, which in turn limits their application in real-world scenarios. This is because robots in a swarm need to share information about their locations and the signals received from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever robots come within communication range, making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows a Lévy-walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their own coordinate frames, based on which their coordinate frames are unified. They can then share information about the areas already explored, their surroundings, and target signals from their locations, and make decisions about their future movement based on the search algorithm.
During exploration, there can be several small groups of robots with their own coordinate systems, but eventually all robots are expected to come under one global coordinate frame in which they can exchange information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that running our modified Particle Swarm Optimization (PSO) without global information still achieves results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them in other bio-inspired algorithms to work in GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
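The coordinate-frame unification step can be sketched in 2-D: a single mutual range-and-bearing exchange fixes robot B's pose in robot A's frame. This is an idealized, noise-free sketch of the geometry, not the paper's full algorithm:

```python
import math

def unify_frames(r, bearing_ab, bearing_ba):
    """Pose (x, y, theta) of robot B's frame expressed in robot A's frame,
    from one mutual range-and-bearing exchange.
    bearing_ab: bearing of B measured in A's frame; bearing_ba: vice versa."""
    # B's origin in A's frame comes straight from A's measurement.
    bx = r * math.cos(bearing_ab)
    by = r * math.sin(bearing_ab)
    # The A->B direction seen from both frames fixes the relative rotation:
    # it is bearing_ab in A's frame and bearing_ba + pi in B's frame.
    theta = (bearing_ab - bearing_ba + math.pi) % (2 * math.pi)
    if theta > math.pi:
        theta -= 2 * math.pi
    return bx, by, theta

def to_frame_a(point_b, pose):
    """Map a point expressed in B's frame into A's frame."""
    bx, by, theta = pose
    x, y = point_b
    return (bx + x * math.cos(theta) - y * math.sin(theta),
            by + x * math.sin(theta) + y * math.cos(theta))

# B sits at (3, 4) in A's frame, rotated by 0.5 rad; simulate the bearings.
psi = 0.5
bearing_ab = math.atan2(4, 3)
bearing_ba = math.atan2(-4, -3) - psi   # A as seen from B's rotated frame
pose = unify_frames(5.0, bearing_ab, bearing_ba)
print(tuple(round(v, 3) for v in pose))  # (3.0, 4.0, 0.5)
```

Once each pair can compute this transform, previously explored areas and target signals reported by a neighbour can be re-expressed in a common frame via `to_frame_a`, which is all the shared search algorithm needs.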
Procedia PDF Downloads 141
13169 Engage, Connect, Empower: Agile Approach in the University Students' Education
Authors: D. Bjelica, T. Slavinski, V. Vukimrovic, D. Pavlovic, D. Bodroza, V. Dabetic
Abstract:
Traditional methods and techniques used in higher education can significantly shape university students' perception of the quality of the teaching process, and students' satisfaction with the university experience may be affected by the educational approaches chosen. Contemporary project management trends recognize the benefits of agile approaches, so modern practice highlights their usage, especially in the IT industry. A key research question concerns the possibility of applying agile methods in youth education. As agile methodology pinpoints iterative-incremental delivery of results, its employment could be remarkably fruitful in education. This paper demonstrates the application of the agile concept in university students' education through the continuous delivery of student solutions. Based on the fundamental values and principles of the agile manifesto, the paper analyzes students' performance and lessons learned in their encounter with the agile environment. The research is based on qualitative and quantitative analysis and includes sprints as the preparation and realization of student tasks in short iterations. Consequently, the performance of student teams is monitored through the iterations, as is the process of adaptive planning and realization. Grounded theory methodology has been used in this research, as well as descriptive statistics and the Mann-Whitney and Kruskal-Wallis tests for group comparison. The constructs of the model are developed through qualitative research, validated through a pilot survey, and eventually tested as a concept in the final survey. The paper highlights the variability of educational curricula based on university students' feedback, collected at the end of every sprint, and points to inconsistencies in students' satisfaction across the approaches applied in education.
The values delivered by the lecturers are also continuously monitored and prioritized according to students' requests. The minimum viable product, as the early delivery of results, is particularly emphasized in the implementation process. The paper offers both theoretical and practical implications. This research contains exceptional lessons that may be applied by educational institutions in curriculum creation processes, or by lecturers in curriculum design and teaching. On the other hand, the findings can be beneficial for increasing university students' satisfaction with respect to teaching styles, gained knowledge, and educational content.
Keywords: academic performances, agile, high education, university students' satisfaction
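For the group comparisons, the Mann-Whitney U statistic can be computed directly from pairwise comparisons between the two groups; a minimal sketch with hypothetical satisfaction ratings, not the study's data:

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for sample_a vs. sample_b: the number of
    pairs (a, b) with a > b, counting ties as half a pair."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical 1-5 satisfaction ratings from two student groups.
agile = [5, 4, 4, 5, 3]
traditional = [3, 2, 4, 3, 2]
print(mann_whitney_u(agile, traditional))  # 22.0 of 25 possible pairs
```

A U close to n·m (here 25) or close to 0 indicates the two groups' ratings barely overlap; in practice the statistic is then converted to a p-value (e.g. via the normal approximation, or a library routine) to decide significance.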
Procedia PDF Downloads 134