Search results for: Biomedical data base processing
8742 Fatigue Crack Growth Behavior in Dissimilar Metal Weldment of Stainless Steel and Carbon Steel
Authors: K. Krishnaprasad, Raghu V. Prakash
Abstract:
Constant amplitude fatigue crack growth (FCG) tests were performed on dissimilar metal welded plates of Type 316L Stainless Steel (SS) and IS 2062 Grade A Carbon Steel (CS). The plates were welded by TIG welding using SS E309 as the electrode. FCG tests were carried out on Side Edge Notch Tension (SENT) specimens of 5 mm thickness, with the crack initiator (notch) located in the base metal region (BM), weld metal region (WM), and heat affected zones (HAZ). The tests were performed at a test frequency of 10 Hz and at load ratios (R) of 0.1 and 0.6. The FCG rate was found to increase with stress ratio for the weld metals and base metals, whereas for the HAZ, FCG rates were almost equal at high ΔK. The FCG rate of the HAZ of stainless steel was found to be the lowest at low and high ΔK. At intermediate ΔK, the WM showed the lowest FCG rate. CS showed a higher crack growth rate at all ΔK. However, the scatter band of the data was found to be narrow. Fracture toughness (Kc) was found to vary across the different locations of the weldments; Kc was found to be lowest for the weldment and highest for the HAZ of stainless steel. A novel method of characterizing the FCG behavior using an infrared thermography (IRT) camera was attempted: by monitoring the temperature rise at the fast-moving crack tip region, the amount of plastic deformation was estimated.
Keywords: Dissimilar metal weld, Fatigue Crack Growth, fracture toughness, Infrared thermography.
8741 Management and Control of Industrial Effluents Discharged to Public Sewers: A Case Study
Authors: Freeman Ntuli
Abstract:
An overview of the important aspects of managing and controlling industrial effluent discharges to public sewers, namely sampling, characterization, quantification, and legislative controls, is presented. The findings have been validated by means of a case study covering three industrial sectors: tanning, textile finishing, and food processing. Industrial effluent discharges were found to be best monitored by systematic and automatic sampling and quantified using water meter readings corrected for evaporative and consumptive losses. Based on the treatment processes employed in the publicly owned treatment works and the chemical oxygen demand and biochemical oxygen demand levels obtained, the effluents from all three industrial sectors studied were found to lie in the toxic zone. Thus, physico-chemical treatment of these effluents is required to bring them into the biodegradable zone. KL values (quoted to base e) were greater than 0.50 day-1, compared to 0.39 day-1 for typical municipal wastewater.
Keywords: biodegradability, industrial effluent, pollution control, public sewers
8740 Ice Load Measurements on Known Structures Using Image Processing Methods
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on analysis of captured images. In this method, ice loads on structures are calculated by defining the structure coordinates and processing the captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Ice accumulated asymmetrically on the structure in a cold room represents the actual case in the experiments. Camera intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image with the structure coordinates. Ice thicknesses of the structure elements are obtained by averaging the ice diameters from different camera views. Comparison between the ice load measurements using this method and the actual ice loads shows positive correlation with an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
Keywords: Camera calibration, Ice detection, ice load measurements, image processing.
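As a toy illustration of the thresholding and diameter-measurement steps described in this abstract (not the authors' implementation, which also uses camera calibration and multiple views), the following Python sketch converts a grayscale image to a binary mask and estimates the iced diameter of a bar along one scan line; the threshold value and pixel-to-millimetre scale are hypothetical.

```python
import numpy as np

def iced_diameter_mm(gray_image, row, threshold=180, mm_per_pixel=0.5):
    """Estimate the iced diameter of a bar along one image row.

    gray_image  : 2-D array of grayscale intensities (0-255)
    row         : index of the scan line crossing the bar
    threshold   : intensity above which a pixel counts as ice/structure (assumed)
    mm_per_pixel: scale that would come from camera calibration (assumed)
    """
    binary = gray_image > threshold                # thresholding -> binary image
    width_px = np.count_nonzero(binary[row, :])    # bright width along the scan line
    return width_px * mm_per_pixel

# Synthetic example: a 40-pixel-wide bright band on a dark background.
img = np.zeros((100, 200), dtype=np.uint8)
img[:, 80:120] = 220
print(iced_diameter_mm(img, row=50))   # -> 20.0 mm with the assumed scale
```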
8739 Noninvasive Brain-Machine Interface to Control Both Mecha TE Robotic Hands Using Emotiv EEG Neuroheadset
Authors: Adrienne Kline, Jaydip Desai
Abstract:
Electroencephalography (EEG) is a noninvasive technique that registers signals originating from the firing of neurons in the brain. The Emotiv EEG Neuroheadset is a consumer product comprising 14 EEG channels and was used to record the reactions of the neurons within the brain to two forms of stimuli in 10 participants. These stimuli consisted of auditory and visual formats that provided directions of ‘right’ or ‘left.’ Participants were instructed to raise their right or left arm in accordance with the instruction given. A scenario in OpenViBE was generated to stimulate the participants while recording their data. In OpenViBE, the Graz Motor BCI Stimulator algorithm was configured to govern the duration and number of visual stimuli. Utilizing EEGLAB under the cross-platform MATLAB®, the electrodes most stimulated during the study were identified. Data outputs from EEGLAB were analyzed using IBM SPSS Statistics® Version 20. This aided in determining the electrodes to use in the development of a brain-machine interface (BMI) using real-time EEG signals from the Emotiv EEG Neuroheadset. Signal processing and feature extraction were accomplished via the Simulink® signal processing toolbox. An Arduino™ Duemilanove microcontroller was used to link the Emotiv EEG Neuroheadset and the right and left Mecha TE™ Hands.
Keywords: Brain-machine interface, EEGLAB, Emotiv EEG Neuroheadset, OpenViBE, Simulink.
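The signal-processing and feature-extraction stage described above was implemented in Simulink®; purely as an illustration of the kind of per-channel feature such a pipeline might compute, here is a Python sketch (an assumption, not the authors' code) that estimates band power from one EEG channel using Welch's method.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs=128, band=(8.0, 12.0)):
    """Mean power of one EEG channel within a frequency band (e.g. the mu band)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Example on 10 s of synthetic data at an assumed 128 Hz sampling rate.
rng = np.random.default_rng(0)
channel = rng.standard_normal(10 * 128)
print(band_power(channel))                      # mu-band power
print(band_power(channel, band=(13.0, 30.0)))   # beta-band power
```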
8738 Duration Patterns of English by Native British Speakers and Mandarin ESL Speakers
Authors: Chen Bingru
Abstract:
This study describes and analyzes the effects of polysyllabic shortening and word or phrase boundary on the duration patterns of spoken utterances by Mandarin learners of English, in comparison with native speakers of English. To investigate the relative contribution of these effects, two production experiments were conducted. The study included 11 native British English speakers and 20 Mandarin learners of English, who were asked to produce four sets of tokens consisting of a monosyllabic base form and disyllabic and trisyllabic words derived from the base by the addition of suffixes, together with a set of short sentences with a particular combination of phrase size, stress pattern, and boundary location. The duration of words and segments was measured, and results from the data analysis suggest that the amount of polysyllabic shortening and the effect of word or phrase position are likely to contribute to a Chinese accent in Mandarin ESL speakers. This study sheds light on research on duration patterns by demonstrating the effect of duration-related factors on the foreign accent of Mandarin ESL speakers. It can also benefit both L2 learners and language teachers by increasing their sensitivity to the duration differences and difficulties experienced by L2 learners of English. An understanding of the amount of polysyllabic shortening and the effect of position in words and phrases on syllable duration can also help L2 teachers establish priorities for teaching pronunciation to ESL learners.
Keywords: Duration patterns, Chinese accent, Mandarin ESL speakers, polysyllabic shortening.
8737 From Industry 4.0 to Agriculture 4.0: A Framework to Manage Product Data in Agri-Food Supply Chain for Voluntary Traceability
Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli
Abstract:
The agri-food value chain involves various stakeholders with different roles. All of them abide by national and international rules and leverage marketing strategies to advance their products. Food products and the related processing phases carry with them a large amount of data that are often not used to inform the final customer. Some of these data, if suitably identified and used, can benefit the single company and/or the whole supply chain, creating a match between marketing techniques and voluntary traceability strategies. Moreover, buying models have recently changed: customers pay attention to wellbeing and food quality. Food citizenship and food democracy were born, leveraging transparency, sustainability, and food information needs. The Internet of Things (IoT) and Analytics, two of the innovative technologies of Industry 4.0, have a significant impact on the market and will act as a main thrust towards a genuine ‘4.0 change’ for agriculture. However, realizing a traceability system is not simple because of the complexity of the agri-food supply chain, the number of actors involved, different business models, environmental variations impacting products and/or processes, and extraordinary climate changes. In order to support companies involved in a traceability path, a Framework to Manage Product Data in the Agri-Food Supply Chain for Voluntary Traceability was conceived, starting from business model analysis and the related business processes. Studying each process task and leveraging modeling techniques leads to identifying the information held by the different actors along the agri-food supply chain. IoT technologies for data collection and Analytics techniques for data processing supply information useful for increasing intra-company efficiency and competitiveness in the market. All the information recovered can be presented through IT solutions and mobile applications, making it accessible to the company, the entire supply chain, and the consumer, with a view to guaranteeing transparency and quality.
Keywords: Agriculture 4.0, agri-food supply chain, Industry 4.0, voluntary traceability.
8736 Greek Compounds: A Challenging Case for the Parsing Techniques of PC-KIMMO v.2
Authors: Angela Ralli, Eleni Galiotou
Abstract:
In this paper we describe the recognition process of Greek compound words using the PC-KIMMO software. We try to show certain limitations of the system with respect to the principles of compound formation in Greek. Moreover, we discuss the computational processing of phenomena such as stress and syllabification which are indispensable for the analysis of such constructions and we try to propose linguistically-acceptable solutions within the particular system.
Keywords: Morpho-phonological parsing, compound words, two-level morphology, natural language processing.
8735 Optimal Placement of FACTS Devices by Genetic Algorithm for the Increased Load Ability of a Power System
Authors: A. B. Bhattacharyya, B. S. K. Goswami
Abstract:
This paper presents a Genetic Algorithm (GA) based approach for the allocation of FACTS (Flexible AC Transmission System) devices for the improvement of power transfer capacity in an interconnected power system. The GA-based approach is applied to the IEEE 30-bus system. The system is reactively loaded starting from the base load up to 200% of the base load. FACTS devices are installed in different locations of the power system, and system performance is observed with and without FACTS devices. First, the locations where the FACTS devices are to be placed are determined by calculating the active and reactive power flows in the lines. The Genetic Algorithm is then applied to find the magnitudes of the FACTS devices. From the results obtained, it is clearly observed that this GA-based placement of FACTS devices is highly beneficial both in terms of performance and economy.
Keywords: FACTS Devices, Line Power Flow, Optimal Location of FACTS Devices, Genetic Algorithm.
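A minimal sketch of the GA loop described above, in Python; the fitness function is a placeholder rather than the paper's load-flow-based loadability measure, and the population size, mutation rate, and device bounds are invented values.

```python
import random

def fitness(magnitudes):
    # Placeholder objective: in the paper this would be the improvement in power
    # transfer capacity computed from a load-flow study of the IEEE 30-bus system
    # with FACTS devices of these magnitudes installed at the chosen locations.
    return -sum((m - 0.3) ** 2 for m in magnitudes)

def genetic_algorithm(n_devices=3, pop_size=30, generations=50,
                      mutation_rate=0.1, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_devices)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_devices)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:         # random mutation of one gene
                child[random.randrange(n_devices)] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(genetic_algorithm())   # best device magnitudes found for the placeholder objective
```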
8734 Finding an Optimized Discriminate Function for Internet Application Recognition
Authors: E. Khorram, S.M. Mirzababaei
Abstract:
Every day, usage of the Internet increases and a world of data becomes accessible. Network providers do not want the services they provide to be used in harmful or terrorist affairs, so they use a variety of methods to protect special regions from harmful data. One of the most important of these methods is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases firewalls are not used because of their blind packet blocking, the high processing power they require, and their expensive prices. Here we propose a method to find a discriminant function that distinguishes between usual packets and harmful ones by statistical processing of the network router logs, so that an administrator can alert the user. This method is very fast and can easily be used adjacent to Internet routers.
Keywords: Data Mining, Firewall, Optimization, Packet classification, Statistical Pattern Recognition.
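To illustrate what a discriminant function over log statistics can look like, here is a Fisher-style linear discriminant sketch in Python/NumPy; the two per-flow features and the traffic distributions are invented for the example and are not the authors' router-log features.

```python
import numpy as np

def fisher_discriminant(X_normal, X_harmful):
    """Return (w, b) such that sign(w @ x + b) separates the two classes."""
    mu0, mu1 = X_normal.mean(axis=0), X_harmful.mean(axis=0)
    Sw = np.cov(X_normal, rowvar=False) + np.cov(X_harmful, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0            # threshold midway between the class means
    return w, b

# Hypothetical per-flow statistics from router logs: [packets per second, mean packet size].
rng = np.random.default_rng(1)
normal = rng.normal([50, 800], [10, 100], size=(200, 2))
harmful = rng.normal([400, 120], [50, 30], size=(200, 2))
w, b = fisher_discriminant(normal, harmful)
print(np.sign(w @ np.array([60.0, 750.0]) + b))   # -1.0: classified as normal traffic
```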
8733 Tree Based Data Fusion Clustering Routing Algorithm for Illimitable Network Administration in Wireless Sensor Network
Authors: Y. Harold Robinson, M. Rajaram, E. Golden Julie, S. Balaji
Abstract:
In wireless sensor networks, locality and positioning information can be captured using the Global Positioning System (GPS). This information can be gathered initially from each spot to identify the system. Users can retrieve information of interest from a wireless sensor network (WSN) by injecting queries and gathering results from the mobile sink nodes. Routing is the process of choosing an optimal path in a mobile network. Intermediate nodes arrange the device nodes into groups and generate cluster heads that gather the data from each cluster's nodes and forward the collected data to the base station. WSNs are widely used for gathering data. Since sensors are power-constrained devices, it is vital for them to reduce their power utilization. A tree-based data fusion clustering routing algorithm (TBDFC) is used to reduce energy consumption in wireless sensor networks. Here, the nodes in a tree use cluster formation, whereas the height of the tree is decided based on the distance of the member nodes to the cluster head. Network simulation shows that this scheme improves the power utilization of the nodes and thus considerably improves the network lifetime.
Keywords: WSN, TBDFC, LEACH, PEGASIS, TREEPSI.
8732 Detecting Circles in Image Using Statistical Image Analysis
Authors: Fathi M. O. Hamed, Salma F. Elkofhaifee
Abstract:
The aim of this work is to detect geometrical shape objects in an image. In this paper, the object is considered to be a circle. The identification requires finding three characteristics: the number, size, and location of the objects. To achieve the goal of this work, this paper presents an algorithm that combines statistical approaches and image analysis techniques. This algorithm has been implemented to achieve the major objectives of this paper. The algorithm has been evaluated using simulated data, where it yields good results, and has then been applied to real data.
Keywords: Image processing, median filter, projection, scale space, segmentation, threshold.
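A minimal sketch, under stated assumptions, of the threshold-then-measure idea: binarize the image, label connected components, and report the count, estimated radius, and centroid of each bright blob. It uses scipy.ndimage for the labelling and is not the authors' algorithm, which also involves median filtering, projections, and scale space.

```python
import numpy as np
from scipy import ndimage

def detect_circles(gray, threshold=128):
    """Return (number, radii, centroids) of bright, roughly circular objects."""
    binary = gray > threshold                        # segmentation by thresholding
    labels, count = ndimage.label(binary)            # connected-component labelling
    radii, centroids = [], []
    for k in range(1, count + 1):
        ys, xs = np.nonzero(labels == k)
        radii.append(np.sqrt(ys.size / np.pi))       # radius of a disc with this area
        centroids.append((xs.mean(), ys.mean()))     # location of the object
    return count, radii, centroids

# Synthetic test image containing a single disc of radius 20.
yy, xx = np.mgrid[0:200, 0:200]
img = np.where((xx - 100) ** 2 + (yy - 80) ** 2 <= 20 ** 2, 255, 0)
print(detect_circles(img))   # -> (1, [about 20.0], [(100.0, 80.0)])
```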
8731 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study
Authors: Si Mon Kueh, Tom J. Kazmierski
Abstract:
About 1% of the world population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched using artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron using variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment that suits the needs of an individual epileptic patient in his or her daily activities, predicting the occurrence of impending tonic-clonic seizures.
Keywords: Artificial Neural Networks, bit-serial neural processor, FPGA, Neural Processing Element.
8730 Effect of Filler Metal Diameter on Weld Joint of Carbon Steel SA516 Gr 70 and Filler Metal SFA 5.17 in Submerged Arc Welding SAW
Authors: A. Nait Salah, M. Kaddami
Abstract:
This work describes an investigation of the effect of filler metal diameter on the weld joint, with low-alloy carbon steel SA516 Grade 70 as the base metal. Commercially, SA516 Grade 70 is frequently used for the manufacturing of pressure vessels, boilers, storage tanks, etc. In the fabrication industry, the hardness of the weld joint is among the important parameters to check after heat treatment of the weld. Submerged arc welding (SAW) is used with two filler metal diameters; the solid wire electrode used is intended for SAW of non-alloy and fine-grain steels (SFA 5.17). Two diameters were selected (Ø = 2.4 mm and Ø = 4 mm) to weld two specimens. Both specimens were subjected to the same preparation conditions, heat treatment, macrographic and metallurgical micrographic examination, and micro-hardness testing. The samples show an almost similar structure with the highest hardness. It is important to indicate that the base metal thickness used is 22 mm, and all specifications, preparation, and controls were according to ASME Section IX. It was observed from the two different filler metal diameters applied to two similar specimens that the mechanical property (hardness) increases with decreasing diameter. This means that, even though the heat treatment has the same effect under the same conditions, the smaller filler metal diameter ensures deeper weld penetration and better homogenization. Hence, the SAW technique described in the present study with the small filler metal diameter is favorable for industrial application.
Keywords: ASME, base metal, filler metal, micro-hardness test, submerged arc welding.
8729 CSOLAP (Continuous Spatial On-Line Analytical Processing)
Authors: Taher Omran Ahmed, Abdullatif Mihdi Buras
Abstract:
Decision support systems are usually based on multidimensional structures, which use the concept of the hypercube. Dimensions are the axes on which facts are analyzed and form a space where a fact is located by a set of coordinates at the intersections of members of dimensions. Conventional multidimensional structures deal with discrete facts linked to discrete dimensions. However, when dealing with natural continuous phenomena, the discrete representation is not adequate. There is a need to integrate spatiotemporal continuity within multidimensional structures to enable analysis and exploration of continuous field data. The research issues that lead to the integration of spatiotemporal continuity in multidimensional structures are numerous. In this paper, we discuss research issues related to the integration of continuity in multidimensional structures and briefly present a multidimensional model for continuous field data. We also define new aggregation operations. The model and the associated operations and measures are validated by a prototype.
Keywords: Continuous Data, Data warehousing, Decision Support, SOLAP
8728 Analysis of Electrocardiograph (ECG) Signal for the Detection of Abnormalities Using MATLAB
Authors: Durgesh Kumar Ojha, Monica Subashini
Abstract:
The proposed method is to study and analyze the electrocardiograph (ECG) waveform to detect abnormalities present with reference to the P, Q, R, and S peaks. The first phase includes the acquisition of real-time ECG data. The next phase involves the generation of signals followed by pre-processing. Thirdly, the procured ECG signal is subjected to feature extraction. The extracted features detect abnormal peaks present in the waveform; thus, the normal and abnormal ECG signals can be differentiated based on the features extracted. The work is implemented in the most familiar multipurpose tool, MATLAB. This software efficiently uses algorithms and techniques for the detection of any abnormalities present in the ECG signal. Proper utilization of MATLAB functions (both built-in and user defined) allows us to work with ECG signals for processing and analysis in real-time applications. The simulation would help in improving the accuracy, and the hardware could be built conveniently.
Keywords: ECG Waveform, Peak Detection, Arrhythmia, Matlab.
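The work itself is carried out in MATLAB; purely to illustrate the peak-detection step, the Python sketch below finds R peaks as prominent, well-separated local maxima and derives the heart rate. The amplitude and spacing thresholds are assumptions, not the paper's feature-extraction rules.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs=360):
    """Return the sample indices of R peaks and the mean heart rate in bpm."""
    height = 0.6 * np.max(ecg)                                          # assumed amplitude threshold
    peaks, _ = find_peaks(ecg, height=height, distance=int(0.4 * fs))   # peaks >= 0.4 s apart
    rr = np.diff(peaks) / fs                                            # R-R intervals in seconds
    bpm = 60.0 / rr.mean() if rr.size else float("nan")
    return peaks, bpm

# Synthetic ECG-like trace: one sharp spike per second on a noisy baseline.
fs = 360
rng = np.random.default_rng(0)
ecg = 0.05 * rng.standard_normal(10 * fs)
ecg[fs // 2 :: fs] += 1.0
peaks, bpm = detect_r_peaks(ecg, fs)
print(len(peaks), round(bpm))   # -> 10 peaks, ~60 bpm
```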
8727 Problems of Boolean Reasoning Based Biclustering Parallelization
Authors: Marcin Michalak
Abstract:
Biclustering is a method of two-dimensional data analysis. For several years it has been possible to express this problem in terms of Boolean reasoning, for processing continuous, discrete, and binary data. The mathematical background of this approach (the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria) is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. In this paper the basics of the Boolean reasoning approach to biclustering are presented. In this context, the problems of parallelizing the computation are raised.
Keywords: Boolean reasoning, biclustering, parallelization, prime implicant.
8726 Automated Fact-Checking By Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study presents a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive and authoritative data; 2) developing a search function to automatically select relevant, new and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that: 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches that use manually selected references. Future directions of this study include i) exploring the knowledge graph in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and ii) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: Fact checking, claim verification, Deep Learning, Natural Language Processing.
8725 Multilevel Classifiers in Recognition of Handwritten Kannada Numerals
Authors: Dinesh Acharya U., N. V. Subba Reddy, Krishnamoorthi Makkithaya
Abstract:
The recognition of handwritten numerals is an important area of research because of its applications in post offices, banks, and other organizations. This paper presents automatic recognition of handwritten Kannada numerals based on structural features. Five different types of features, namely a profile-based 10-segment string, water reservoir, vertical and horizontal strokes, end points, and average boundary length from the minimal bounding box, are used in the recognition of numerals. The effect of each feature and of their combinations on numeral classification is analyzed using nearest neighbor classifiers. It is common to combine multiple categories of features into a single feature vector for classification. Instead, separate classifiers can be used to classify based on each visual feature individually, and the final classification can be obtained from the combination of the separate base classification results. One popular approach is to combine the classifier results into a feature vector and leave the decision to a next-level classifier. This method is extended to extract better information, a possibility distribution, from the base classifiers for resolving conflicts among the classification results. Here, we use the fuzzy k Nearest Neighbor (fuzzy k-NN) as the base classifier for the individual feature sets, the results of which together form the feature vector for the final k Nearest Neighbor (k-NN) classifier. Testing is done, using different features individually and in combination, on a database containing 1600 samples of different numerals, and the results are compared with those of different existing methods.
Keywords: Fuzzy k Nearest Neighbor, Multiple Classifiers, Numeral Recognition, Structural features.
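As a rough sketch of the two-level idea (a separate base classifier per feature category, whose class-membership outputs are concatenated into a new feature vector for a final k-NN), here is a Python example; it uses scikit-learn's crisp k-NN in place of the paper's fuzzy k-NN, and the data and parameter values are purely illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def two_level_classify(train_feature_sets, y_train, test_feature_sets, k=3):
    """train/test_feature_sets: lists of arrays, one array per feature category."""
    meta_train, meta_test = [], []
    for Xtr, Xte in zip(train_feature_sets, test_feature_sets):
        base = KNeighborsClassifier(n_neighbors=k).fit(Xtr, y_train)
        # Class-membership estimates from each base classifier become new features
        # (the paper uses fuzzy-membership values; these would ideally be cross-validated).
        meta_train.append(base.predict_proba(Xtr))
        meta_test.append(base.predict_proba(Xte))
    final = KNeighborsClassifier(n_neighbors=k).fit(np.hstack(meta_train), y_train)
    return final.predict(np.hstack(meta_test))

# Toy example: two feature categories, two numeral classes, 100 training samples.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
f1 = rng.normal(y[:, None], 0.5, size=(100, 4))        # e.g. stroke-based features
f2 = rng.normal(2.0 * y[:, None], 0.7, size=(100, 6))  # e.g. profile-based features
print(two_level_classify([f1, f2], y, [f1[:5], f2[:5]]))   # predictions for 5 samples
```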
8724 Computer-based Alarm Processing and Presentation Methods in Nuclear Power Plants
Authors: Jung-Woon Lee, Jung-Taek Kim, Jae-Chang Park, In-Koo Hwang, Sung-Pil Lyu
Abstract:
Computerized alarm systems have been applied increasingly to nuclear power plants. For existing plants, an add-on computer alarm system is often installed in the control rooms. Alarm avalanches during plant transients are a major problem with the alarm systems in nuclear power plants. Computerized alarm systems can process alarms to reduce the number of alarms during plant transients. This paper describes various alarm processing methods, an alarm cause tracking function, and various alarm presentation schemes for showing alarm information to the operators effectively; these were considered during the development of several computerized alarm systems for Korean nuclear power plants and were found to be helpful to the operators.
Keywords: Alarm processing, Alarm presentation, Alarm cause tracking, Alarm logic diagram computerization, Alarm pattern recognition.
8723 A Study on a Research and Development Cost-Estimation Model in Korea
Authors: Babakina Alexandra, Yong Soo Kim
Abstract:
In this study, we analyzed the factors that affect research funds using linear regression analysis to increase the effectiveness of investments in national research projects. We collected 7,916 items of data on research projects that were in the process of being finished or were completed between 2010 and 2011. Data pre-processing and visualization were performed to derive statistically significant results. We identified factors that affected funding using analysis of fit distributions and estimated increasing or decreasing tendencies based on these factors.
Keywords: R&D funding, Cost estimation, Linear regression, Preliminary feasibility study.
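As a minimal sketch of fitting such a linear model of research funding against project factors by ordinary least squares (Python/NumPy), with factor names and values invented for illustration rather than drawn from the 7,916-project dataset:

```python
import numpy as np

# Hypothetical predictors per project: duration (years), team size, applied-research flag;
# response: funding. The leading column of ones is the intercept term.
X = np.array([[1, 3,  5, 0],
              [1, 2,  8, 1],
              [1, 5, 12, 1],
              [1, 1,  3, 0],
              [1, 4, 10, 1]], dtype=float)
y = np.array([120.0, 180.0, 420.0, 60.0, 330.0])

coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
print("intercept and coefficients:", np.round(coef, 2))
print("fitted funding for the second project:", round(float(X[1] @ coef), 1))
```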
8722 Energy Recovery Potential from Food Waste and Yard Waste in New York and Montréal
Abstract:
Landfilling of organic waste is still the predominant waste management method in the USA and Canada. Strategic plans for waste diversion from landfills are needed to increase material recovery and energy generation from waste. In this paper, we carried out a statistical survey of waste flow in the two cities of New York and Montréal and estimated the energy recovery potential for each case. Data collection and analysis for the organic waste (food waste, yard waste, etc.), paper and cardboard, metal, glass, plastic, carton, textile, electronic products, and other materials were done based on the reports published by the Department of Sanitation in New York and the Service de l'Environnement in Montréal. In order to calculate the gas generation potential of the organic waste, the Buswell equation was used, in which the molar mass of the elements was calculated based on their atomic weights and the amount of organic waste in New York and Montréal. Also, the higher and lower calorific values of the organic waste (solid basis) and the biogas (gas basis) were calculated. According to the results, only 19% (598 kt) and 45% (415 kt) of New York and Montréal waste were diverted from landfills in 2017, respectively. The biogas generation potential of the generated food waste and yard waste amounted to 631 million m3 in New York and 173 million m3 in Montréal. The higher and lower calorific values of the food waste were 3482 and 2792 GWh in New York and 441 and 354 GWh in Montréal, respectively. In the case of yard waste, they were 816 and 681 GWh in New York and 636 and 531 GWh in Montréal, respectively. Considering the higher calorific value, this amount would mean a contribution of around 2.5% of the energy in these cities.
Keywords: Energy recovery, organic waste, urban energy modelling with INSEL, waste flow.
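For reference, the Buswell equation mentioned above, in its common form for a substrate C_aH_bO_cN_d (the exact stoichiometric form used by the authors is not stated in the abstract), is:

```latex
\mathrm{C}_a\mathrm{H}_b\mathrm{O}_c\mathrm{N}_d
  + \left(a - \tfrac{b}{4} - \tfrac{c}{2} + \tfrac{3d}{4}\right)\mathrm{H_2O}
  \longrightarrow
  \left(\tfrac{a}{2} + \tfrac{b}{8} - \tfrac{c}{4} - \tfrac{3d}{8}\right)\mathrm{CH_4}
  + \left(\tfrac{a}{2} - \tfrac{b}{8} + \tfrac{c}{4} + \tfrac{3d}{8}\right)\mathrm{CO_2}
  + d\,\mathrm{NH_3}
```

The methane fraction given by the first bracket on the right-hand side is what determines the calorific value of the biogas estimated in the study.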
8721 Surveillance of Super-Extended Objects: Bimodal Approach
Authors: Andrey V. Timofeev, Dmitry Egorov
Abstract:
This paper describes an effective solution to the task of remote monitoring of super-extended objects (oil and gas pipelines, railways, national frontiers). The suggested solution is based on the principle of simultaneous monitoring of the seismoacoustic and optical/infrared physical fields. The principle of simultaneous monitoring of those fields is not new, but in contrast to known solutions the suggested approach allows super-extended objects to be controlled at very limited operational cost. So-called C-OTDR (Coherent Optical Time Domain Reflectometer) systems are used to monitor the seismoacoustic field. Far-CCTV systems are used to monitor the optical/infrared field. Simultaneous processing of the data provided by both systems allows effective detection and classification of target activities which appear in the vicinity of the monitored objects. The results of practical usage have shown the high effectiveness of the suggested approach.
Keywords: Bimodal processing, C-OTDR monitoring system, LPboost, SVM.
8720 Spatial Audio Player Using Musical Genre Classification
Authors: Jun-Yong Lee, Hyoung-Gook Kim
Abstract:
In this paper, we propose a smart music player that combines musical genre classification and spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected from the audio stream. In parallel with the classification, spatial audio quality is achieved by adding an artificial reverberation in a virtual acoustic space to the input mono sound. Thereafter, the spatial sound is boosted with the given frequency gains based on the musical genre when played back. Experiments measured the accuracy of detecting the musical segment from the audio stream and of its musical genre classification. A listening test was performed on the spatial audio processing based on the virtual acoustic space.
Keywords: Automatic equalization, genre classification, music segment detection, spatial audio processing.
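As a toy illustration of adding an artificial reverberation to a mono input (not the paper's virtual-acoustic-space model), the sketch below convolves the signal with an exponentially decaying noise impulse response and mixes the result back with the dry sound; the decay time and wet/dry mix are arbitrary choices.

```python
import numpy as np

def add_reverb(mono, fs=44100, rt60=0.8, wet=0.3, seed=0):
    """Return the input mixed with a simple synthetic reverberation tail."""
    rng = np.random.default_rng(seed)
    n = int(rt60 * fs)
    t = np.arange(n) / fs
    ir = rng.standard_normal(n) * 10.0 ** (-3.0 * t / rt60)   # -60 dB decay at rt60
    tail = np.convolve(mono, ir)[: mono.size]
    tail /= np.max(np.abs(tail)) + 1e-12                      # normalise the wet signal
    return (1.0 - wet) * mono + wet * tail

# Example: one second of a 440 Hz tone.
fs = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
out = add_reverb(tone, fs)
print(out.shape, float(np.max(np.abs(out))))
```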
8719 DSLEP (Data Structure Learning Platform to Aid in Higher Education IT Courses)
Authors: Estevan B. Costa, Armando M. Toda, Marcell A. A. Mesquita, Jacques D. Brancher
Abstract:
The advances in technology in the last five years have allowed improvements in the educational area, such as the increasing development of educational software. One of the techniques that emerged in this period is called Gamification, which is the utilization of video game mechanics outside their original bounds. Recent studies involving this technique have provided positive results for the application of these concepts in many areas such as marketing, health, and education. In the latter area there are studies that cover everything from elementary to higher education, with many variations to adapt to educators' methodologies. Within higher education, focusing on IT courses, data structures are an important subject taught in many of these courses, as they are the basis for many systems. Based on the above, this paper presents the development of an interactive web learning environment, called DSLEP (Data Structure Learning Platform), to aid students in higher education IT courses. The system includes basic concepts covered in this subject, such as stacks, queues, lists, arrays, and trees, and was implemented to ease the insertion of new structures. It was also implemented with gamification concepts, such as points, levels, and leaderboards, to engage students in the search for knowledge and stimulate self-learning.
Keywords: Gamification, Interactive learning environment, Data structures, e-learning.
8718 Detection of New Attacks on Ubiquitous Services in Cloud Computing and Countermeasures
Authors: L. Sellami, D. Idoughi, P. F. Tiako
Abstract:
Cloud computing provides infrastructure to the enterprise through the Internet, allowing access to cloud services at any time and anywhere. This pervasive aspect of the services, the distributed nature of the data, and the wide use of information make cloud computing vulnerable to intrusions that violate the security of the cloud. This requires the use of security mechanisms to detect malicious behavior in network communications and hosts, such as intrusion detection systems (IDS). In this article, we focus on the detection of intrusions into the cloud using IDSs. We base our approach on client authentication in the computing cloud. This technique allows detecting the abnormal use of ubiquitous services and prevents intrusion into cloud computing. It is an approach based on client authentication data. Our IDS provides intrusion detection inside and outside the cloud computing network. It is a double protection approach: security of the user node and global security of cloud computing.
Keywords: Cloud computing, intrusion detection system, privacy, trust.
8717 AI-Driven Cloud Security: Proactive Defense Against Evolving Cyber Threats
Authors: Ashly Joseph
Abstract:
Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud has a multitude of advantages, including scalability, flexibility, and cost-effectiveness, rendering it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the data, reputation, and financial stability of enterprises. Conventional security methods can become inadequate when confronted with increasingly intricate and dynamic threats. Artificial Intelligence (AI) technologies possess the capacity to significantly transform cloud security through their ability to promptly identify and thwart attacks, adjust to emerging risks, and offer intelligent perspectives for proactive security actions. The objective of this research study is to investigate the utilization of AI technologies in augmenting the security measures within cloud computing systems. This paper aims to offer significant insights and recommendations for businesses seeking to protect their cloud-based assets by analyzing the present state of cloud security, the capabilities of AI, and the possible advantages and obstacles associated with integrating AI into cloud security policies.
Keywords: Machine Learning, Natural Language Processing, Denial-of-Service attacks, Sentiment Analysis, Cloud computing.
8716 Spread Spectrum Image Watermarking for Secured Multimedia Data Communication
Authors: Tirtha S. Das, Ayan K. Sau, Subir K. Sarkar
Abstract:
Digital watermarking is a way to provide secure multimedia data communication in addition to its copyright protection role. The spread spectrum modulation principle is widely used in digital watermarking to obtain robustness of multimedia signals against various signal-processing operations. Several SS watermarking algorithms have been proposed for multimedia signals, but very few works have discussed the issues responsible for secure data communication and the improvement of its robustness. The current paper critically analyzes a few such factors, namely the properties of the spreading codes, proper signal decomposition suitable for data embedding, the security provided by the key, the successive bit cancellation method applied at the decoder (which has a great impact on detection reliability), and secure communication of a significant signal under the camouflage of insignificant signals. Based on the analysis, a robust SS watermarking scheme for secure data communication is proposed in the wavelet domain, and improvement in secure communication and robustness performance is reported through experimental results. The reported results also show improvement in the visual and statistical invisibility of the hidden data.
Keywords: Spread spectrum modulation, spreading code, signal decomposition, security, successive bit cancellation
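A bare-bones illustration of the spread-spectrum principle itself: one bit is embedded by adding a scaled, key-derived pseudo-noise sequence to the host coefficients and recovered from the sign of the correlation. The paper's actual scheme operates on wavelet coefficients with key-based code selection and successive bit cancellation, none of which is reproduced in this Python sketch.

```python
import numpy as np

def embed_bit(coeffs, key, bit, alpha=0.05):
    """Add +/- alpha times a key-derived PN sequence to the host coefficients."""
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.size)
    return coeffs + alpha * (1.0 if bit else -1.0) * pn

def detect_bit(coeffs, key):
    """Decide the hidden bit from the sign of the correlation with the PN sequence."""
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.size)
    return int(np.dot(coeffs, pn) > 0)

rng = np.random.default_rng(42)
host = rng.normal(0.0, 1.0, size=4096)        # stand-in for transform-domain coefficients
marked = embed_bit(host, key=7, bit=1)
print(detect_bit(marked, key=7))              # -> 1 (with high probability)
```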
8715 Clustering Based Formulation for Short Term Load Forecasting
Authors: Ajay Shekhar Pandey, D. Singh, S. K. Sinha
Abstract:
In this article, a clustering-based technique has been developed and implemented for short-term load forecasting. The formulation has been done using the Mean Absolute Percentage Error (MAPE) as the objective function, with the data matrix and cluster size as optimization variables. The model designed uses two temperature variables. It is compared with a six-input Radial Basis Function Neural Network (RBFNN) and a Fuzzy Inference Neural Network (FINN) for the data of the same system and the same time period. The fuzzy inference system has the network structure and the training procedure of a neural network, which initially creates a rule base from existing historical load data. It is observed that the proposed clustering-based model gives better forecasting accuracy compared to the other two methods. Test results also indicate that the RBFNN can forecast future loads with accuracy comparable to that of the proposed method, whereas the training time required in the case of FINN is much less.
Keywords: Load forecasting, clustering, fuzzy inference.
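For reference, the MAPE objective named above has the standard form MAPE = (100/N) Σ |actual - forecast| / |actual|; the Python sketch below is a generic implementation with hypothetical load values, not the authors' clustering formulation.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error: (100 / N) * sum(|actual - forecast| / |actual|)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical hourly loads (MW) and their forecasts.
actual = [510.0, 495.0, 530.0, 620.0]
forecast = [500.0, 505.0, 520.0, 600.0]
print(round(mape(actual, forecast), 2))   # -> 2.27 (percent)
```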
8714 Urban Water Management at the Time of Natural Disaster
Authors: H. Shahabi
Abstract:
Since, in natural disasters, the facilities related to this vital element are underground, it is difficult to quickly find correct, exact, and definite information about the water utilities. Therefore, this work was carried out operationally in Boukan city in Western Azerbaijan, Iran, and it tries to present the operation and capabilities of a Geographical Information System (GIS) in urban water management at the time of natural disasters. The structure of this article is as follows: first, a comprehensive database related to the water utilities was established by collecting, entering, saving, and managing data; then, by modeling the water utilities, we practically considered the operational aspects related to water utility problems in urban regions.
Keywords: Natural Disaster, Geographical Information system (GIS), Modeling and network analysis, Boukan city in Western Azerbaijan, Iran
8713 An Inductive Coupling Based CMOS Wireless Powering Link for Implantable Biomedical Applications
Authors: Lei Yao, Jia Hao Cheong, Rui-Feng Xue, Minkyu Je
Abstract:
A closed-loop controlled wireless power transmission circuit block for implantable biomedical applications is described in this paper. The circuit consists of a front-end rectifier, a power management sub-block including a bandgap reference and low drop-out regulators (LDOs), as well as transmission power detection/feedback circuits. Simulation results show that the front-end rectifier achieves 80% power efficiency with a 750-mV single-ended peak-to-peak input voltage and a 1.28-V output voltage under a load current of 4 mA. The power management block can supply 1.8 mA average load current at 1 V while consuming only 12 μW of power, which is equivalent to 99.3% power efficiency. The wireless power transmission block described in this paper achieves a maximum power efficiency of 80%. The wireless power transmission circuit block is designed and implemented using a UMC 65-nm CMOS/RF process. It occupies 1 mm × 1.2 mm of silicon area.
Keywords: Implantable biomedical devices, wireless power transfer, LDO, rectifier, closed-loop power control
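As a quick check, the quoted 99.3% regulator efficiency can be reproduced from the numbers in the abstract: 1.8 mA delivered at 1 V is 1.8 mW, and with the block's own 12 μW consumption the efficiency is 1.8 / (1.8 + 0.012), roughly 99.3%. A minimal sketch of that arithmetic in Python:

```python
def regulator_efficiency(load_current_a, load_voltage_v, self_power_w):
    """Efficiency = delivered power / (delivered power + power consumed by the block)."""
    p_load = load_current_a * load_voltage_v
    return p_load / (p_load + self_power_w)

# Values quoted in the abstract: 1.8 mA at 1 V with 12 uW self-consumption.
print(f"{regulator_efficiency(1.8e-3, 1.0, 12e-6):.1%}")   # -> 99.3%
```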