Search results for: spatial audio processing
4145 The Fundamental Research and Industrial Application on CO₂+O₂ in-situ Leaching Process in China
Authors: Lixin Zhao, Genmao Zhou
Abstract:
Traditional acid in-situ leaching (ISL) is not suitable for sandstone uranium deposits with low permeability and a high content of carbonate minerals because of blocking by calcium sulfate precipitates. Another factor influencing acid in-situ leaching of uranium is that pyrite in the ore reacts with the oxidation reagent and produces large amounts of sulfate ions, which may speed up the precipitation of calcium sulfate and consume much of the oxidation reagent. Owing to advantages such as lower chemical reagent consumption and less groundwater pollution, the CO₂+O₂ in-situ leaching method has become one of the important research areas in uranium mining. China is the second country in the world to adopt CO₂+O₂ ISL in industrial uranium production. It is shown that CO₂+O₂ ISL in China has been successfully developed. The reaction principle, technical process, well field design and drilling engineering, uranium-bearing solution processing, etc. have been fully studied. At the current stage, several uranium mines use the CO₂+O₂ ISL method to extract uranium from ore-bearing aquifers. The industrial application and development potential of the CO₂+O₂ ISL method in China are summarized. By using CO₂+O₂ neutral leaching technology, the problem of calcium carbonate and calcium sulfate precipitation during uranium mining has been solved. By reasonably regulating the amounts of CO₂ and O₂, the related ions and hydro-chemical conditions can be controlled within limits that avoid the occurrence of calcium sulfate and calcium carbonate precipitation. On this basis, the demand of CO₂+O₂ uranium leaching has been met to the maximum extent, which not only realizes effective leaching of uranium but also avoids the precipitation of calcium carbonate and calcium sulfate, enabling the industrial development of sandstone-type uranium deposits.
Keywords: CO₂+O₂ ISL, industrial production, well field layout, uranium processing
Procedia PDF Downloads 176
4144 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study
Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle
Abstract:
Urban regions are the engines of economic growth. As the economy expands, so does the need for peace and quiet, and noise pollution has become one of the most important social and environmental issues. Health and wellbeing are at risk from environmental noise pollution. Because of urbanisation, population growth, and the consequent rise in the use of increasingly potent, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only become worse as long as air, rail, and highway traffic, which remain the main contributors to noise pollution, continue to grow. The current study was conducted in two zones of a Class I city of central India (population range: 1 million–4 million). A total of 56 measuring points were chosen to assess noise pollution. The first objective evaluates noise pollution in different urban habitats, classified as formal and informal settlements, and compares noise levels between the settlements using a t-test. The second objective assesses noise pollution in silence zones (as designated by the Central Pollution Control Board) in a hierarchical way; it also assesses noise pollution in the settlements and compares it with the prescribed permissible limits using Class I sound level equipment. As appropriate indices, the equivalent noise level on the A-frequency-weighting network (Leq), the minimum sound pressure level (Lmin) and the maximum sound pressure level (Lmax) were computed. The survey was conducted over a period of one week. ArcGIS was used to plot and map the temporal and spatial variability in the urban setting. Noise levels at most stations, particularly at heavily trafficked crossroads, squares, and subway stations, were found to be significantly different from and higher than the acceptable limits. The study highlights vulnerable areas that should be considered in city planning, and it calls for area-level planning when preparing a development plan as well as attention to noise pollution from the perspective of residential and silence zones. City planning in urban areas neglects noise pollution assessment at the city level; as a result, irrespective of noise pollution guidelines, the ground reality is far from their intent, producing land use that is incompatible with respect to noise at the neighbourhood scale. The study's final results will be useful to policymakers, architects and administrators in developing countries for the governance of noise pollution in urban habitats through efficient decision-making and policy formulation.
Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area
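As a hedged illustration of the indices mentioned above, the sketch below computes the equivalent continuous level (Leq), Lmin and Lmax from a series of A-weighted sound-pressure-level samples and compares two settlement types with a t-test; the sample arrays and their length are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def leq(spl_db):
    """Equivalent continuous level: energy average of SPL samples (dB)."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(spl_db) / 10)))

# Hypothetical weekly samples (dBA) at one formal and one informal settlement point
formal = np.array([62.1, 65.4, 70.2, 68.9, 66.0, 71.3, 64.8])
informal = np.array([70.5, 74.2, 76.8, 73.1, 75.0, 77.4, 72.6])

for name, x in [("formal", formal), ("informal", informal)]:
    print(name, "Leq=%.1f dBA" % leq(x), "Lmin=%.1f" % x.min(), "Lmax=%.1f" % x.max())

# Two-sample t-test comparing the settlement types, as in the first objective
t, p = stats.ttest_ind(formal, informal, equal_var=False)
print("t = %.2f, p = %.4f" % (t, p))
```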
Procedia PDF Downloads 118
4143 Hydrogen Production from Solid Waste of Sago Processing Industries in Indonesia: Effect of Chemical and Biological Pretreatment
Authors: Pratikno Hidayat, Khamdan Cahyari
Abstract:
Hydrogen is the ultimate choice of energy carrier for the future. It has a high energy density (42 kJ/g), emits only water vapor during combustion, and reaches energy conversion efficiencies of up to 50% in fuel cell applications. One promising method to produce hydrogen from organic waste is dark fermentation, which utilizes sugar-rich organic waste as substrate and hydrogen-producing microorganisms to generate the hydrogen. Solid waste from sago processing industries in Indonesia is a promising raw material both for producing hydrogen biofuel and for mitigating the environmental impact of waste disposal. This research investigated the effect of chemical and biological pretreatment, i.e., acid treatment and mushroom cultivation, on the lignocellulosic waste of these sago industries. Chemical pretreatment was conducted by exposing the waste to acidic conditions using sulfuric acid (H₂SO₄) at various concentrations (0.2, 0.3, and 0.4 M) and exposure durations (30, 60, and 90 minutes). Biological treatment was conducted by using the solid waste as a growth medium for mushrooms (oyster and ling-zhi) for three months. Dark fermentation was conducted at pH 5.0, a temperature of 27 °C and atmospheric pressure. Chemical and biological pretreatment improved the hydrogen yield, with the highest yield of 3.8 ml/g VS (31 vol% H₂). The hydrogen production generated a high percentage of hydrogen, although the yield was still low. This result indicates that the acid exposure and the biological treatment might need to be extended to improve the degradability of the solid waste. The high percentage of hydrogen, however, resulted from proper pretreatment of residual biogas-plant sludge to generate the hydrogen-producing inoculum.
Keywords: hydrogen, sago waste, chemical, biological, dark fermentation, Indonesia
Procedia PDF Downloads 366
4142 Content-Based Color Image Retrieval Based on the 2-D Histogram and Statistical Moments
Authors: El Asnaoui Khalid, Aksasse Brahim, Ouanan Mohammed
Abstract:
In this paper, we are interested in the problem of finding similar images in a large database. For this purpose, we propose a new algorithm based on a combination of 2-D histogram intersection in the HSV space and statistical moments. The proposed histogram is based on a 3x3 window and not only on the intensity of the pixel. This approach overcomes the drawback of the conventional 1-D histogram, which ignores the spatial distribution of pixels in the image, while the statistical moments are used to escape the effects of the discretisation of the color space that is intrinsic to the use of histograms. We compare the performance of our new algorithm to various state-of-the-art methods and show that it has several advantages: it is fast, consumes little memory and requires no learning. To validate our results, we apply this algorithm to search for similar images in different image databases.
Keywords: 2-D histogram, statistical moments, indexing, similarity distance, histograms intersection
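A minimal sketch of the two building blocks named above, histogram intersection in HSV space and the first color moments, using OpenCV and NumPy; the 3x3-window weighting and the exact fusion rule of the paper are not reproduced, and the file names and the weight alpha are placeholders.

```python
import cv2
import numpy as np

def hsv_histogram(path, bins=(8, 8, 8)):
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return h.ravel() / h.sum()                      # normalized color histogram

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()                 # 1.0 means identical distributions

def color_moments(path):
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(float)
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])  # first two moments per channel

def similarity(p1, p2, alpha=0.7):
    inter = histogram_intersection(hsv_histogram(p1), hsv_histogram(p2))
    dist = np.linalg.norm(color_moments(p1) - color_moments(p2))
    return alpha * inter - (1 - alpha) * dist       # higher score = more similar

print(similarity("query.jpg", "candidate.jpg"))
```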
Procedia PDF Downloads 457
4141 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a correspondingly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how the ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae and invertebrates, depend largely on the sounds they use to hunt, feed, avoid predators, socialize and communicate during reproduction, or defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean Planner is a web-based platform that provides regulators, managers of protected or sensitive areas and other stakeholders with a decision-support tool to anticipate and quantify the effectiveness of management measures in reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Areas, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Areas) scales, seasonal (regulation over a period of time) or permanent, and partial (focused on some maritime activities) or complete (all maritime activities). Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decide on the most effective measures to apply in order to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
Procedia PDF Downloads 122
4140 Pharmaceutical Science and Development in Drug Research
Authors: Adegoke Yinka Adebayo
Abstract:
An understanding of the critical product attributes that impact in vivo performance is key to the production of safe and effective medicines. Thus, a key driver for our research is the development of new basic science and technology underpinning new pharmaceutical products. Research includes the structure and properties of drugs and excipients, biopharmaceutical characterisation, pharmaceutical processing and technology, and formulation and analysis.
Keywords: drug discovery, drug development, drug delivery
Procedia PDF Downloads 494
4139 Energy Production with Closed Methods
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the electricity supply problem is severe, and supply does not meet consumer demand. Older thermal power plants, which are regarded as major environmental polluters, produce most of the energy. Our experiment is based on producing electricity by a closed method that does not contribute to environmental pollution, using as fuel waste that would otherwise pollute the environment. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line providing electricity and central heating at the same time was designed. The results are the benefit of electricity as well as heat for heating, with minimal expense and with no gases released into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows the release of gas through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by gas filtration through wood-processing waste (sawdust). During this process, the final product, gas, is obtained; it passes through the carburetor, which enables the combustion process and drives the internal combustion engine and the generator, producing electricity without releasing gases into the atmosphere. The obtained results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. The final results show that coal fuel yielded the most electricity and the highest heat release, followed by plastic waste, which also gave good results. The results obtained during these experiments prove that the current problems of lack of electricity and heating can be addressed at a lower cost while keeping a clean environment and sound waste management.
Keywords: energy, heating, atmosphere, waste, gasification
Procedia PDF Downloads 235
4138 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and they can analyze and apply complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on Edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the KNN technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. On the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand research and industrial adoption.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
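A minimal sketch, assuming synthetic data, of the processing chain described above: FFT band powers plus mean and standard deviation as features, classified with KNN; the sampling rate, epoch length, bands and labels are placeholders rather than the cEEGrid/OpenBCI configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250                       # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # typical EEG bands

def features(epoch):
    """Band powers via FFT plus mean and std of a 1-D EEG epoch."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    powers = [spec[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]
    return np.array(powers + [epoch.mean(), epoch.std()])

# Synthetic 2-second epochs with arousal/valence-style binary labels (placeholders)
rng = np.random.default_rng(0)
X = np.array([features(rng.normal(size=FS * 2)) for _ in range(100)])
y = rng.integers(0, 2, size=100)      # e.g. 0 = low arousal, 1 = high arousal

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:80], y[:80])
print("held-out accuracy:", knn.score(X[80:], y[80:]))
```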
Procedia PDF Downloads 105
4137 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial purposes. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner's consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult for the general reader to read, comprehend or interpret. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to perform remarkably well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand without redundant formatting and difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
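As a hedged sketch of the kind of abstractive pipeline discussed above (not the authors' system), the snippet below runs a pretrained sequence-to-sequence summarizer from the Hugging Face transformers library over a patent-style paragraph; the library's default checkpoint and the generation lengths are illustrative assumptions, and a patent-specific model would normally be substituted.

```python
from transformers import pipeline

# Abstractive summarization with a pretrained encoder-decoder model.
# No model is pinned here, so the library's default summarization checkpoint is used.
summarizer = pipeline("summarization")

patent_text = (
    "A claimed apparatus comprising a housing, a sensor module disposed within the housing, "
    "and a controller configured to receive signals from the sensor module and to actuate "
    "a valve in response to a detected pressure exceeding a predetermined threshold."
)

summary = summarizer(patent_text, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```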
Procedia PDF Downloads 123
4136 Human Behavior Modeling in Video Surveillance of Conference Halls
Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini
Abstract:
In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model normal behaviors in conference halls. We exploit the Probabilistic Latent Semantic Analysis (PLSA) technique, using the 'Bag-of-Terms' paradigm, as a tool for exploring video data and learning the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation preserves the spatial information, the object trajectory, and the motion. The main advantage of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security.
Keywords: activity modeling, clustering, PLSA, video representation
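A minimal sketch of PLSA fitted by expectation-maximization on a toy term-document (term-clip) count matrix; the matrix, topic count, and iteration budget are placeholders, and the spatio-temporal patch vocabulary itself is not reproduced here.

```python
import numpy as np

def plsa(counts, n_topics=2, n_iter=50, seed=0):
    """PLSA via EM on a (clips x terms) count matrix.
    Returns P(topic|clip) and P(term|topic)."""
    rng = np.random.default_rng(seed)
    n_docs, n_terms = counts.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_terms)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every clip/term pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]            # shape (d, z, w)
        resp = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate both factor matrices from expected counts
        weighted = counts[:, None, :] * resp
        p_w_z = weighted.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True)
        p_z_d = weighted.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True)
    return p_z_d, p_w_z

# Toy bag-of-terms matrix: rows = video clips, columns = motion-direction "terms"
counts = np.array([[5, 1, 0, 0], [4, 2, 0, 1], [0, 0, 6, 2], [1, 0, 5, 3]], float)
p_topic_given_clip, p_term_given_topic = plsa(counts)
print(np.round(p_topic_given_clip, 2))   # clips with similar activity share a dominant topic
```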
Procedia PDF Downloads 394
4135 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved over the past years into an important means of data authentication and ownership protection. Image and video watermarking are well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike image and video watermarking, where frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to modifications of the variances of the vertex norms. Statistical analyses were performed to establish the distributions that best fit each mesh and hence to establish the bin sizes. Several optimization approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they also showed robustness. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
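A hedged sketch of the general vertex-norm idea described above: vertex distances from the object center are grouped into bins and each bin's statistics are nudged (here by a simple radial scaling, not the paper's variance-based rule) to encode one watermark bit; the toy point cloud, bin count, and embedding strength are placeholders.

```python
import numpy as np

def embed_watermark(vertices, bits, strength=0.01):
    """Nudge per-bin statistics of vertex norms (distance to the centroid)
    to encode one bit per bin. Returns the watermarked vertex array."""
    v = np.asarray(vertices, float)
    center = v.mean(axis=0)
    norms = np.linalg.norm(v - center, axis=1)
    edges = np.quantile(norms, np.linspace(0, 1, len(bits) + 1)[1:-1])
    bins = np.digitize(norms, edges)
    out = v.copy()
    for b, bit in enumerate(bits):
        idx = bins == b
        # push norms in this bin slightly outward (bit 1) or inward (bit 0)
        scale = 1 + strength if bit else 1 - strength
        out[idx] = center + (v[idx] - center) * scale
    return out

# Toy "mesh": random points on a unit sphere, 4-bit watermark
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3)); pts /= np.linalg.norm(pts, axis=1, keepdims=True)
wm = embed_watermark(pts, bits=[1, 0, 1, 1])
print("max vertex displacement:", np.abs(wm - pts).max())
```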
Procedia PDF Downloads 160
4134 Natural Gas Flow Optimization Using Pressure Profiling and Isolation Techniques
Authors: Syed Tahir Shah, Fazal Muhammad, Syed Kashif Shah, Maleeha Gul
Abstract:
In recent years, natural gas has become a relatively clean, high-quality source of energy, recovered from deep wells by expensive drilling activities. The recovered substance is purified by processing in multiple stages to remove unwanted contaminants such as dust, dirt, crude oil and other particles. Gas utilities are mostly concerned with the essential objectives of quantity and quality of natural gas delivery, financial outcome, and a safe volumetric inventory of natural gas in the transmission pipeline. Gas quantity and quality are primarily related to standards and advanced metering procedures in processing units and transmission systems, while the financial outcome is defined by gas purchasing and selling as well as the operational cost of the transmission pipeline. SNGPL (Sui Northern Gas Pipelines Limited), Pakistan, operates a natural gas transmission network of over 9125 km with a wide range of pipeline diameters. This research addresses some of the issues in accuracy and metering procedures by applying multiple advanced instruments for gas flow attributes in the transmission system, and it examines the effects of good pressure management in the transmission pipeline network with a view to boosting the gas volume stored in the existing network and ultimately curbing UFG (unaccounted-for gas) losses for financial benefit. Furthermore, based on the results and their observation, it is recommended to raise the maximum allowable operating pressure (MAOP) of the system from the current roughly 900 PSIG to 1235 PSIG, such that the capacity of the network can be fully utilized. Overall, the results show that the current model is very efficient and provides excellent results in the minimum possible time.
Keywords: natural gas, pipeline network, UFG, transmission pack, AGA
Procedia PDF Downloads 95
4133 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
Computer numerical control (CNC) machining plays a crucial role in the manufacturing process. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, resulting in the need for substantial time to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand. It minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
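A hedged sketch of the Pareto ranking at the heart of NSGA-II, applied to toy (completion time, negative utilization) objective pairs; the candidate layouts and objective values are placeholders, and crowding distance, crossover and mutation are omitted.

```python
import numpy as np

def non_dominated_front(objectives):
    """Return indices of solutions not dominated by any other
    (both objectives are minimized)."""
    obj = np.asarray(objectives, float)
    front = []
    for i, a in enumerate(obj):
        dominated = any(
            np.all(b <= a) and np.any(b < a) for j, b in enumerate(obj) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Toy candidate layouts: (demand completion time in hours, -average machine utilization)
candidates = [(12.0, -0.71), (10.5, -0.64), (11.2, -0.69), (13.0, -0.75), (10.9, -0.70)]
print("Pareto-optimal layout indices:", non_dominated_front(candidates))
```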
Procedia PDF Downloads 21
4132 Development of a Mobile APP for Establishing Thermal Sensation Maps using Citizen Participation
Authors: Jeong-Min Son, Jeong-Hee Eum, Jin-Kyu Min, Uk-Je Sung, Ju-Eun Kim
Abstract:
While various environmental problems have become severe due to climate change, urban thermal environment problems such as heat waves and tropical nights are worsening especially in cities, where population and development are concentrated. Accordingly, the Korean government provides basic data related to the urban thermal environment to support each local government in effectively establishing policies to cope with heat waves. However, the basic data provided by the government have limitations for establishing regional thermal adaptation plans at the minimum unit of cities, counties, and districts. In addition, the urban heat environment perceived by people differs across regions and spaces. Therefore, it is necessary to prepare practical measures for establishing region-based policies for heat wave adaptation by identifying people's heat perception across the entire city. This study aims to develop a mobile phone application (APP) to gather people's thermal sensation information and create Korea's first thermal map based on this information. In addition, through this APP, citizens directly propose thermal adaptation policies, and urban planners and policymakers take up citizens' opinions, so this study provides a tool for solving local thermal environment problems. To achieve this purpose, the composition and contents of the APP were first discussed by examining various existing apps and cases of citizen participation and collection of heat information. In addition, factors affecting human thermal comfort, such as spatial, meteorological, and demographic factors, were investigated to construct the APP system. Based on these results, the basic version of the APP was developed. Second, the living lab methodology was adopted to gather people's heat perception using the developed APP and to collect users' overall evaluation of and feedback on the APP. The participants in the living lab were residents of Daegu Metropolitan City, South Korea, which records high temperatures every year. Through the living lab, the user interface was improved to make the APP easier to use, and the thermal map was modified. This study is expected to establish high-resolution thermal maps for effective policies and measures and to solve local thermal environment problems using the APP. The collected information can be used to evaluate spatial, meteorological, and demographic characteristics that affect the perceived heat of citizens. In addition, the research can be expanded by gathering thermal information perceived by citizens of other cities in South Korea as well as foreign cities through the APP developed in this study.
Keywords: mobile application, living lab, thermal map, climate change adaptation
Procedia PDF Downloads 86
4131 Copywriting and the Creative Edge
Authors: Dandeswar Bisoyi, Preeti Yadav, Utpal Barua
Abstract:
This study addresses the particular way in which verbal information can affect the processing of positive and interesting qualities that help make a brand attractive to the consumer. It also addresses the development of a communication strategy, which is a very important part of the marketing plan and has to take many factors into account. Out of all the product strengths, the strategy has to outline one marked differential that will drive the brand. This is the fundamental base on which the entire big-idea-based creative strategy is built.
Keywords: copy writing, advertisement, marketing, branding, recall
Procedia PDF Downloads 582
4130 Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking
Authors: Esmeralda Hysenbelliu
Abstract:
The aim of this paper is to present an enhanced QoE (Quality of Experience) IPTV SDN-based media streaming server architecture for configuring, controlling, managing, and provisioning improved delivery of the IPTV service application with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, highly flexible and efficient service load-balancing systems are enabled, based on a load-balance module and on a GeoIP service. These two load-balancing systems greatly improve IPTV end-users' Quality of Experience (QoE) through optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach provides several important features and opportunities for meeting the critical QoE metrics for the IPTV service, such as achieving a very fast zapping time (channel switching time) of less than 0.1 seconds. The approach also enables an easy and powerful transcoding system via the FFmpeg encoder: it can customize streaming dimensions, bitrates, latency management and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing stream failure time (channel failure time < 0.1 seconds) and improving the quality of the streaming services.
Keywords: improved quality of experience (QoE), OpenFlow SDN controller, IPTV service application, softwarization
Procedia PDF Downloads 147
4129 Feasibility Study for Removing Atherosclerotic Plaque Using the Thermal Effects of a Planar Rectangular High Intensity Ultrasound Transducer
Authors: Christakis Damianou, Christos Christofi, Nicos Mylonas
Abstract:
The aim of this paper was to conduct a feasibility study using a flat rectangular (3x10 mm²) MRI-compatible transducer operating at 5 MHz to destroy atherosclerotic plaque through the thermal effects of ultrasound in in vitro models. A parametric study was performed in which the time needed to ablate the plaque was studied as a function of spatial average temporal average (SATA) intensity and pulse duration. The time needed to ablate plaque is directly related to intensity and pulse duration. The temperature measured close to the artery is above safe limits, and therefore thermal ultrasound does not have a place in removing plaque from arteries.
Keywords: ultrasound, atherosclerotic, plaque, pulse
Procedia PDF Downloads 293
4128 Non-Homogeneous Layered Fiber Reinforced Concrete
Authors: Vitalijs Lusis, Andrejs Krasnikovs
Abstract:
Fiber reinforced concrete is an important material for load-bearing structural elements. Usually, fibers are homogeneously distributed in the concrete body with arbitrary spatial orientations. At the same time, in many situations, fiber concrete with oriented fibers is more optimal. Obviously, it is possible to create constructions with oriented short fibers in them in different ways. The present research is devoted to one such approach: fiber reinforced concrete prisms with dimensions of 100 mm×100 mm×400 mm containing layers of non-homogeneously distributed fibers were fabricated. Prisms with homogeneously dispersed fibers were produced as a reference as well. The prisms were tested under four-point bending conditions. During the tests, the vertical deflection at the center of every prism and the crack opening were measured (using linear displacement transducers in real time). Prediction results are discussed.
Keywords: fiber reinforced concrete, 4-point bending, steel fiber, construction engineering
Procedia PDF Downloads 367
4127 Strongly Disordered Conductors and Insulators in Holography
Authors: Matthew Stephenson
Abstract:
We study the electrical conductivity of strongly disordered, strongly coupled quantum field theories, holographically dual to non-perturbatively disordered uncharged black holes. The computation reduces to solving a diffusive hydrostatic equation for an emergent horizon fluid. We demonstrate that a large class of theories in two spatial dimensions have a universal conductivity independent of disorder strength, and rigorously rule out disorder-driven conductor-insulator transitions in many theories. We present a (fine-tuned) axion-dilaton bulk theory which realizes the conductor-insulator transition, interpreted as a classical percolation transition in the horizon fluid. We address aspects of strongly disordered holography that can and cannot be addressed via mean-field modeling, such as massive gravity.
Keywords: theoretical physics, black holes, holography, high energy
Procedia PDF Downloads 178
4126 Hot Deformability of Si-Steel Strips Containing Al
Authors: Mohamed Yousef, Magdy Samuel, Maha El-Meligy, Taher El-Bitar
Abstract:
The present work deals with a 2% Si-steel alloy. The alloy contains 0.05% C as well as 0.85% Al. The alloy under investigation is intended for electrical transformer purposes. A heating (expansion) - cooling (contraction) dilation investigation was executed to detect the α, α+γ, and γ transformation temperatures at the inflection points of the dilation curve. On heating, primary α was detected in the temperature range between room temperature and 687 °C. The α+γ domain was detected in the range between 687 °C and 746 °C. The γ phase exists in the closed γ region between 746 °C and 1043 °C. The α-phase domain appears again in the temperature range between 1043 °C and 1105 °C, followed by secondary α at temperatures higher than 1105 °C. A physical simulation of thermo-mechanical processing of the as-cast alloy was carried out. The simulation took into consideration the parameters of the hot flat rolling pilot plant and was executed on a thermo-mechanical simulator (Gleeble 3500). The process was designed to include seven consecutive passes. The 1st pass represents the roughing stage, while the remaining six passes represent the finish rolling stage. The whole process was executed in the temperature range from 1100 °C to 900 °C. The amount of strain starts at 23.5% in the roughing pass and decreases continuously to reach 7.5% at the last finishing pass. The flow curve of the alloy can be extracted from the stress-strain curves of the simulated passes. It shows hardening of the alloy from one pass to the next up to pass no. 6, as a result of the decreasing deformation temperature and the increasing cumulative strain. After pass no. 6, the deformation process promotes the appearance of dynamic recrystallization, where the Z-parameter is high.
Keywords: Si-steel, hot deformability, critical transformation temperature, physical simulation, thermo-mechanical processing, flow curve, dynamic softening
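The Z-parameter mentioned above is normally the Zener-Hollomon parameter, Z = strain rate x exp(Q/RT); the short sketch below evaluates it for illustrative strain rates and an assumed activation energy, since the paper's actual values are not given.

```python
import numpy as np

R = 8.314          # J/(mol*K), universal gas constant
Q = 300e3          # J/mol, assumed hot-deformation activation energy (placeholder)

def zener_hollomon(strain_rate, temp_c):
    """Z = strain_rate * exp(Q / (R*T)), with the temperature converted to kelvin."""
    return strain_rate * np.exp(Q / (R * (temp_c + 273.15)))

# Illustrative pass conditions: Z rises as the deformation temperature drops
for temp in (1100, 1000, 900):
    print(f"{temp} °C: Z = {zener_hollomon(1.0, temp):.3e} s^-1")
```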
Procedia PDF Downloads 245
4125 Heavy Metal Contents in Vegetable Oils of Kazakhstan Origin and Life Risk Assessment
Authors: A. E. Mukhametov, M. T. Yerbulekova, D. R. Dautkanova, G. A. Tuyakova, G. Aitkhozhayeva
Abstract:
The accumulation of heavy metals in food is a persistent problem in many parts of the world. Vegetable oils are widely used, both for cooking and for processing in the food industry, and meet essential dietary requirements. Heavy metals, one of the main groups of chemical pollutants, are commonly found in vegetable oils. These pollutants are carcinogenic, teratogenic and immunotoxic, harmful when consumed, and have a negative effect on human health even in trace amounts. Residues of these substances can easily accumulate in vegetable oil during cultivation, processing and storage. In this article, the concentrations of heavy metal ions in vegetable oils of Kazakhstan production are studied: sunflower, rapeseed, safflower and linseed oil. The heavy metals arsenic, cadmium, lead and nickel were determined in three repetitions by flame atomic absorption. Analysis of the vegetable oil samples revealed that the highest lead (Pb) contamination, 0.065 mg/kg, was found in linseed oil. The highest cadmium (Cd) content, 0.009 mg/kg, was found in safflower oil. Arsenic (As) was determined in rapeseed and safflower oils at 0.003 mg/kg and was not detected in linseed and sunflower oil. The highest nickel (Ni) content, 0.433 mg/kg, was found in linseed oil. The heavy metal contents in the test samples complied with the requirements of regulatory documents for vegetable oils. An assessment of the health risk at a consumption of 36 g of oil per day shows that all samples of vegetable oils produced in Kazakhstan are safe for consumption. However, further monitoring is needed, since all these metals are toxic and their harmful effects may become apparent only after several years of exposure.
Keywords: vegetable oil, sunflower oil, linseed oil, safflower oil, toxic metals, food safety, rape oil
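A hedged sketch of a standard dietary-exposure calculation consistent with the assessment described above: estimated daily intake (EDI) from the reported concentrations and the 36 g/day consumption, divided by an oral reference dose to give a target hazard quotient; the body weight and the reference-dose values are commonly used assumptions, not values from the paper.

```python
# Estimated daily intake (EDI) and target hazard quotient (THQ) sketch
CONSUMPTION_G_PER_DAY = 36.0      # from the abstract
BODY_WEIGHT_KG = 60.0             # assumed adult body weight

# Reported maximum concentrations (mg/kg oil) and assumed oral reference doses (mg/kg bw/day)
metals = {
    "Pb": (0.065, 0.0035),
    "Cd": (0.009, 0.001),
    "As": (0.003, 0.0003),
    "Ni": (0.433, 0.02),
}

for metal, (conc_mg_per_kg, rfd) in metals.items():
    edi = conc_mg_per_kg * (CONSUMPTION_G_PER_DAY / 1000.0) / BODY_WEIGHT_KG  # mg/kg bw/day
    thq = edi / rfd                      # THQ < 1 is usually read as no appreciable risk
    print(f"{metal}: EDI = {edi:.2e} mg/kg bw/day, THQ = {thq:.3f}")
```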
Procedia PDF Downloads 133
4124 Information Extraction for Short-Answer Question for the University of the Cordilleras
Authors: Thelma Palaoag, Melanie Basa, Jezreel Mark Panilo
Abstract:
Checking short-answer questions and essays, whether in paper or electronic form, is a tiring and tedious task for teachers. Evaluating a student's output requires knowledge across a wide array of domains, and scoring the work is often a critical task. Several attempts have been made in the past few years to create automated writing assessment software, but they have received negative responses from teachers and students alike due to unreliable scoring, lack of feedback, and other shortcomings. This study aims to create an application that can check short-answer questions by incorporating information extraction. Information extraction is a subfield of Natural Language Processing (NLP) in which a chunk of text (technically known as unstructured text) is broken down to gather the necessary bits of data and/or keywords (structured text) to be further analyzed or utilized by query tools. The proposed system extracts keywords or phrases from the individual's answers and matches them against a corpus of words (as defined by the instructor), which is the basis for evaluating the answer. The proposed system also enables the teacher to provide feedback and re-evaluate the student's output for writing elements that the computer cannot fully evaluate, such as creativity and logic. Teachers can formulate, design, and check short-answer questions efficiently by defining keywords or phrases as parameters and assigning weights for checking answers. With the proposed system, the time teachers spend checking and evaluating student output is reduced, making teachers more productive and their work easier.
Keywords: information extraction, short-answer question, natural language processing, application
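A minimal sketch of the weighted keyword-matching idea described above: instructor-defined keywords with weights are matched against an extracted answer and summed into a score; the rubric, weights, and the simple lowercase matching are illustrative assumptions, not the system's actual rules.

```python
import re

def score_answer(answer, weighted_keywords):
    """Sum the weights of instructor-defined keywords/phrases found in the answer."""
    text = answer.lower()
    earned, matched = 0.0, []
    for phrase, weight in weighted_keywords.items():
        if re.search(r"\b" + re.escape(phrase.lower()) + r"\b", text):
            earned += weight
            matched.append(phrase)
    total = sum(weighted_keywords.values())
    return earned / total, matched

# Hypothetical rubric for a short-answer question
rubric = {"photosynthesis": 2.0, "chlorophyll": 1.0, "sunlight": 1.0, "glucose": 1.0}
answer = "Plants use sunlight and chlorophyll to perform photosynthesis."
score, found = score_answer(answer, rubric)
print(f"score = {score:.0%}, matched = {found}")
```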
Procedia PDF Downloads 428
4123 The Use of Political Savviness in Dealing with Workplace Ostracism: A Social Information Processing Perspective
Authors: Amy Y. Wang, Eko L. Yi
Abstract:
Can vicarious experiences of workplace ostracism affect employees' willingness to voice? Given the increasingly interdependent nature of the modern workplace, in which employees rely on social interactions to fulfill organizational goals, workplace ostracism – the extent to which an individual perceives that he or she is ignored or excluded by others in the workplace – has garnered significant interest from scholars and practitioners alike. Extending beyond conventional studies that largely focus on the perspectives and outcomes of ostracized targets, we address the indirect effects of workplace ostracism on third-party employees embedded in the same social context. Using a social information processing approach, we propose that the ostracism of coworkers acts as political information that influences third-party employees in their decisions to engage in risky and discretionary behaviors such as employee voice. To make sense of and navigate through experiences of workplace ostracism, we posit that both political understanding and political skill allow third-party employees to minimize the risks and uncertainty of voicing. This conceptual model was tested in a study involving 154 supervisor-subordinate dyads of a publicly listed biotechnology firm located in Mainland China. Each supervisor and their direct subordinates composed a work team; each team had a minimum of two and a maximum of four subordinates. Human resources used the master list to distribute the ID-coded questionnaires to the matching names. All studied constructs were measured using existing scales proven effective in previous literature. Hypotheses were tested using confirmatory factor analysis and hierarchical multiple regression. All three hypotheses were supported, showing that employees were less likely to engage in voice behaviors when their coworkers reported having experienced ostracism in the workplace. Results also showed a significant three-way interaction of coworkers' ostracism, political understanding, and political skill on employee voice, indicating that political savviness is a valuable resource in mitigating ostracism's negative and indirect effects. Our results illustrate that an employee's coworkers being ostracized indeed adversely impacts his or her own voice behavior. However, not all individuals react passively to the social context; rather, we found that the voice behaviors of politically savvy individuals – those possessing both political understanding and political skill – were less impacted by ostracism in their work environment. At the same time, having only political understanding or only political skill was significantly less effective in mitigating ostracism's negative effects, suggesting a necessary duality of political knowledge and political skill in combatting ostracism. Organizational implications, recommendations, and future research ideas are also discussed.
Keywords: employee voice, organizational politics, social information processing, workplace ostracism
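A hedged sketch of the hierarchical moderated-regression step described above, fitted with statsmodels on synthetic dyad-level data; the variable names and simulated effects are placeholders, and the confirmatory-factor-analysis measurement step is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the dyad-level data (coworkers' ostracism, political
# understanding, political skill, and supervisor-rated voice)
rng = np.random.default_rng(42)
n = 154
df = pd.DataFrame({
    "ostracism": rng.normal(size=n),
    "understanding": rng.normal(size=n),
    "skill": rng.normal(size=n),
})
df["voice"] = (-0.3 * df.ostracism
               + 0.2 * df.ostracism * df.understanding * df.skill
               + rng.normal(scale=0.5, size=n))

# Step 1: main effects only; Step 2: add all two- and three-way interaction terms
step1 = smf.ols("voice ~ ostracism + understanding + skill", data=df).fit()
step2 = smf.ols("voice ~ ostracism * understanding * skill", data=df).fit()
print("R² step 1: %.3f, step 2: %.3f" % (step1.rsquared, step2.rsquared))
print(step2.params.filter(like=":"))     # interaction coefficients incl. the three-way term
```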
Procedia PDF Downloads 140
4122 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research relates to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained at the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation that allow these parameters to be determined. It is shown that these parameters can be deduced from data on the space distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at at least two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li₂O·SiO₂. Cubical samples of the glass-ceramics (6x6x6 mm³) underwent the ion exchange process in a NaNO₃ salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion-exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentrations and average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided reliable agreement between the simulation and the experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely the concentration and average size of the crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
Procedia PDF Downloads 269
4121 Advancing Women's Participation in SIDS' Renewable Energy Sector: A Multicriteria Evaluation Framework
Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos
Abstract:
Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than that of their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" encompass attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist. In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.
Keywords: gender, women, spatial analysis, renewable energy, access
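A hedged sketch of a generic weighted multicriteria score of the kind the framework above could produce for a spatial unit: factor values already scaled to 0-1 are averaged within each of the four dimensions and combined with dimension weights; the factor values and the equal weights are placeholders, not the framework's actual 24 indicators.

```python
import numpy as np

# Illustrative factor scores (0-1, higher = more enabling) for three spatial units,
# grouped by the four dimensions named above. Values and weights are placeholders.
dimensions = {
    "individual":    np.array([[0.6, 0.7], [0.4, 0.5], [0.8, 0.9]]),
    "contextual":    np.array([[0.5, 0.6], [0.7, 0.4], [0.9, 0.8]]),
    "accessibility": np.array([[0.3, 0.5], [0.6, 0.7], [0.8, 0.6]]),
    "place":         np.array([[0.4, 0.4], [0.5, 0.6], [0.7, 0.9]]),
}
weights = {name: 0.25 for name in dimensions}     # equal weighting assumed

def composite_scores(dimensions, weights):
    """Average the factors within each dimension, then combine the dimensions by weight."""
    total = np.zeros(next(iter(dimensions.values())).shape[0])
    for name, factors in dimensions.items():
        total += weights[name] * factors.mean(axis=1)
    return total

print("enabling-environment score per spatial unit:", composite_scores(dimensions, weights))
```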
Procedia PDF Downloads 69
4120 A Multicriteria Evaluation Framework for Enhancing Women's Participation in SIDS Renewable Energy Sector
Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos
Abstract:
Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than that of their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" encompass attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist. In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.
Keywords: gender, women, spatial analysis, renewable energy, access
Procedia PDF Downloads 83
4119 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution
Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko
Abstract:
Dual-comb interferometry is based on mixing two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging (LIDAR) systems suffer from two main disadvantages: slow and unreliable mechanical spatial scanning, and the rather wide linewidth of conventional lasers, which limits speed-measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made such devices bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si₃N₄ microresonators. We obtained robust comb generation in the 1400-1700 nm range with 150 GHz or 1 THz line spacing and measured sub-1 kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser. A deep investigation of the SIL dynamics allowed us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as pumps. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. In order to test the advantages of the proposed techniques, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, showing velocity resolution down to 16 nm/s, while the non-SIL diode laser allowed only 160 nm/s with good accuracy. The results obtained agree with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking
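A minimal numerical sketch of the spectral mapping described in the first sentence: two combs with slightly different line spacings produce RF beat notes at multiples of the repetition-rate difference; the repetition rates and line count are arbitrary illustrative numbers, not the device's parameters.

```python
import numpy as np

f_rep1 = 150.000e9        # line spacing of comb 1 (Hz), illustrative
f_rep2 = 150.001e9        # line spacing of comb 2 (Hz), slightly detuned
n_lines = 5               # number of comb lines considered around a common offset

# Optical line frequencies relative to a shared carrier/offset frequency
comb1 = np.arange(n_lines) * f_rep1
comb2 = np.arange(n_lines) * f_rep2

# Pairwise beating of matching lines maps the optical spectrum into the RF domain:
# the n-th pair beats at n * (f_rep2 - f_rep1)
rf_beats = np.abs(comb2 - comb1)
print("RF beat notes (MHz):", rf_beats / 1e6)   # 0, 1, 2, 3, 4 MHz
```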
Procedia PDF Downloads 73
4118 Tribological Properties of Non-Stick Coatings Used in Bread Baking Process
Authors: Maurice Brogly, Edwige Privas, Rajesh K. Gajendran, Sophie Bistac
Abstract:
Anti-stick coatings based on perfluoroalkoxy (PFA) are widely used in the food processing industry, especially for bread making. Their tribological performance, such as low friction coefficient, low surface energy and high heat resistance, makes them an appropriate choice for anti-stick coating applications in moulds for the food processing industry. This study is dedicated to evidencing the transfer of contaminants from the coating due to wear and thermal ageing of the mould. The risk of contamination arises from damage to the coating by the bread crust during the demoulding stage. The study focuses on the wear resistance and potential transfer of perfluorinated polymer from the anti-stick coating. Friction between the perfluorinated coating and the bread crust is modeled by a tribological pin-on-disc test. The cellular nature of the bread crust is modeled by a polymer foam. FTIR analysis of the polymer foam after friction allows evaluation of the transfer from the perfluorinated coating to the polymer foam. The influence of thermal ageing on the physical, chemical and wear properties of the coating is also investigated. FTIR spectroscopic results show that increased PFA transfer onto the foam counterface is associated with a decrease in the friction coefficient: increased lubrication by film transfer reduces the friction coefficient. Moreover, increasing the friction test parameters (load, speed and sliding distance) also increases the film transfer onto the counterface. Thermal ageing increases the hydrophobic character of the PFA coating and thus also decreases the friction coefficient.
Keywords: fluorobased polymer coatings, FTIR spectroscopy, non-stick food moulds, wear and friction
Procedia PDF Downloads 331
4117 Bag of Words Representation Based on Weighting Useful Visual Words
Authors: Fatma Abdedayem
Abstract:
The most effective and efficient methods in image categorization are mostly based on bag-of-words (BOW), which represents an image by a histogram of occurrences of visual words. In this paper, we propose a novel extension to this method. Firstly, we extract features at multiple scales by applying a color local descriptor named opponent-SIFT. Secondly, in order to represent the image, we use the Spatial Pyramid Representation (SPR) and an extension to the BOW method based on weighting visual words. Typically, the visual words are weighted during histogram assignment by computing the ratio of their occurrences in the image to their occurrences in the background. Finally, since in the classical BOW retrieval framework only a few words of the vocabulary are useful for image representation, we select the useful weighted visual words that exceed a threshold value. Experimentally, the algorithm is tested on different image classes of PASCAL VOC 2007 and is compared against the classical bag-of-visual-words algorithm.
Keywords: BOW, useful visual words, weighted visual words, bag of visual words
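A hedged sketch of the weighting rule described above: visual-word occurrences in an image are divided by their occurrences in a background set, and only words whose weight exceeds a threshold are kept in the histogram; the word assignments and threshold are synthetic placeholders, and no opponent-SIFT extraction or spatial pyramid is performed here.

```python
import numpy as np

def weighted_bow(image_words, background_words, vocab_size, threshold=1.0):
    """Histogram of visual words, weighted by image/background occurrence ratio;
    words whose weight falls below `threshold` are discarded."""
    img_hist = np.bincount(image_words, minlength=vocab_size).astype(float)
    bg_hist = np.bincount(background_words, minlength=vocab_size).astype(float) + 1e-9
    weights = img_hist / bg_hist                    # useful words stand out from the background
    weights[weights < threshold] = 0.0
    hist = img_hist * weights
    return hist / (hist.sum() + 1e-9)

# Synthetic word assignments (indices into a 6-word vocabulary)
rng = np.random.default_rng(3)
image_words = rng.integers(0, 6, size=200)
background_words = np.concatenate([rng.integers(0, 6, size=150), np.full(300, 5)])  # word 5 dominates background
print(np.round(weighted_bow(image_words, background_words, vocab_size=6), 3))
```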
Procedia PDF Downloads 436
4116 Development and Implementation of Curvature Dependent Force Correction Algorithm for the Planning of Forced Controlled Robotic Grinding
Authors: Aiman Alshare, Sahar Qaadan
Abstract:
A curvature-dependent force correction algorithm for planning a force-controlled grinding process with off-line programming flexibility is designed for an ABB industrial robot in order to avoid manual intervention during the process. The machining path utilizes a spline curve fit constructed from the CAD data of the workpiece. The fitted spline has second-order continuity to assure path smoothness. The implemented algorithm computes uniform forces normal to the grinding surface of the workpiece by constructing a curvature path in spatial coordinates using the spline method.
Keywords: ABB industrial robot, grinding process, offline programming, CAD data extraction, force correction algorithm
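A minimal 2-D sketch of the path construction mentioned above: a C²-continuous cubic spline is fitted through waypoints standing in for CAD data, and unit normals and curvature are evaluated along it so that a constant force magnitude can be commanded along the local normal; the waypoints, sampling, and force value are placeholders, and no robot interface is included.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Waypoints standing in for CAD-derived surface points (x, y) in meters
x = np.linspace(0.0, 0.1, 6)
y = np.array([0.00, 0.004, 0.006, 0.005, 0.002, 0.000])
spline = CubicSpline(x, y)                      # piecewise cubic, C2-continuous

xs = np.linspace(x[0], x[-1], 5)
dy, d2y = spline(xs, 1), spline(xs, 2)          # first and second derivatives

# Unit normals to the path and signed curvature kappa = y'' / (1 + y'^2)^(3/2)
normals = np.column_stack([-dy, np.ones_like(dy)]) / np.sqrt(1 + dy**2)[:, None]
kappa = d2y / (1 + dy**2) ** 1.5

F = 10.0                                        # desired constant normal force (N), placeholder
for xi, n, k in zip(xs, normals, kappa):
    print(f"x={xi:.3f} m  normal=({n[0]:+.2f},{n[1]:+.2f})  curvature={k:+.1f} 1/m  F_cmd={F} N")
```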
Procedia PDF Downloads 362