Search results for: technical trading signal
1159 Current Approach in Biodosimetry: Electrochemical Detection of DNA Damage
Authors: Marcela Jelicova, Anna Lierova, Zuzana Sinkorova, Radovan Metelka
Abstract:
At present, electrochemical methods are used in various research fields, especially for the analysis of biological molecules. This opens the possibility of detecting oxidative DNA damage induced indirectly by γ rays for use in biodosimetry. The main goal of our study is to optimize the detection of 8-hydroxyguanine by differential pulse voltammetry. The level of this stable and specific indicator of DNA damage can be determined in DNA isolated from peripheral blood lymphocytes, plasma, or urine of irradiated individuals. Screen-printed carbon electrodes modified with carboxy-functionalized multi-walled carbon nanotubes were utilized for highly sensitive electrochemical detection of 8-hydroxyguanine. The electrochemical oxidation of 8-hydroxyguanine monitored by differential pulse voltammetry was found to be pH-dependent, and the most intense signal was recorded at pH 7. After recalculating the current density, a several-fold higher sensitivity was attained in comparison with previously published results obtained using screen-printed carbon electrodes with unmodified carbon ink. Subsequently, the modified electrochemical technique was used for the detection of 8-hydroxyguanine in calf thymus DNA samples irradiated by a 60Co gamma source in the dose range from 0.5 to 20 Gy, using various types of sample pretreatment and measurement conditions. This method could serve for fast retrospective quantification of absorbed dose in cases of accidental exposure to ionizing radiation and may play an important role in biodosimetry.
Keywords: biodosimetry, electrochemical detection, voltammetry, 8-hydroxyguanine
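As a sketch of how such a voltammetric method could be turned into a dosimeter, the fragment below fits a straight calibration line of DPV peak current against absorbed dose and inverts it for an unknown sample. The linear model and all numbers are illustrative assumptions, not the study's calibration data.

```python
# Illustrative sketch: calibrating DPV peak current against absorbed dose
# and inverting the fit to estimate an unknown dose. All numbers are
# hypothetical; the study's own calibration data are not published here.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration points: dose (Gy) vs. 8-OHG peak current (uA)
doses    = [0.5, 2.0, 5.0, 10.0, 20.0]
currents = [0.8, 2.1, 4.9, 9.8, 19.9]

slope, intercept = fit_line(doses, currents)

def estimate_dose(peak_current):
    """Invert the calibration line to recover absorbed dose in Gy."""
    return (peak_current - intercept) / slope

print(round(estimate_dose(10.0), 2))  # dose estimate for a 10 uA peak
```

In practice one would also report the confidence interval of the fit, since the retrospective dose estimate inherits the calibration uncertainty.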
Procedia PDF Downloads 275
1158 The Pedagogical Integration of Digital Technologies in Initial Teacher Training
Authors: Vânia Graça, Paula Quadros-Flores, Altina Ramos
Abstract:
The use of digital technologies in teaching and learning processes is currently a reality, namely in initial teacher training. This study aims to understand the digital reality of students in initial teacher training in order to improve training in the educational use of ICT and to promote strategies for integrating digital technology in an educational context. It is part of the IFITIC Project "Innovate with ICT in Initial Teacher Training to Promote Methodological Renewal in Pre-school Education and in the 1st and 2nd Basic Education Cycle", which involves the School of Education, Polytechnic of Porto, and the Institute of Education, University of Minho. The project aims to rethink educational practice with ICT in the initial training of future teachers in order to promote methodological innovation in pre-school education and in the 1st and 2nd cycles of basic education. A qualitative methodology was used, in which a questionnaire survey was administered to teachers in initial training. For data analysis, content analysis techniques were applied with the support of NVivo software. The results point to the following aspects: a) future teachers recognize that they have more technical knowledge about ICT than pedagogical knowledge; this result makes sense considering the objectives of basic education, and the gaps can be filled during the Master's course by students who wish to pursue teaching; b) the respondents are aware that the integration of digital resources contributes positively to students' learning and to the lives of children and young people, also preparing them for life; c) being a teacher in the digital age requires the development of digital literacy, lifelong learning, and the adoption of new ways of teaching how to learn. Thus, this study aims to contribute to a reflection on the teaching profession in the digital age.
Keywords: digital technologies, initial teacher training, pedagogical use of ICT, skills
Procedia PDF Downloads 123
1157 How Cyber Insurers and Managed Security Companies Influence the Content and Meaning of Privacy Law and Cybersecurity Compliance
Authors: Shauhin Talesh
Abstract:
Cyber risks--loss exposure associated with the use of electronic equipment, computers, information technology, and virtual reality--are among the biggest threats facing businesses and consumers. Despite these threats, private organizations are not significantly changing their behavior in response. Although many organizations do have formal cybersecurity policies in place, the majority believe they are insufficiently prepared for cybersecurity incidents and have not conducted proper risk assessments or invested in the training and resources necessary to protect consumers' electronic information. Drawing on empirical observations over the past five years, this article explains why insurers who manage cybersecurity and privacy law compliance among organizations have not been more successful in curtailing breaches. The analysis draws on Talesh's "new institutional theory of insurance," which explains how insurers shape the content and meaning of law among organizations that purchase insurance. In response to vague and fragmented privacy laws and a lack of strong government oversight, insurers offer cyber insurance and a series of risk-management services to their customers. These services convey legitimacy to the public and to the insureds but fall short of improving the robustness of organizations, rendering them largely symbolic. Cyber insurers and managed security companies have flooded the market with high-level technical tools that they claim mitigate risk, but what they have really accomplished is to institutionalize a norm that policyholders need these tools to avoid cybersecurity incidents. Federal and state regulators and industry-based rating agencies have deferred to cyber insurers without evidence that these tools actually improve security.
Keywords: regulation, compliance, insurance, cybersecurity, privacy law, organizations, risk management
Procedia PDF Downloads 11
1156 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)
Authors: Faisal Alsaaq
Abstract:
Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provide a rather crude approximation, the former rely on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor height variations of a vessel or buoy, thus providing information on sea level variations during the time of a hydrographic survey. However, GNSS heights obtained under the dynamic environment of a survey vessel are affected by "non-tidal" processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from the other non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and of their temporal, spectral and stochastic properties in order to apply suitable techniques for recovering the tide information. In addition, different post-mission and near real-time GNSS positioning techniques will be investigated, with a focus on height estimation at sea. Furthermore, the study will investigate the possibility of transferring chart datums to the locations of tide gauges.
Keywords: hydrography, GNSS, datum, tide gauge
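One simple way to suppress wave-induced motion while preserving the tide, in the spirit of the separation problem described above, is low-pass filtering; the sketch below applies a centered moving average to a synthetic height record. The signal parameters and window length are illustrative assumptions, not the study's method.

```python
# A minimal sketch (not the authors' method) of separating a slow tide
# signal from fast wave/heave motion in GNSS heights with a moving-average
# low-pass filter. All signal parameters are illustrative.
import math

def moving_average(samples, window):
    """Centered moving average; window should be odd."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# Synthetic 1 Hz GNSS height record: 12 h tide (2 m) plus 8 s waves (0.5 m)
dt = 1.0
n = 3600
tide  = [2.0 * math.sin(2 * math.pi * i * dt / 43200.0) for i in range(n)]
waves = [0.5 * math.sin(2 * math.pi * i * dt / 8.0) for i in range(n)]
heights = [t + w for t, w in zip(tide, waves)]

# A 61 s window averages out the 8 s waves but barely attenuates the tide
recovered = moving_average(heights, 61)
err = max(abs(r - t) for r, t in zip(recovered[100:-100], tide[100:-100]))
print(round(err, 3))  # residual error after filtering, in metres
```

Real survey data would call for a proper filter design (the vessel's heave spectrum overlaps less cleanly with the tide), but the frequency-separation principle is the same.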
Procedia PDF Downloads 266
1155 Motivating Factors of Mobile Device Applications toward Learning
Authors: Yen-Mei Lee
Abstract:
Mobile learning (m-learning) has been applied in the education field not only because it is an alternative to web-based learning but also because it possesses 'anytime, anywhere' learning features. However, most studies focus on technology-related issues, such as usability and functionality, instead of addressing m-learning from a motivational perspective. Accordingly, the main purpose of the current paper is to integrate critical factors from different motivational theories and related findings to gain a better understanding of the catalysts of an individual's learning motivation toward m-learning. The main research question for this study is stated as follows: based on different motivational perspectives, what factors of applying mobile devices as a medium can facilitate people's learning motivation? Self-Determination Theory (SDT), Uses and Gratification Theory (UGT), Malone and Lepper's taxonomy of intrinsic motivations, and different types of motivation concepts are discussed in the current paper. In line with the review of relevant studies, three motivating factors with five essential elements are proposed. The first key factor is autonomy; learning on one's own path and applying a personalized format are the two critical elements involved in this factor. The second key factor is applying a built-in instant feedback system during m-learning. The third factor is creating an interaction system, including communication and collaboration spaces. These three factors can enhance people's learning motivation when mobile devices are applied as a learning medium. To sum up, by discussing m-learning from different motivational perspectives, the current paper differs from previous studies that focus simply on technical or functional design. Supported by different motivation theories, researchers can clearly understand how mobile devices influence people's learning motivation. Moreover, instructional designers and educators can build on the proposed factors to create their own unique and efficient m-learning environments.
Keywords: autonomy, learning motivation, mobile learning (m-learning), motivational perspective
Procedia PDF Downloads 183
1154 Magnetic Resonance Imaging in Children with Brain Tumors
Authors: J. R. Ashrapov, G. A. Alihodzhaeva, D. E. Abdullaev, N. R. Kadirbekov
Abstract:
Diagnosis of brain tumors is challenging, as several central nervous system diseases present the same symptoms. Modern diagnostic techniques such as CT and MRI help to significantly improve surgical planning and make it possible to identify postoperative complications in neurosurgery in time. Purpose: To study the MRI characteristics and localization of brain tumors in children and to detect complications in the postoperative period. Materials and methods: A retrospective study of the treatment of 62 children with brain tumors aged 2 to 5 years was performed. Results: In 52 (83.8%) of the 62 patients, MRI scans of the brain revealed a brain tumor. The distribution of brain tumors found on MRI was: 15 (24.1%) glioblastomas, 21 (33.8%) astrocytomas, 7 (11.2%) medulloblastomas, and 9 (14.5%) tumors of other origin (craniopharyngiomas, chordomas of the skull base). MRI revealed the following characteristic features: a heterogeneous MRI signal, hyper- and hypointense in T1 and T2 modes, with differing degrees of perifocal swelling and involvement of brain vessels. The main objectives of a postoperative MRI study are the identification of early or late postoperative complications, the evaluation of the radicality of surgery, and the identification of continued tumor growth (within 3-4 weeks). MRI was performed in the following cases: 1. suspicion of a hematoma (3 days or more); 2. suspicion of continued tumor growth (within 3-4 weeks). Conclusions: Magnetic resonance imaging is a highly informative method for the diagnosis of brain tumors in children. MRI also helps to determine the effectiveness and tactics of treatment and the follow-up in the postoperative period.
Keywords: brain tumors, children, MRI, treatment
Procedia PDF Downloads 146
1153 Cyber Security and Risk Assessment of the e-Banking Services
Authors: Aisha F. Bushager
Abstract:
Today we are more exposed than ever to cyber threats and attacks at the personal, community, organizational, national, and international levels. More aspects of our lives are operating on computer networks simply because we are living in the fifth domain, which is called cyberspace. One of the most sensitive areas vulnerable to cyber threats and attacks is electronic banking (e-banking), where the banking sector provides online banking services to its clients. To obtain clients' trust and encourage them to practice e-banking, and also to maintain the services provided by the banks and ensure safety, cyber security and risk control should be given a high priority in the e-banking area. The aim of the study is to carry out a risk assessment of e-banking services and determine the cyber threats, cyber attacks, and vulnerabilities facing the e-banking area, specifically in the Kingdom of Bahrain. To collect relevant data, structured interviews were conducted with e-banking experts in different banks. The collected data were then used as input to the risk management framework provided by the National Institute of Standards and Technology (NIST), which was the model used in the study to assess the risks associated with e-banking services. The findings of the study showed that the most common cyber threats are human errors, technical software or hardware failures, and hackers, while the most common attacks facing the e-banking sector are phishing, malware attacks, and denial-of-service. The risks associated with e-banking services were around the moderate level; however, more controls and countermeasures must be applied to maintain this moderate level of risk. The results of the study will help banks discover their vulnerabilities and maintain their online services; in addition, they will enhance cyber security and contribute to the management and control of the risks facing the e-banking sector.
Keywords: cyber security, e-banking, risk assessment, threats identification
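A minimal sketch of the likelihood-impact scoring step found in NIST-style risk assessments (e.g., SP 800-30) is shown below; the threat names echo the abstract, but the scores and scales are invented for illustration.

```python
# Hedged sketch: qualitative risk = f(likelihood, impact), in the style of
# NIST SP 800-30. The 1-3 scores assigned to each threat are hypothetical,
# not the study's interview data.

def risk_level(likelihood, impact):
    """Map a 1-3 likelihood and a 1-3 impact to a qualitative risk level."""
    score = likelihood * impact
    if score <= 2:
        return "low"
    if score <= 4:
        return "moderate"
    return "high"

# (likelihood, impact) pairs -- illustrative values only
threats = {
    "phishing":          (3, 2),
    "malware":           (2, 3),
    "denial-of-service": (2, 2),
    "human error":       (3, 1),
}

assessment = {name: risk_level(l, i) for name, (l, i) in threats.items()}
print(assessment)
```

A full SP 800-30 assessment would also weigh vulnerability severity and the effect of existing controls before assigning the final level.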
Procedia PDF Downloads 352
1152 A Practical and Theoretical Study on the Electromotor Bearing Defect Detection in a Wet Mill Using the Vibration Analysis Method and Defect Length Calculation in the Bearing
Authors: Mostafa Firoozabadi, Alireza Foroughi Nematollahi
Abstract:
Wet mills are among the most important equipment in the mining industries; any defect occurring in them can stop the production line and cause irrecoverable damage to the system. Electromotors are significant parts of a mill, and monitoring them is a necessary process to prevent unwanted failures. The purpose of this study is to investigate electromotor bearing defects, theoretically and practically, using the vibration analysis method. When a defect happens in a bearing, it can be transferred to the other parts of the equipment, such as the inner ring, outer ring, balls, and the bearing cage. The source of electromotor defects can be electrical or mechanical. Sometimes the electrical and mechanical defect frequencies are modulated, and bearing defect detection becomes difficult. In this paper, to detect electromotor bearing defects, the electrical and mechanical defect frequencies are extracted first. Then, by calculating the bearing defect frequencies and analyzing the spectrum and the time signal, the bearing defects are detected. In addition, the obtained frequency indicates the bearing component in which the defect has happened, and by comparing this level to the standards, the remaining bearing lifetime is determined. Finally, the defect length is calculated using theoretical equations to establish whether the bearing needs to be replaced. The results of the proposed method, which has been implemented on the wet mills of the Golgohar mining and industrial company in Iran, show that this method is capable of detecting electromotor bearing defects accurately and on time.
Keywords: bearing defect length, defect frequency, electromotor defects, vibration analysis
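The bearing defect frequencies mentioned above are conventionally computed from the bearing geometry with the standard kinematic formulas; a sketch follows, with illustrative geometry values rather than the mill's actual bearings.

```python
# Standard kinematic formulas for rolling-element bearing defect
# frequencies. The formulas are textbook ones; the example geometry below
# (shaft speed, ball count, diameters) is illustrative, not the mill's.
import math

def bearing_defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d,
                               contact_deg=0.0):
    """Return FTF, BPFO, BPFI and BSF in Hz for a rolling-element bearing."""
    r = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    ftf  = shaft_hz / 2.0 * (1.0 - r)             # cage (fundamental train)
    bpfo = n_balls * shaft_hz / 2.0 * (1.0 - r)   # outer-race defect
    bpfi = n_balls * shaft_hz / 2.0 * (1.0 + r)   # inner-race defect
    bsf  = pitch_d * shaft_hz / (2.0 * ball_d) * (1.0 - r * r)  # ball spin
    return ftf, bpfo, bpfi, bsf

# Example: 25 Hz shaft, 9 balls, 7.94 mm balls on a 39.04 mm pitch circle
ftf, bpfo, bpfi, bsf = bearing_defect_frequencies(25.0, 9, 7.94, 39.04)
print(round(bpfo, 1), round(bpfi, 1))
```

Peaks at these frequencies (and their harmonics and sidebands) in the vibration spectrum are then matched against the measured data to locate the defective component.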
Procedia PDF Downloads 504
1151 Computational Intelligence and Machine Learning for Urban Drainage Infrastructure Asset Management
Authors: Thewodros K. Geberemariam
Abstract:
The rapid physical expansion of urbanization, coupled with aging infrastructure, presents unique decision-making and management challenges for many big-city municipalities. Cities must therefore upgrade and maintain their existing aging urban drainage infrastructure systems to keep up with demand. Given the overall contribution of assets to municipal revenue and the importance of infrastructure to the success of a livable city, many municipalities are currently looking for a robust and smart urban drainage infrastructure asset management solution that combines management, financial, engineering and technical practices. This robust decision-making must rely on sound, complete, current and relevant data that enables asset valuation, impairment testing, lifecycle modeling, and forecasting across multiple asset portfolios. In this paper, predictive computational intelligence (CI) and multi-class machine learning (ML), coupled with online, offline, and historical record data collected from an array of multi-parameter sensors, are used to extract the operational and non-conforming patterns hidden in structured and unstructured data and to produce actionable insight into the current and future states of the network. This paper aims to improve the strategic decision-making process by identifying all possible alternatives, evaluating the risk of each alternative, and choosing the alternative most likely to attain the required goal in a cost-effective manner, using historical and near real-time urban drainage infrastructure data for assets that have previously not benefited from advances in computational intelligence and machine learning.
Keywords: computational intelligence, machine learning, urban drainage infrastructure, classification, prediction, asset management
Procedia PDF Downloads 153
1150 Defect Classification of Hydrogen Fuel Pressure Vessels Using Deep Learning
Authors: Dongju Kim, Youngjoo Suh, Hyojin Kim, Gyeongyeong Kim
Abstract:
Acoustic Emission Testing (AET) is widely used to test the structural integrity of operational hydrogen storage containers, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user, as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model to identify patterns in acoustic emission (AE) signal data that can be used to classify defects instead. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks, using AE sensor waveforms, is proposed. As hydrogen storage tanks are commonly constructed from carbon fiber reinforced polymer composite (CFRP), a defect classification dataset was collected through a tensile test on a CFRP specimen with an AE sensor attached. The classification model, using a one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers.
Keywords: acoustic emission testing, carbon fiber reinforced polymer composite, one-dimensional convolutional neural network, SMOTE data augmentation
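For readers unfamiliar with SMOTE, the sketch below shows the core idea in plain Python: synthetic minority-class samples are interpolated between an existing sample and a nearby minority neighbour. A real pipeline would use a library implementation (e.g., imbalanced-learn); everything here, including the toy data, is illustrative.

```python
# Toy sketch of the SMOTE idea: synthesize minority-class samples by
# interpolating between a sample and its nearest minority neighbour.
# Illustrative only; not the paper's augmentation code.
import math
import random

def nearest_neighbor(point, others):
    """Closest other minority sample by Euclidean distance."""
    return min(others, key=lambda o: math.dist(point, o))

def smote(minority, n_new, rng):
    """Generate n_new synthetic samples from the minority class."""
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        others = [q for q in minority if q is not p]
        nn = nearest_neighbor(p, others)
        gap = rng.random()  # position along the segment p -> nn
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(p, nn)))
    return synthetic

rng = random.Random(0)
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_samples = smote(minority, 5, rng)
print(len(new_samples))
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the real data occupy, which is what distinguishes SMOTE from naive duplication.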
Procedia PDF Downloads 95
1149 Fuel Cells and Offshore Wind Turbines Technology for Eco-Friendly Ports with a Case Study
Authors: Ibrahim Sadek Sedik Ibrahim, Mohamed M. Elgohary
Abstract:
Sea ports are considered one of the factors driving economic globalization and international trade; consequently, they are also among the sources involved in the deterioration of the maritime environment, due to the excessive amount of exhaust gases emitted by their activities. The majority of sea ports depend on the national electric grid as a source of power for domestic and ships' electric demands. This paper discusses the possibility of shifting ports from relying on national grid electricity to green power-based ports. Offshore wind turbines and hydrogen PEM fuel cell units appear to be two typical promising clean energy sources for ports. As a case study, the paper investigates the prospect of converting Alexandria Port in Egypt into an eco-friendly port, with a study of the technical, logistic, and financial requirements. The results show that the fuel cell, followed by a combined system of wind turbines and fuel cells, is the best choice regarding electricity production unit cost, at 0.101 and 0.107 $/kWh, respectively. Furthermore, using fuel cells and offshore wind turbines as a green power concept will achieve emission reductions of 80,441 tons of CO₂, 20.814 tons of NOx, and 133.025 tons of CO per year. Finally, the paper highlights the role that renewable energy can play in supplying Alexandria Port with green energy to lift the burden on the government of supporting electricity, with the possibility of achieving a profit of 3.85% to 22.31% of the annual electricity cost compared with international prices.
Keywords: fuel cells, green ports, IMO, national electric grid, offshore wind turbines, port emissions, renewable energy
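The unit-cost comparison above reduces to back-of-envelope arithmetic; the sketch below uses the study's $/kWh figures but assumes the port's annual demand and the reference grid tariff, so the resulting savings are purely illustrative.

```python
# Hypothetical cost comparison. Unit costs ($/kWh) are from the abstract;
# the annual demand and grid tariff are assumed values for illustration.

ANNUAL_DEMAND_KWH = 50_000_000   # assumed port electricity demand
GRID_TARIFF = 0.13               # assumed reference grid price, $/kWh

unit_costs = {
    "fuel cell":            0.101,
    "wind + fuel cell mix": 0.107,
}

annual_savings = {
    name: (GRID_TARIFF - cost) * ANNUAL_DEMAND_KWH
    for name, cost in unit_costs.items()
}
for name, saving in annual_savings.items():
    print(f"{name}: ${saving:,.0f} saved per year")
```

The ranking in the abstract (fuel cell slightly ahead of the combined system) follows directly from the lower unit cost, whatever demand figure is assumed.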
Procedia PDF Downloads 142
1148 Drawing Building Blocks in Existing Neighborhoods: An Automated Pilot Tool for an Initial Approach Using GIS and Python
Authors: Konstantinos Pikos, Dimitrios Kaimaris
Abstract:
Although designing building blocks is a procedure used by many planners around the world, there is no automated tool to help planners and designers achieve their goals with less effort. The difficulty of the subject lies in the repetitive process of manually drawing lines while maintaining the desired offset and, at the same time, minimizing the impact on the existing building stock. In this paper, an automated tool integrated into ArcGIS Pro, built using Geographical Information Systems (GIS) and the Python programming language, is presented. Despite the tool's simple environment and the lack of specialized building legislation due to the complex state of the field, a planner who is aware of the relevant technical information can use it to draw an initial approximation of the final building blocks in an area with pre-existing buildings, in an attempt to organize the usually sprawling suburbs of a city or any continuously developing area. The tool uses ESRI's ArcPy library to handle the spatial data, while interaction with the user is handled through Tkinter. The main process consists of a modification of building edge coordinates, using the NumPy library, in an effort to draw the line of best fit, so the user can get the optimal result for each side of a block. Finally, after the tool runs successfully, a table of primary planning information is shown, such as the area of the building block and its coverage rate. Regardless of the early stage of the tool's development, it is a solid base in which potential planners with programming skills could invest, adapting the tool to their individual needs. An example of the entire procedure in a test area is provided, highlighting both the strengths and weaknesses of the final results.
Keywords: ArcPy, GIS, Python, building blocks
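The "line of best fit" step could, for instance, be implemented as a total least-squares fit (the principal axis of the point covariance), which, unlike ordinary y-on-x regression, also copes with near-vertical block sides. The sketch below is a plain-Python illustration with invented coordinates, not the tool's actual ArcPy/NumPy code.

```python
# Hypothetical sketch of fitting a block-side line through building-corner
# coordinates via total least squares (principal axis of the 2x2
# covariance matrix). Coordinates are invented.
import math

def best_fit_line(points):
    """Return (centroid, unit direction) of the orthogonal best-fit line."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((x - cx) ** 2 for x, y in points)
    syy = sum((y - cy) ** 2 for x, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    # Orientation of the principal axis of the covariance matrix
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

# Invented facade corners of buildings along one side of a block
corners = [(0.0, 0.1), (10.0, -0.2), (20.0, 0.15), (30.0, -0.05)]
(cx, cy), (dx, dy) = best_fit_line(corners)
print(round(math.degrees(math.atan2(dy, dx)), 1))  # block-side bearing
```

The fitted line can then be offset by the required setback distance before the block polygon is assembled.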
Procedia PDF Downloads 183
1147 Effective Factors on Farmers' Attitude toward Multifunctional Agriculture
Authors: Mohammad Sadegh Allahyari, Sorush Marzban
Abstract:
The main aim of this study was to investigate the factors affecting the attitudes of farmers in the Shanderman District of Masal (Guilan Province, northern Iran) towards the concepts of multifunctional agriculture. The statistical population consisted of all 4,908 farmers in Shanderman. The sample of the present study consisted of 209 subjects, selected from the total population using the Bartlett et al. table. The questionnaire, the main tool of data collection, was divided into two parts. The first part covered farmers' profiles regarding individual, technical-agronomic, economic and social characteristics. The second part included items identifying the farmers' attitudes regarding different aspects of multifunctional agriculture. The validity of the questionnaire was assessed by professors and experts. Cronbach's alpha was used to determine the reliability (α = 0.844), which is considered an acceptable value. Overall, the average attitude scores show a positive tendency towards multifunctional agriculture among farmers of the Shanderman district (M = 3.81, SD = 0.53). The results also highlight significant differences in farmers' attitudes towards multifunctional agriculture by income source level (F = 0.049) and agricultural literature review (F = 0.022) (p < 0.05). Pearson correlations also indicated a positive relationship between positive attitudes and family size (r = 0.154), farmers' experience (r = 0.246), size of land under cultivation (r = 0.186), income (r = 0.227), and social contribution activities (r = 0.224). The results of multiple regression analyses showed that the variation in the dependent variable depended on the farmers' experience in agricultural activities and their social contribution activities; the variables included in the regression analysis are estimated to explain 12 percent of the variation in the dependent variable.
Keywords: multifunctional agriculture, attitude, effective factor, sustainable agriculture
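The Pearson r values above follow the standard product-moment formula; a plain-Python sketch is given below with invented stand-in data (experience vs. attitude score), since the study's raw data are not published in the abstract.

```python
# Pearson product-moment correlation, computed from scratch.
# The two series are invented stand-ins for the study's variables.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Invented data: years of farming experience vs. mean attitude score (1-5)
experience = [2, 5, 8, 12, 15, 20, 25, 30]
attitude   = [3.1, 3.3, 3.5, 3.6, 3.9, 3.8, 4.1, 4.3]

r = pearson_r(experience, attitude)
print(round(r, 3))
```

The invented series is deliberately strongly correlated; the study's own coefficients (r = 0.15 to 0.25) indicate much weaker, though still positive, relationships.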
Procedia PDF Downloads 237
1146 Analysis of Ionosphere Anomaly before the Great Earthquake in Java in 2009 Using GPS TEC Data
Authors: Aldilla Damayanti Purnama Ratri, Hendri Subakti, Buldan Muslim
Abstract:
Ionospheric anomalies occurring as an effect of earthquake activity are a phenomenon now being studied in seismo-ionospheric coupling. Generally, the variation in the ionosphere caused by earthquake activity is weaker than the disturbances generated by other sources, such as geomagnetic storms. However, disturbances from geomagnetic storms show a more global behavior, while seismo-ionospheric anomalies occur only locally, in an area largely determined by the magnitude of the earthquake. This makes earthquake-related anomalies unique, and because of this uniqueness much research has been done on them in the expectation that they can give clues for early warning before an earthquake. One of the approaches that has been developed is seismo-ionospheric coupling, which relates the state of the lithosphere, atmosphere and ionosphere before and during an earthquake. This paper chooses the vertical total electron content (VTEC) of the ionosphere as a parameter. Total electron content (TEC) is defined as the number of electrons in a vertical column (cylinder) with a cross-section of 1 m² along the GPS signal trajectory in the ionosphere, at around 350 km of height. Statistical analysis of data obtained from the LAPAN agency to identify abnormal signals showed an anomaly in the ionosphere, characterized by a decrease in ionospheric electron content of 1 TECU before the earthquake occurred. This decrease in VTEC is not associated with a magnetic storm and is therefore indicated as an earthquake precursor, which is supported by the Dst index showing no magnetic disturbance.
Keywords: earthquake, Dst index, ionosphere, seismo-ionospheric coupling, VTEC
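A simplified version of such a statistical anomaly test flags samples deviating more than k standard deviations from a trailing window; the sketch below illustrates this on an invented VTEC series with a 1 TECU depletion, and is not LAPAN's actual procedure.

```python
# Simplified statistical anomaly test for a VTEC time series: flag samples
# outside mean +/- k*sigma of a trailing window. Window length, k and the
# data are illustrative, not the study's.
from statistics import mean, stdev

def flag_anomalies(series, window=10, k=2.0):
    """Return indices where a sample deviates > k sigma from the prior window."""
    flags = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        m, s = mean(ref), stdev(ref)
        if s > 0 and abs(series[i] - m) > k * s:
            flags.append(i)
    return flags

# Quiet VTEC background (TECU) with a 1 TECU depletion injected at index 15
vtec = [20.0, 20.1, 19.9, 20.0, 20.2, 19.8, 20.1, 20.0, 19.9, 20.1,
        20.0, 20.1, 19.9, 20.0, 20.1, 19.0, 20.0, 20.1, 19.9, 20.0]

print(flag_anomalies(vtec))  # index of the depleted sample
```

A production test would use longer windows (days of data) and would first exclude geomagnetically disturbed periods using the Dst index, as the study does.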
Procedia PDF Downloads 587
1145 External Noise Distillation in Quantum Holography with Undetected Light
Authors: Sebastian Töpfer, Jorge Fuenzalida, Marta Gilaberte Basset, Juan P. Torres, Markus Gräfe
Abstract:
This work presents an experimental and theoretical study of the noise resilience of quantum holography with undetected photons. Quantum imaging has become an important research topic in recent years, after its first publication in 2014. Following this research, advances towards different spectral ranges in detection and different optical geometries have been made. Interest has developed especially in the field of near-infrared to mid-infrared measurements, because of the unique characteristic that allows a sample to be probed with photons of a different wavelength than the photons arriving at the detector. This promising effect can be used for medical applications to measure in the so-called molecular fingerprint region, while using broadly available detectors for the visible spectral range. Further advances in quantum imaging methods have been made through new measurement and detection schemes. One of these is quantum holography with undetected light. It combines digital phase-shifting holography with quantum imaging to extend the obtainable sample information by measuring not only the object transmission but also its influence on the phase shift experienced by the transmitted light. This work presents extended research on the quantum holography with undetected light scheme regarding the influence of external noise. It is shown experimentally and theoretically that the sample information can still be retrieved at noise levels 250 times higher than the signal level, because the information is carried by the interferometric pattern. A detailed theoretical explanation is also provided.
Keywords: distillation, quantum holography, quantum imaging, quantum metrology
Procedia PDF Downloads 79
1144 A Review on Benzo(a)pyrene Emission Factors from Biomass Combustion
Authors: Franziska Klauser, Manuel Schwabl, Alexander Weissinger, Christoph Schmidl, Walter Haslinger, Anne Kasper-Giebl
Abstract:
Benzo(a)pyrene (BaP) is the most widely investigated representative of polycyclic aromatic hydrocarbons (PAH) as well as one of the most toxic compounds in this group. Since 2013, a limit value for the BaP concentration in ambient air has applied in the European Union, set to a yearly average of 1 ng m⁻³. Several reports show that in some regions, even where industry and traffic have only a minor impact, this threshold is regularly exceeded. This is taken as proof that biomass combustion for heating purposes contributes significantly to BaP pollution. Several investigations of the BaP emission behavior of biomass combustion furnaces have already been carried out, mostly focusing on a particular aspect such as the influence of wood type, operation mode or technology type. However, a comprehensive view of BaP emission patterns from biomass combustion, aggregating determined values from recent studies, has not yet been presented. Combining the determined values allows a better understanding of the BaP emission behavior of biomass combustion. In this work, the review conclusions are drawn from the combination of outcomes from different publications. In two examples it was shown that technical progress leads to 10- to 100-fold lower BaP emissions from modern furnaces compared with old technologies of an equivalent type. It was also indicated that operation with pellets or wood chips exhibits clearly lower BaP emission factors than operation with log wood, although the BaP emission level of automatic furnaces is strongly affected by the mode of operation. This work delivers an overview of BaP emission factors from different biomass combustion appliances, operation modes, and fuel and wood types. The main impact factors are identified, and suggestions for low-BaP-emission biomass combustion are derived. Possible fields of investigation concerning BaP emissions from biomass combustion that seem most important to clarify are also suggested.
Keywords: benzo(a)pyrene, biomass, combustion, emission, pollution
Procedia PDF Downloads 359
1143 GIS Model for Sanitary Landfill Site Selection Based on Geotechnical Parameters
Authors: Hecson Christian, Joel Macwan
Abstract:
Landfill site selection in an urban area is a critical issue in the planning process. With the growth of urbanization, it has a mammoth impact on the economy, ecology, and environmental health of the region. Outsized amounts of waste are produced, and the problem worsens every day. Hence, the selection of an ideal site for a sanitary landfill is a challenge for urban planners and solid waste managers. A disposal site is a function of many parameters. Among them, geotechnical parameters are vital, as they relate to the surrounding open land. Moreover, accessible safe and acceptable land is also scarce. Therefore, in this paper geotechnical parameters are used to develop a GIS model to identify an ideal location for landfill purposes. The metropolitan city of Surat is a highly populated and fast-growing urban area in India. The research objectives are to conduct field experiments to collect data and to transfer the findings to a GIS platform to evolve a model that finds an ideal location. Planners' preferences were obtained, and the analytic hierarchy process (AHP) was used to find the weights of each parameter. GIS and Multi-Criteria Decision Analysis (MCDA) techniques are integrated to improve decision-making, providing an environment for the transformation and combination of geographical data and planners' preferences. GIS performs deterministic overlay and buffer operations, while MCDA methods evaluate alternatives based on the decision makers' subjective values and priorities. The research results have shown many alternative locations. An economic analysis of the selected site from an actual operations point of view is not included in this research. Keywords: GIS, AHP, MCDA, geotechnical
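The AHP weighting step described above can be sketched numerically. The sketch below is a minimal illustration with a hypothetical 3-criterion pairwise comparison matrix (the criteria and judgment values are assumptions for illustration, not the planners' actual survey responses): the weights are taken from the principal eigenvector of the matrix and checked with Saaty's consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three illustrative criteria
# (e.g. slope, soil permeability, depth to groundwater); entries are Saaty's
# 1-9 judgment scale, with A[j][i] = 1/A[i][j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio: CI = (lambda_max - n)/(n - 1), divided by the random
# index (RI = 0.58 for n = 3); judgments are acceptable when CR < 0.1.
lam_max = eigvals.real[k]
n = A.shape[0]
cr = ((lam_max - n) / (n - 1)) / 0.58

print(np.round(w, 3), round(cr, 3))
```

In the full model, one weight per geotechnical criterion would feed the GIS overlay of the reclassified criterion layers.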
Procedia PDF Downloads 147
1142 The Impact of Generative AI Illustrations on Aesthetic Symbol Consumption among Consumers: A Case Study of Japanese Anime Style
Authors: Han-Yu Cheng
Abstract:
This study explores the impact of AI-generated illustration works on the aesthetic symbol consumption of consumers in Taiwan. The advancement of artificial intelligence drawing has lowered the barriers to entry, enabling more individuals to enter the field of illustration easily. Using the Japanese anime style as an example, with the development of Generative Artificial Intelligence (Generative AI), an increasing number of illustration works are being generated by machines, sparking discussions about aesthetics and art consumption. Through surveys and the analysis of consumer perspectives, this research investigates how this influences consumers' aesthetic experiences and the resulting changes in the traditional art market and among creators. The study reveals that among consumers in Taiwan, particularly those interested in the Japanese anime style, there is pronounced interest and curiosity surrounding the emergence of Generative AI, notably among individuals who are drawn to this style but lack the technical skills required to create such artworks. Because these works are rooted in elements of the Japanese anime style, they find ready acceptance among enthusiasts of the style and have garnered a substantial following. Furthermore, with the reduction in entry barriers, more individuals interested in this style but lacking traditional drawing skills have been able to participate in producing such works. Against the backdrop of ongoing debates about artistic value since the advent of artificial intelligence (AI), Generative AI-generated illustration works, while not entirely displacing traditional art, largely fulfill the aesthetic demands of this consumer group, providing a similar or analogous aesthetic consumption experience.
Additionally, this research underscores the advantages and limitations of Generative AI-generated illustration works within this consumption environment. Keywords: generative AI, anime aesthetics, Japanese anime illustration, art consumption
Procedia PDF Downloads 75
1141 The Relationship of Creativity and Innovation in Artistic Work and Their Importance in Improving the Artistic Organizational Performance
Authors: Houyem Kotti
Abstract:
Development in societies requires that they change continuously in various aspects, a change that demands continuous adaptation to the conditions of the technical age. For individuals to perform their duties or tasks well, it is necessary to provide all the basic requirements and necessities that increase the efficiency and effectiveness of the personnel working to accomplish their tasks successfully. The success of industries and organizations is linked to the need to develop individuals in the creative and innovative fields; this formation process is considered a driver of economic development and social prosperity and improves the quantity and quality of artistic work. Creativity and innovation therefore play an important role in improving the performance of the artistic organization, as they are among the variables affecting the organization's ability to grow, invest, and provide better services to its customers, especially in the face of competition, traditional methods of work, and environments that discourage and hinder creativity and impair any process of development, change, or creative behavior. The research methodology for this study is qualitative: several interviews were conducted with artists and experts in the artistic field, and the related literature was reviewed to collect the necessary qualitative data from secondary sources such as statistical reports and previous research studies. In this research, we attempt to clarify the relationship between innovation and its importance in the artistic organization, along with the conditions for achieving innovation and its constraints, barriers, and challenges. We also examine creativity and innovation and their impact on the performance of artistic organizations, explaining this mechanism so as to ensure the continuity of these organizations and keep pace with developments in the global economic environment. Keywords: artistic work, creativity and innovation, artistic organization, performance
Procedia PDF Downloads 248
1140 Comparison of Two Anesthetic Methods during Interventional Neuroradiology Procedure: Propofol versus Sevoflurane Using Patient State Index
Authors: Ki Hwa Lee, Eunsu Kang, Jae Hong Park
Abstract:
Background: Interventional neuroradiology (INR) has been a rapidly growing and evolving neurosurgical field during the past few decades. Sevoflurane and propofol are both suitable anesthetics for INR procedures. Monitoring of the depth of anesthesia is used very widely. The SEDLine™ monitor, a 4-channel processed EEG monitor, uses a proprietary algorithm to analyze the raw EEG signal and displays Patient State Index (PSI) values. Only a few studies have examined the PSI in neuro-anesthesia. We aimed to investigate the differences in PSI values and hemodynamic variables between sevoflurane and propofol anesthesia during INR procedures. Methods: We retrospectively reviewed the medical records of patients scheduled to undergo embolization of a non-ruptured intracranial aneurysm by a single operator from May 2013 to December 2014. Sixty-five patients were categorized into two groups: sevoflurane (n = 33) vs. propofol (n = 32). The PSI values, hemodynamic variables, and the use of hemodynamic drugs were analyzed. Results: Significant differences were seen between PSI values obtained during different perioperative stages in both groups (P < 0.0001). The PSI values of the propofol group were lower than those of the sevoflurane group during the INR procedure (P < 0.01). The patients in the propofol group had a longer time to extubation and a higher phenylephrine requirement than the sevoflurane group (p < 0.05). Anti-hypertensive drugs were administered more often during extubation in the sevoflurane group (p < 0.05). Conclusions: The PSI can detect the depth of anesthesia and changes in anesthetic concentration during INR procedures. Extubation was faster in the sevoflurane group, but recovery was smoother in the propofol group. Keywords: interventional neuroradiology, patient state index, propofol, sevoflurane
Procedia PDF Downloads 182
1139 Determination of the Runoff Coefficient in Urban Regions, an Example from Haifa, Israel
Authors: Ayal Siegel, Moshe Inbar, Amatzya Peled
Abstract:
This study examined the characteristic runoff coefficient in different urban areas. The main area studied is located in the city of Haifa, northern Israel. Haifa spreads out eastward from the Mediterranean seacoast to the top of the Carmel mountain range, at an elevation of 300 m above sea level. For this research project, four watersheds were chosen, each characterizing a different part of the city: 1) Upper Hadar, a spacious suburb on the upper mountainside; 2) Qiryat Eliezer, a crowded suburb on a level plane of the watershed; 3) Technion, a large technical research university located halfway between the top of the mountain range and the coastline; and 4) Keret, a remote suburb on the southwestern outskirts of Haifa. In all of the watersheds found suitable, instruments were installed to continuously measure the water level flowing in the channels. Three rainfall gauges scattered across the study area complete the hydrological requirements for this research project. The runoff coefficient C in peak discharge events was determined by the Rational Formula. The main research finding is the significant relationship between the intensity of rainfall and the impervious area connected to the drainage system of the watershed. For less intense rainfall, the full potential of the connected impervious area will not be exploited; as a result, the runoff coefficient value decreases, as do the peak discharge rate and the runoff yield from the storm event. The research results will enable application to other areas by means of a hydrological model to be set up on GIS software, which will make it possible to estimate the runoff coefficient of any given city watershed. Keywords: runoff coefficient, rational method, time of concentration, connected impervious area
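The Rational Formula back-calculation of C used above can be written explicitly. In the common mixed units (Q in m³/s, rainfall intensity i in mm/h at the time of concentration, catchment area A in km²), the formula reads Q = C·i·A/3.6, so C is recovered from a measured peak discharge as C = 3.6·Q/(i·A). A minimal sketch with illustrative numbers, not measured Haifa values:

```python
# Rational Formula: Q = C * i * A / 3.6, with Q in m^3/s, i in mm/h
# (intensity at the time of concentration) and A in km^2.
# Hence the back-calculated runoff coefficient is C = 3.6 * Q / (i * A).

def runoff_coefficient(q_peak_m3s: float, i_mm_per_h: float, area_km2: float) -> float:
    """Back-calculate the runoff coefficient C from a measured peak discharge."""
    return 3.6 * q_peak_m3s / (i_mm_per_h * area_km2)

# Illustrative storm: a 1.2 m^3/s peak from a 0.9 km^2 watershed under
# 20 mm/h rainfall.
print(round(runoff_coefficient(1.2, 20.0, 0.9), 3))  # -> 0.24
```

Lower intensities yield smaller back-calculated C values, consistent with the finding that weak storms do not exploit the full connected impervious area.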
Procedia PDF Downloads 351
1138 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery
Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori
Abstract:
Unmanned Aircraft Systems (UAS) usually navigate through the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS can have its accuracy degraded at any time, or the GNSS signal can be lost entirely. In addition, there is the possibility of malicious interference, known as jamming. An image navigation system can therefore solve the autonomy problem: if GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, allowing the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. For the calculation of the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). Thus, if the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors on the order of 0.5 m in positioning and 0.6° in camera attitude were verified, so navigation through imagery can reach accuracy equal to or better than that of GNSS receivers without differential correction. Therefore, navigating through imagery is a good alternative for enabling autonomous navigation. Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS
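The spatial resection at the core of this approach can be sketched with a simplified pinhole model: given homologous points whose ground coordinates would come from the orthophoto and DSM, solve for the perspective-center coordinates and the camera attitude by minimizing reprojection residuals. The focal length, point coordinates, and pose below are synthetic assumptions, not the authors' data or their matching pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

F = 0.05  # assumed focal length in metres

def rot(omega, phi, kappa):
    """Rotation matrix from three attitude angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, ground):
    """Project ground points into the image for pose (Xc, Yc, Zc, o, p, k)."""
    cam, angles = params[:3], params[3:]
    p = (rot(*angles) @ (ground - cam).T).T
    return np.column_stack([F * p[:, 0] / p[:, 2], F * p[:, 1] / p[:, 2]])

# Synthetic homologous points: ground X, Y, Z in metres (in practice, from
# the orthophoto and the DSM altitude).
ground = np.array([[0., 0., 10.], [40., 5., 12.], [10., 50., 8.],
                   [45., 45., 15.], [20., 25., 11.]])
true_pose = np.array([25., 25., 500., 0.01, -0.02, 0.05])
image = project(true_pose, ground)   # the observed image coordinates

# Resection: minimize reprojection residuals from a rough initial guess.
def residuals(params):
    return (project(params, ground) - image).ravel()

sol = least_squares(residuals, x0=[0., 0., 450., 0., 0., 0.])
print(np.round(sol.x, 3))  # recovered camera position and attitude
```

In the real system, the residual covariance at the solution is what provides "their respective accuracies" for position and attitude.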
Procedia PDF Downloads 193
1137 Thermal Vacuum Chamber Test Result for CubeSat Transmitter
Authors: Fitri D. Jaswar, Tharek A. Rahman, Yasser A. Ahmad
Abstract:
A CubeSat in low Earth orbit (LEO) mainly uses an ultra-high-frequency (UHF) transmitter with fixed radio frequency (RF) output power to download the telemetry and the payload data. The transmitter consumes a large amount of electrical energy during transmission, considering the limited size of a CubeSat. A transmitter with power control ability is designed to optimize the signal-to-noise ratio (SNR) and achieve efficient power consumption. In this paper, a thermal vacuum chamber (TVAC) test is performed to validate the performance of the UHF-band transmitter with power control capability. The TVAC is used to simulate the satellite's condition in the outer space environment. The TVAC test was conducted at the Laboratory of Spacecraft Environment Interaction Engineering, Kyushu Institute of Technology, Japan. The TVAC test used four thermal cycles from +60 °C to -20 °C for the temperature setting, and the pressure inside the chamber was less than 10⁻⁵ Pa. During the test, the UHF transmitter was integrated in a CubeSat configuration with other CubeSat subsystems such as the on-board computer (OBC), the power module, and the satellite structure. The system is validated and verified through its performance in terms of frequency stability and RF output power. The UHF-band transmitter output power is tested from 0.5 W to 2 W according to the satellite modes of operation and the satellite power limitations. The measured frequency stability is less than 2 ppm over the tested operating temperature range. The test demonstrates that the RF output power is adjustable in a thermal vacuum condition. Keywords: communication system, CubeSat, SNR, UHF transmitter
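The frequency-stability figure quoted above is conventionally expressed in parts per million (ppm): the deviation of the measured carrier from its nominal frequency, divided by the nominal frequency. A minimal sketch of that calculation; the 437.5 MHz carrier and 500 Hz drift are illustrative assumptions, not the test data:

```python
# Frequency stability in ppm: (f_measured - f_nominal) / f_nominal * 1e6.

def stability_ppm(f_measured_hz: float, f_nominal_hz: float) -> float:
    """Fractional frequency deviation expressed in parts per million."""
    return (f_measured_hz - f_nominal_hz) / f_nominal_hz * 1e6

# An assumed 437.5 MHz UHF carrier drifting by 500 Hz over a thermal cycle
# corresponds to about 1.14 ppm, within the < 2 ppm figure reported above.
print(round(stability_ppm(437_500_500.0, 437_500_000.0), 2))  # -> 1.14
```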
Procedia PDF Downloads 265
1136 The Fundamental Research and Industrial Application on CO₂+O₂ in-situ Leaching Process in China
Authors: Lixin Zhao, Genmao Zhou
Abstract:
Traditional acid in-situ leaching (ISL) is not suitable for sandstone uranium deposits with low permeability and a high content of carbonate minerals, because of blocking by calcium sulfate precipitates. Another factor influencing acid ISL of uranium is that the pyrite in the ore rocks reacts with the oxidation reagent and produces large amounts of sulfate ions, which may speed up the precipitation of calcium sulfate and consume much of the oxidation reagent. Due to advantages such as lower chemical reagent consumption and less groundwater pollution, the CO₂+O₂ in-situ leaching method has become one of the important research areas in uranium mining. China is the second country in the world where CO₂+O₂ ISL has been adopted in industrial uranium production. It is shown that CO₂+O₂ ISL in China has been successfully developed: the reaction principle, technical process, well field design and drilling engineering, uranium-bearing solution processing, etc. have been fully studied. At the current stage, several uranium mines use the CO₂+O₂ ISL method to extract uranium from the ore-bearing aquifers. The industrial application and development potential of the CO₂+O₂ ISL method in China are summarized. By using CO₂+O₂ neutral leaching technology, the problem of calcium carbonate and calcium sulfate precipitation during uranium mining has been solved. By reasonably regulating the amounts of CO₂ and O₂, the related ions and hydro-chemical conditions can be controlled within limits that avoid calcium sulfate and calcium carbonate precipitation. On this premise, the demands of CO₂+O₂ uranium leaching have been met to the maximum extent, which not only realizes the effective leaching of uranium but also avoids the formation and precipitation of calcium carbonate and calcium sulfate, realizing the industrial development of sandstone-type uranium deposits. Keywords: CO₂+O₂ ISL, industrial production, well field layout, uranium processing
Procedia PDF Downloads 178
1135 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution
Authors: Nikolay P. Brayanov, Anna V. Stoynova
Abstract:
The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without particular programming knowledge. The different levels of simulation support rapid prototyping, verifying, and validating the product even before it exists physically. Nowadays, the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially in software qualification. Some companies using it report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow, and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, Code Generation Advisor is used for automatic configuration; all additional configuration parameters are set to auto, where applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code and a manually developed one. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work. Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development
Procedia PDF Downloads 245
1134 Comfort Needs and Energy Practices in Low-Income, Tropical Housing from a Socio-Technical Perspective
Authors: Tania Sharmin
Abstract:
Energy use, overheating, and thermal discomfort in low-income tropical housing remain under-researched. This research explores these aspects in the Loving Community, a housing colony created for former leprosy patients and their families in Ahmedabad, India. The living conditions in these households and the working practices of the inhabitants, in terms of how the building and its internal and external spaces are used, will be explored through interviews and monitoring based on a household survey and a focus group discussion (FGD). The findings will provide a unique and in-depth account of how the relocation of the affected households to new, flood-resistant, and architecturally designed buildings may have affected the dwellers' household routines (health and well-being, comfort, satisfaction, and working practices) and overall living conditions compared to those living in poorly designed existing low-income housing. The new houses were built under an innovative building project supported by De Montfort University Leicester (DMU)'s Square Mile India project. A comparison of the newly built and existing building typologies will reveal how building design affects people's use of space and energy. The findings will be helpful for designing healthier, more energy-efficient, and socially acceptable low-income housing in future, thus addressing three of the United Nations' Sustainable Development Goals: 3 (health and well-being), 7 (energy), and 11 (safe, resilient, and sustainable human settlements). This will further facilitate knowledge exchange between policy makers, developers, designers, and occupants, focused on strategies to increase stakeholders' participation in the design process. Keywords: thermal comfort, energy use, low-income housing, tropical climate
Procedia PDF Downloads 124
1133 Reliability Modeling on Drivers’ Decision during Yellow Phase
Authors: Sabyasachi Biswas, Indrajit Ghosh
Abstract:
The random and heterogeneous behavior of vehicles in India poses a great challenge for researchers. Stop-and-go modeling at signalized intersections under heterogeneous traffic conditions has remained one of the most sought-after fields. Vehicles are often caught in the dilemma zone and are unable to decide quickly whether to stop or cross the intersection; this hampers traffic movement and may lead to accidents. The purpose of this work is to develop a stop-and-go prediction model that depicts the driver's decision during the yellow time at signalized intersections. To accomplish this, certain traffic parameters were taken into account to develop a surrogate model. This research investigated the stop-and-go behavior of drivers by collecting data from four signalized intersections located in two major Indian cities. A model was developed to predict the driver's decision-making during the yellow phase of the traffic signal. The parameters used for modeling included distance to the stop line, time to the stop line, speed, and length of the vehicle. A Kriging-based surrogate model was developed to investigate drivers' decision-making behavior in the amber phase. It is observed that the proposed approach yields a highly accurate result (97.4 percent) with the Gaussian function. The accuracy of the crossing probability was 95.45, 90.9, and 86.36 percent, respectively, as predicted by the Kriging models with Gaussian, exponential, and linear functions. Keywords: decision-making, dilemma zone, surrogate model, Kriging
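A Kriging-style surrogate with a Gaussian correlation function, in the spirit of the stop/go model above, can be sketched as follows: interpolate the crossing decision (1 = go, 0 = stop) from distance to the stop line and approach speed. The training grid, labels, and correlation parameter theta are illustrative assumptions, not the field data from the four intersections.

```python
import numpy as np

# Hypothetical training design: distance to stop line (m) x approach speed
# (m/s), labeled go (1) when the time to the stop line is under 3.5 s.
distances = np.arange(0, 81, 20)
speeds = np.arange(5, 21, 5)
X = np.array([(d, s) for d in distances for s in speeds], dtype=float)
y = (X[:, 0] / X[:, 1] < 3.5).astype(float)

def gaussian_corr(A, B, theta=0.005):
    """Gaussian (squared-exponential) correlation between two sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

# Simple Kriging predictor: solve R @ alpha = y once, then correlate new
# points against the training set. A tiny nugget keeps R well-conditioned.
R = gaussian_corr(X, X) + 1e-8 * np.eye(len(X))
alpha = np.linalg.solve(R, y)

def predict(X_new):
    return gaussian_corr(np.atleast_2d(X_new), X) @ alpha

# The surrogate reproduces the observed decisions at the training points
# and interpolates smoothly between them.
print(predict(np.array([[10.0, 15.0]])))  # go indicator, 10 m out at 15 m/s
```

The Gaussian, exponential, and linear variants compared in the abstract differ only in the correlation function used in place of `gaussian_corr`.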
Procedia PDF Downloads 310
1132 Harmonic Mitigation and Total Harmonic Distortion Reduction in Grid-Connected PV Systems: A Case Study Using Real-Time Data and Filtering Techniques
Authors: Atena Tazikeh Lemeski, Ismail Ozdamar
Abstract:
This study presents a detailed analysis of harmonic distortion in a grid-connected photovoltaic (PV) system using real-time data captured from a solar power plant. Harmonics introduced by inverters in PV systems can degrade power quality and lead to increased Total Harmonic Distortion (THD), which poses challenges such as transformer overheating, increased power losses, and potential grid instability. This research addresses these issues by applying Fast Fourier Transform (FFT) to identify significant harmonic components and employing notch filters to target specific frequencies, particularly the 3rd harmonic (150 Hz), which was identified as the largest contributor to THD. Initial analysis of the unfiltered voltage signal revealed a THD of 21.15%, with prominent harmonic peaks at 150 Hz, 250 Hz, and 350 Hz, corresponding to the 3rd, 5th, and 7th harmonics, respectively. After implementing the notch filters, the THD was reduced to 5.72%, demonstrating the effectiveness of this approach in mitigating harmonic distortion without affecting the fundamental frequency. This paper provides practical insights into the application of real-time filtering techniques in PV systems and their role in improving overall grid stability and power quality. The results indicate that targeted harmonic mitigation is crucial for the sustainable integration of renewable energy sources into modern electrical grids. Keywords: grid-connected photovoltaic systems, fast Fourier transform, harmonic filtering, inverter-induced harmonics
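The FFT-plus-notch-filter chain described above can be sketched on a synthetic signal: build a 50 Hz waveform polluted with 3rd, 5th, and 7th harmonics, measure THD from the FFT, then suppress the dominant 150 Hz component with a notch filter. The amplitudes, sample rate, and Q factor are illustrative assumptions, not the plant's measured data.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 5000   # sample rate (Hz), assumed
F0 = 50     # fundamental (Hz)
t = np.arange(0, 1, 1 / FS)
v = (np.sin(2 * np.pi * F0 * t)
     + 0.18 * np.sin(2 * np.pi * 3 * F0 * t)    # 3rd harmonic (150 Hz)
     + 0.08 * np.sin(2 * np.pi * 5 * F0 * t)    # 5th harmonic (250 Hz)
     + 0.05 * np.sin(2 * np.pi * 7 * F0 * t))   # 7th harmonic (350 Hz)

def thd(signal, fs, f0, n_harmonics=10):
    """THD = sqrt(sum of harmonic amplitudes^2) / fundamental amplitude."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    harmonics = [amp(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(sum(a ** 2 for a in harmonics)) / amp(f0)

print(f"THD before: {100 * thd(v, FS, F0):.2f}%")

# Notch out the dominant 3rd harmonic; zero-phase filtering (filtfilt)
# leaves the 50 Hz fundamental essentially untouched.
b, a = iirnotch(w0=150, Q=30, fs=FS)
v_filt = filtfilt(b, a, v)
print(f"THD after:  {100 * thd(v_filt, FS, F0):.2f}%")
```

In the study, one such notch per targeted harmonic brought the measured THD from 21.15% down to 5.72%; here a single 150 Hz notch removes the dominant contribution while the 5th and 7th harmonics remain.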
Procedia PDF Downloads 41
1131 Efficient Delivery of Biomaterials into Living Organism by Using Noble Metal Nanowire Injector
Authors: Kkochorong Park, Keun Cheon Kim, Hyoban Lee, Eun Ju Lee, Bongsoo Kim
Abstract:
The introduction of biomaterials such as DNA, RNA, and proteins is important for many research areas. There are many methods to introduce biomaterials into living organisms, tissues, and cells. Several indirect methods have been used, including virus-mediated delivery, chemical reagents (e.g., lipofectamine), and electrophoresis. Such methods rely on passive delivery through cellular endocytosis, which reduces delivery efficiency. Unlike indirect delivery methods, direct delivery of exogenous biomolecules into the nucleus has been reported to be more efficient for their expression or integration. Nano-sized materials are beneficial for detecting signals from cells or delivering stimuli and materials into cells at the cellular and molecular levels, due to their similar physical scale. In particular, because one-dimensional (1D) nanomaterials with a high aspect ratio, such as nanotubes, nanorods, and nanowires, have nanoscale geometry and excellent mechanical, electrical, and chemical properties, they can play an important role in molecular and cellular biology. In this study, using single-crystalline 1D noble metal nanowires, we fabricated a nano-sized 1D injector that can successfully interface with living cells and directly deliver biomolecules into several cell types (e.g., stem cells, mammalian embryos) without inducing detrimental damage to the living cells. This nano-bio technology could be a promising and robust tool for introducing exogenous biomaterials into living organisms. Keywords: DNA, gene delivery, nanoinjector, nanowire
Procedia PDF Downloads 275
1130 Highly Accurate Target Motion Compensation Using Entropy Function Minimization
Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani
Abstract:
One of the defects of stepped-frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal-to-Noise Ratio (SNR) reduction, and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Thus, compensation for the effects of the Target Motion Parameters (TMPs) should be employed. In this paper, a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization, is proposed. The method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice within a specified interval, seeking a coarse minimum of the entropy function. In the second step, a 1-D search over velocity is done in the locus of that minimum, along several constant-acceleration lines, in order to enhance the accuracy of the minimum found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method. Keywords: automatic target recognition (ATR), high resolution range profile (HRRP), motion compensation, stepped frequency waveform technique (SFW), target motion parameters (TMPs)
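The two-step search can be sketched on a synthetic stepped-frequency signal: a coarse scan over the velocity-acceleration lattice minimizing the entropy of the compensated HRRP, followed by a fine 1-D velocity search at the best acceleration. All radar parameters and the target motion below are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

C = 3e8                                   # speed of light (m/s)
N, F0, DF, T = 256, 10e9, 2e6, 1e-3       # pulses, carrier (Hz), step (Hz), PRI (s)
n = np.arange(N)
fn = F0 + n * DF

def echo(r0, v, a):
    """Stepped-frequency echo of a point target at r(t) = r0 + v*t + a*t^2/2."""
    r = r0 + v * n * T + 0.5 * a * (n * T) ** 2
    return np.exp(-4j * np.pi * fn * r / C)

def entropy(v_hat, a_hat, sig):
    """Entropy of the HRRP after compensating candidate motion (v_hat, a_hat)."""
    phase = 4 * np.pi * fn * (v_hat * n * T + 0.5 * a_hat * (n * T) ** 2) / C
    p = np.abs(np.fft.ifft(sig * np.exp(1j * phase))) ** 2
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()

sig = echo(r0=1500.0, v=30.0, a=5.0)      # true TMPs to be recovered

# Step 1: coarse discrete search over the whole velocity-acceleration lattice.
vs, accs = np.linspace(0, 60, 61), np.linspace(0, 10, 21)
_, v0, a0 = min((entropy(v, a, sig), v, a) for v in vs for a in accs)

# Step 2: fine 1-D velocity search along the best constant-acceleration line.
fine = np.linspace(v0 - 1, v0 + 1, 201)
v_hat = fine[int(np.argmin([entropy(v, a0, sig) for v in fine]))]
print(v_hat, a0)  # estimated velocity and acceleration
```

Exact compensation collapses the HRRP to a single focused peak (minimum entropy), while any residual velocity or acceleration error leaves a quadratic or cubic phase across the pulses that spreads the profile and raises the entropy.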
Procedia PDF Downloads 153