Search results for: real estate price prediction
1756 Physical-Mechanical Characteristics of Monocrystalline Si1-xGex (x ≤ 0.02) Solid Solutions
Authors: I. Kurashvili, A. Sichinava, G. Bokuchava, G. Darsavelidze
Abstract:
Si-Ge solid solutions (bulk poly- and monocrystalline samples, thin films) hold strong promise for application in semiconductor devices, in particular in optoelectronics and microelectronics. In this light, a comprehensive study of the structural state of defects and of the structure-sensitive physical properties of Si-Ge solid solutions as a function of the Si and Ge content is very important. The present work deals with investigations of the microstructure, electrophysical characteristics, microhardness, internal friction and shear modulus of Si1-xGex (x ≤ 0.02) bulk monocrystals conducted at room temperature. The Si-Ge bulk crystals were grown by the Czochralski method along the [111] crystallographic direction. The investigated monocrystalline Si-Ge samples exhibit p-type conductivity, a carrier concentration of 5×10¹⁴–1×10¹⁵ cm⁻³, a dislocation density of 5×10³–1×10⁴ cm⁻², and a Vickers microhardness of 900–1200 kg/mm². The samples measure 0.5×0.5×(10–15) mm³ and are oriented along the [111] direction. Under torsion oscillations at ≈1 Hz, a multistage change of internal friction and shear modulus has been revealed over the strain-amplitude interval 10⁻⁵–5×10⁻³. Critical values of the strain amplitude at which hysteretic changes of the inelastic characteristics and microplasticity are observed have been determined, as have the critical strain amplitude and elastic limit values. A tendency of the dynamic mechanical characteristics to decrease with increasing Ge content in Si-Ge solid solutions is shown. The observed changes are discussed in terms of the interaction of various dislocations with point defects and their complexes in the real structure of Si-Ge solid solutions.
Keywords: microhardness, internal friction, shear modulus, monocrystalline
Procedia PDF Downloads 352
1755 Policy Effectiveness in the Situation of Economic Recession
Authors: S. K. Ashiquer Rahman
Abstract:
Proper policy handling may fail to attain its targets in some recessions, e.g., pandemic-led crises and shocks to economic variables. In such a situation, the central bank implements monetary policy, choosing to increase exogenous expenditure and the money supply in order to boost economic growth; the question is whether monetary policy is relatively more effective than fiscal policy in altering a country's real output growth, or whether both are comparably effective. The dispute over the relationship between monetary and fiscal policy is centered on the inflationary penalty of deficit financing by the fiscal authority. With the latest shocks to economic variables as well as the pandemic-led crisis, central banks around the world faced a general dilemma between raising rates and lowering them to sustain economic activity. Even where prices remained fundamentally unaffected, aggregate demand was affected significantly and negatively by the outbreak of the COVID-19 pandemic. To empirically investigate the effects of the economic shocks associated with the COVID-19 pandemic, the paper considers the effectiveness of monetary and fiscal policy as linked to the adjustment mechanism of different economic variables. To examine how these shocks affect the effectiveness of monetary and fiscal policy with respect to a country's output growth, the paper uses a simultaneous equations model estimated with the Two-Stage Least Squares (2SLS) and Ordinary Least Squares (OLS) methods.
Keywords: IS-LM framework, pandemic, economic variable shocks, simultaneous equations model, output growth
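The two estimators named above can be illustrated with a short, self-contained sketch. The synthetic data, instruments, and coefficients below are placeholders, not the paper's IS-LM variables; the sketch only shows how a 2SLS estimate is formed in two OLS stages and how it differs from plain OLS when a regressor is endogenous.

```python
# Hand-rolled two-stage least squares (2SLS) on synthetic data; the actual
# IS-LM system and its variables are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))                       # instruments (hypothetical)
u = rng.normal(size=n)                            # shared shock causing endogeneity
x_endog = z @ np.array([0.6, -0.4]) + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = 1.0 + 2.0 * x_endog + u + rng.normal(size=n)                      # outcome, true slope = 2

def add_const(a):
    return np.column_stack([np.ones(len(a)), a])

# Stage 1: project the endogenous regressor on the instruments (OLS).
Z = add_const(z)
x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]

# Stage 2: regress the outcome on the fitted values from stage 1.
beta_2sls = np.linalg.lstsq(add_const(x_hat), y, rcond=None)[0]

# Plain OLS for comparison (biased when the regressor is endogenous).
beta_ols = np.linalg.lstsq(add_const(x_endog), y, rcond=None)[0]
print("2SLS:", beta_2sls, "OLS:", beta_ols)
```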
Procedia PDF Downloads 95
1754 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema and Interface for Mapping and Communication
Authors: Devanjan Bhattacharya, Jitka Komarkova
Abstract:
The research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, plus a small installation software package. The geospatial datasets are maps of a region, which are likewise inexpensive. Hence there is no need for major investment, and the system reaches everyone with a mobile phone. A novel architecture for hazard assessment and warning is introduced in which major ICT technologies are interfaced to give a unique WebGIS-based, dynamic, real-time geohazard warning communication system. An architecture for integrating WebGIS with telecommunication technology is presented for the first time. Existing technologies are interfaced in a novel architectural design to address a neglected domain through dynamically updatable WebGIS-based warning communication. The work publishes a new architecture and a novel, sustainable and user-friendly approach to hazard warning techniques. Coupling of hazard zonation and hazard warning procedures into a single system is demonstrated, and a generalized architecture for deciphering a range of geo-hazards has been developed. Hence the developmental work presented here can be summarized as: the development of an internet-SMS-based automated geo-hazard warning communication system; integration of a warning communication system with a hazard evaluation system; interfacing of different open-source technologies in the design and development of a warning system; modularization of different technologies towards a warning communication system; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system has been functionally automated and generalized enough that it can be used for any hazard, and setup requirements have been kept to a minimum.
Keywords: geospatial, web-based GIS, geohazard, warning system
Procedia PDF Downloads 408
1753 Predictive Analytics in Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Earlier viewed as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, unimaginable a few decades ago. This has opened an opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between the operational data system and the information technology system is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing asset utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, systems approach to asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset in an application tool, SAS. The reason for using SAS for this analysis is that SAS provides an analytics-based framework to improve uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 360
1752 Using the Micro Computed Tomography to Study the Corrosion Behavior of Magnesium Alloy at Different pH Values
Authors: Chia-Jung Chang, Sheng-Che Chen, Ming-Long Yeh, Chih-Wei Wang, Chih-Han Chang
Abstract:
Introduction and Motivation: In recent years, magnesium alloys have been used as biodegradable medical materials. Magnesium is an essential element in the body and is efficiently excreted by the kidneys. Furthermore, the mechanical properties of magnesium alloys are the closest to those of human bone. However, in some cases magnesium alloy corrodes so quickly that it releases hydrogen at the implant surface. The other corrosion product is the hydroxide ion, which can significantly increase the local pH value. These situations may have adverse effects on local cell functions. On the other hand, magnesium alloys currently corrode too fast to maintain the function of the implant until the tissue has healed. Therefore, much recent research on magnesium alloys has focused on controlling the corrosion rate. The in vitro corrosion behavior of magnesium alloys is affected by many factors, and the pH value is one of them. In this study, we examine the influence of pH on the corrosion behavior of magnesium alloy using micro-CT (micro computed tomography) and other instruments. Material and methods: In the first step, we fabricate guiding plates for specimens of magnesium alloy AZ91 by rapid prototyping. The guiding plates serve as a reference for specimen degradation, so that the position of the specimens in the CT images can be verified; they also simplify the degradation conditions. In the next step, we prepare solutions with different pH values and place the specimens into the solutions to start the corrosion test. CT images, surface photographs and weight are recorded every twelve hours. Results: The preliminary results confirm that CT imaging can be used to quantify the corrosion behavior of magnesium alloy. Moreover, we observe that corrosion always starts from erosion points, possibly because of defects such as dislocations and voids with high strain energy in the material. In the near future, the raw data from the CT images, surface photographs and weight will be processed into mass loss (ML) and corrosion rate. A simple prediction is that the pH value and the degradation rate will be negatively correlated, and we aim to find the equation relating pH value and corrosion rate. We also perform a simple test to simulate the change of pH in a local region; in this test the pH value rises to 10 in a short time. Conclusion: As a biodegradable implant in regions of the human body with stagnating body fluid flow, magnesium alloy can increase local pH values and release hydrogen, both of which may damage human cells. The purpose of this study is to find the equation relating pH value and corrosion rate; after that, we will look for ways to overcome the limitations of medical magnesium alloys.
Keywords: magnesium alloy, biodegradable materials, corrosion, micro-CT
Procedia PDF Downloads 457
1751 Effect of Surface Treatments on the Cohesive Response of Nylon 6/silica Interfaces
Authors: S. Arabnejad, D. W. C. Cheong, H. Chaobin, V. P. W. Shim
Abstract:
Debonding is one of the fundamental damage mechanisms in particle-filled composites. This phenomenon gains importance in nanocomposites because of the extensive interfacial region present in these materials. Understanding the debonding mechanism accurately can help in understanding and predicting the response of nanocomposites as the interface deteriorates. The small length scale of the phenomenon makes experimental characterization complicated and keeps its results far from the real physical behavior. In this study, the damage process at the nylon-6/silica interface is examined through molecular dynamics (MD) modeling and simulations. The silica has been modeled with three forms of surface: without any surface treatment, with a 3-aminopropyltriethoxysilane (APTES) surface treatment, and with a hexamethyldisilazane (HMDZ) surface treatment. The APTES surface modification, used to create functional groups on the silica surface, reacts and forms covalent bonds with nylon 6 chains, while the HMDZ surface treatment interacts with both particle and polymer only through non-bonded interactions. The MD model in this study uses the PCFF force field. The atomic model is generated in a periodic box with a layer of vacuum on top of the polymer layer. This vacuum layer is large enough to ensure that there is no interaction between the particle and the substrate after debonding. The results show that each of the three models exhibits a different traction-separation behavior; however, all of them show an almost bilinear traction-separation response. The study also reveals a strong correlation between the length of the APTES surface treatment and the cohesive strength of the interface.
Keywords: debonding, surface treatment, cohesive response, separation behaviour
Procedia PDF Downloads 460
1750 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling
Authors: Amir Nazemi, Milad Ramezankhani, Marian Körber, Abbas S. Milani
Abstract:
The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges for designers coping with manufacturing defects on site. Wrinkling, e.g., is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are represented by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the best processing condition that can yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and large computational times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique through the application of the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations for fabric material handling and forming processes. Therefore, this method has the ability to enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. This enables designers to virtually develop, test, and optimize their processes based on either algorithmic or machine learning applications. As a preliminary case study, forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
Keywords: material point method, woven fabric composites, forming, material handling
Procedia PDF Downloads 181
1749 Smart Campus Digital Twin: Basic Framework - Current State, Trends and Challenges
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study presents an analysis of the Digital Twin concept applied to the academic environment, focusing on the development of a Digital Twin Smart Campus Framework. Using bibliometric analysis methodologies and a literature review, the research investigates the evolution and applications of the Digital Twin in educational contexts, comparing these findings with the advances of Industry 4.0. Gaps in the existing literature were identified, and the need to adapt Digital Twin principles to meet the specific demands of a smart campus was highlighted. By integrating Industry 4.0 concepts such as automation, the Internet of Things, and real-time data analytics, we propose an innovative framework for the successful implementation of the Digital Twin in academic settings. The results of this study provide valuable insights for university campus managers, allowing for a better understanding of the potential applications of the Digital Twin for operations, security, and user experience optimization. In addition, our framework offers practical guidance for transitioning from a digital campus to a digital twin smart campus, promoting innovation and efficiency in the educational environment. This work contributes to the growing literature on Digital Twins and Industry 4.0 while offering a specific and tailored approach to transforming university campuses into the smart and connected spaces highly demanded by Society 5.0 trends. It is hoped that this framework will serve as a basis for future research and practical implementations in the field of higher education and educational technology.
Keywords: smart campus, digital twin, industry 4.0, education trends, society 5.0
Procedia PDF Downloads 59
1748 Improving Customer Service through Empathy
Authors: Abiola Olukemi Ogunyemi
Abstract:
Many organizations would like to gain customer loyalty, and to achieve this they invest in customer management systems that help them learn and anticipate customers' needs, get feedback from them and serve them. One of the most elementary ways to achieve customer loyalty is for employees to be able to empathize with their customers, to be able to feel what customers feel when the company betrays the trust that is usually shown in patronage and loyalty. Unfortunately, the staff and management of organizations do not always realize the negative impact of treating customers badly, because they do not stop to think how these customers feel. If they did, they would be more careful and more respectful of these people, who are human beings just like they are; they would also be wiser, since this would ultimately make them more profitable businesses. This paper looks at thirteen descriptions of situations in which customers felt treated badly by organizations they trusted, and focuses on the feelings of these customers. If the organization (made of people) could empathize with the customer, then customer service would surely be enhanced. It is expected that these stories, real experiences narrated by young professionals working in Nigeria, can awaken greater empathy for consumers within organizations. Thus, they may help organizations to learn empathy and to incorporate it into their foundational principles for ethical behavior. The paper's contents contribute to a heightened appreciation of empathy as an organizing mechanism by showing how putting oneself in the consumer's shoes can help managers understand how he or she feels. This will lead organizations to be even more innovative in finding ways to meet their customers' needs and to deserve and win their loyalty. It addresses an issue that cuts across cultures, and therefore can be quite thought-provoking for every business owner or for team leads within organizations. By trying to stimulate empathy across the seller-buyer divide, it necessarily contributes to a deeper understanding of empathy as a building block for a sustainable society.
Keywords: customer service, empathy, ethical behavior, respectfulness
Procedia PDF Downloads 259
1747 Effect of Porous Multi-Layer Envelope System on Effective Wind Pressure of Building Ventilation
Authors: Ying-Chang Yu, Yuan-Lung Lo
Abstract:
Building ventilation performance is an important indicator of indoor comfort. However, in addition to the geometry of the building or the proportion of its openings, ventilation performance is also closely related to the actual wind pressure on the building. More and more contemporary buildings are designed with multi-layer exterior envelopes. Due to ventilation and view requirements, a porous outer layer is commonly adopted; it has a significant wind-damping effect, causing a loss of effective wind pressure. However, the relationship between the wind-damping effect and the actual wind pressure is not linear. This effect can keep indoor ventilation within a reasonable range under high wind pressure, while still maintaining adequate ventilation performance under low wind pressure. In this study, wind tunnel experiments were carried out to simulate different wind pressures flowing through the porous outer layer and to observe the actual wind pressure acting on the window layer, in order to find the decreasing relationship between the damping effect of the porous shell and the wind pressure. The specimen scale was designed to be 1:50 to represent real-world building conditions. The study found that the porous enclosure provides protective shielding without affecting low-pressure ventilation. It was also observed that the porous skin damps more wind energy and thus eases the wind pressure under high-speed wind; by using a porous skin, different wind speeds may be reduced to similar pressure levels. The actual mechanism and magnitude of this phenomenon will need further study in the future.
Keywords: multi-layer facade, porous media, wind damping, wind tunnel test, building ventilation
Procedia PDF Downloads 149
1746 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; the quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model which has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; perhaps this model will reveal another insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
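As an illustration of one of the estimation tools named above, the following is a minimal sketch of the classical Kalman filter recursion for a scalar AR(1) state-space model; the model, noise variances, and data are hypothetical and do not reproduce the paper's quantum time series formalization.

```python
# Kalman filter for x_t = phi*x_{t-1} + w_t,  y_t = x_t + v_t  (illustrative only).
import numpy as np

def kalman_ar1(y, phi, q, r, x0=0.0, p0=1.0):
    """Filtered state estimates given AR coefficient phi, process variance q
    and observation variance r."""
    x_hat, p = x0, p0
    filtered = []
    for obs in y:
        # Predict step
        x_pred = phi * x_hat
        p_pred = phi * p * phi + q
        # Update step
        k = p_pred / (p_pred + r)            # Kalman gain
        x_hat = x_pred + k * (obs - x_pred)
        p = (1.0 - k) * p_pred
        filtered.append(x_hat)
    return np.array(filtered)

# Simulated AR(1) data, then filtering with assumed noise variances.
rng = np.random.default_rng(0)
true_x = np.zeros(200)
for t in range(1, 200):
    true_x[t] = 0.8 * true_x[t - 1] + rng.normal(scale=0.5)
y = true_x + rng.normal(scale=0.3, size=200)
estimates = kalman_ar1(y, phi=0.8, q=0.25, r=0.09)
```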
Procedia PDF Downloads 469
1745 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
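A hedged sketch of the pipeline outlined above follows. The dataset file and column names (players.csv, "overall") are assumptions, "Select Best" is read here as scikit-learn's SelectKBest, and only one of the listed models (Random Forest) is shown for brevity.

```python
# Feature selection + model training + MSE/R^2 evaluation for player performance
# prediction (hypothetical data layout).
import pandas as pd
from sklearn.feature_selection import SelectKBest, RFE, f_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

players = pd.read_csv("players.csv")                           # assumed attribute table
X = players.select_dtypes("number").drop(columns=["overall"])  # numeric attributes only
y = players["overall"]                                         # assumed target score

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature significance: keep the k attributes most associated with the target ...
selector = SelectKBest(score_func=f_regression, k=10).fit(X_train, y_train)
# ... or rank features recursively with a tree-based estimator (RFE).
rfe = RFE(RandomForestRegressor(n_estimators=100, random_state=42),
          n_features_to_select=10).fit(X_train, y_train)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(selector.transform(X_train), y_train)
pred = model.predict(selector.transform(X_test))

print("MSE:", mean_squared_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))
```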
Procedia PDF Downloads 38
1744 The Visual Side of Islamophobia: A Social-Semiotic Analysis
Authors: Carmen Aguilera-Carnerero
Abstract:
Islamophobia, the unfounded hostility towards Muslims and Islam, has been deeply studied over the last decades from different perspectives ranging from anthropology and sociology to media studies and linguistics. In the past few years, we have witnessed how the birth of social media has transformed formerly passive audiences into an active group that not only receives and digests information but also creates and comments publicly on any event of its interest. In this way, average citizens have now been granted the power of becoming potential opinion leaders. This rise of social media in recent years gave way to a different form of Islamophobia, the so-called 'cyberIslamophobia'. Considerably less attention, however, has been given to the study of Islamophobic images that accompany texts in social media. This paper analyses a corpus of 300 images of an Islamophobic nature taken from social media (Twitter and Facebook) from the years 2014-2017 to see: a) how hate speech is visually constructed, b) how cyberIslamophobia is articulated through images and whether there are differences/similarities between the textual and the visual elements, c) the impact of those images on the audience and its reaction to them, and d) whether visual cyberIslamophobia has undergone any process of permeating popular culture (for example, through memes) and its real impact. To carry out this task, we have used Critical Discourse Analysis as the most suitable theoretical framework for analysing and criticizing the dominant discourses that affect inequality, injustice, and oppression. The images were analysed according to the theoretical framework provided by visual framing theory and visual design grammar, leading to the conclusion that memes are subtle but very powerful tools to spread Islamophobia and foster hate speech under the guise of humour within popular culture.
Keywords: cyberIslamophobia, visual grammar, social media, popular culture
Procedia PDF Downloads 167
1743 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection
Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada
Abstract:
With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty when tackling this problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies such imbalanced data, is the overlap of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. Firstly, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Secondly, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
Keywords: machine learning, imbalanced data, data mining, big data
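The two-step idea can be sketched as follows. This is an illustrative interpretation using KMeans and a simple distance rule, not the authors' exact implementation; the cluster count and the overlap criterion are assumptions.

```python
# Illustrative OSBNR-like cleaning: 1) cluster the minority class,
# 2) drop majority instances that overlap those clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def osbnr_like_cleaning(X, y, minority_label=1, n_clusters=5):
    X_min, X_maj = X[y == minority_label], X[y != minority_label]
    y_maj = y[y != minority_label]

    # Step 1: group similar minority instances into behavior clusters.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_min)
    centers = km.cluster_centers_
    # Radius of each cluster = largest member-to-center distance.
    radii = np.array([
        pairwise_distances(X_min[km.labels_ == c], centers[c:c + 1]).max()
        for c in range(n_clusters)
    ])

    # Step 2: majority instances falling inside any minority cluster are
    # treated as behavioral noise and removed.
    d = pairwise_distances(X_maj, centers)        # shape (n_majority, n_clusters)
    keep = ~(d <= radii).any(axis=1)

    X_clean = np.vstack([X_maj[keep], X_min])
    y_clean = np.concatenate([y_maj[keep], np.full(len(X_min), minority_label)])
    return X_clean, y_clean
```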
Procedia PDF Downloads 130
1742 Optimum Performance of the Gas Turbine Power Plant Using Adaptive Neuro-Fuzzy Inference System and Statistical Analysis
Authors: Thamir K. Ibrahim, M. M. Rahman, Marwah Noori Mohammed
Abstract:
This study deals with modeling and performance enhancement of a gas-turbine combined cycle power plant. Clean and safe energy is among the greatest challenges in meeting the requirements of a green environment. These requirements have eroded the long-standing dominance of the steam turbine (ST) in world power generation, and the gas turbine (GT) is replacing it. Therefore, it is necessary to predict the characteristics of the GT system and optimize its operating strategy by developing a simulation system. The integrated model and simulation code for evaluating the performance of the gas turbine power plant are developed in MATLAB. The performance codes for heavy-duty GT and CCGT power plants are validated against the real Baiji GT and MARAFIQ CCGT plants, and the results have been satisfactory. A new correlation was considered for all types of simulation data; its coefficient of determination (R²) was calculated as 0.9825. Some of the latest published correlations were also checked against the Baiji GT plant, and an error analysis was applied. GT performance was judged by particular parameters selected from the simulation model, and an Adaptive Neuro-Fuzzy Inference System (ANFIS), an advanced optimization technique, was also utilized. The best thermal efficiency and power output attained were about 56% and 345 MW, respectively. Thus, the operating conditions and ambient temperature strongly influence the overall performance of the GT. The optimum efficiency and power are found at higher turbine inlet temperatures. It can be concluded that the developed models are powerful tools for estimating the overall performance of GT plants.
Keywords: gas turbine, optimization, ANFIS, performance, operating conditions
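For reference, a coefficient of determination such as the quoted R² = 0.9825 and a simple error analysis can be computed as below; the measured and simulated values are placeholders, not the Baiji or MARAFIQ plant data.

```python
# R^2 and relative error between measured plant data and simulated output
# (illustrative numbers only).
import numpy as np

measured = np.array([310.0, 322.5, 334.0, 345.0])    # e.g. plant power output, MW (hypothetical)
simulated = np.array([308.2, 324.1, 332.8, 346.5])   # model prediction at the same conditions

ss_res = np.sum((measured - simulated) ** 2)          # residual sum of squares
ss_tot = np.sum((measured - measured.mean()) ** 2)    # total sum of squares
r_squared = 1.0 - ss_res / ss_tot

relative_error = np.abs(measured - simulated) / measured * 100   # error analysis, %
print(r_squared, relative_error)
```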
Procedia PDF Downloads 425
1741 Evaluation of Produced Water Treatment Using Advanced Oxidation Processes and Sodium Ferrate(VI)
Authors: Erica T. R. Mendonça, Caroline M. B. de Araujo, Osvaldo Chiavone Filho, Maurício A. da Motta Sobrinho
Abstract:
Oil and gas exploration is an essential activity for modern society, although meeting its global demand has caused considerable damage to the environment, mainly due to the generation of produced water, an effluent associated with the oil and gas obtained during extraction. The aim of this study is to evaluate the treatment of produced water in order to reduce its oil and grease content (O&G), using flotation as a pre-treatment combined with oxidation for degradation of the remaining organic load. Thus, advanced oxidation processes (AOP) using both Fenton and photo-Fenton reactions were tested, as well as a chemical oxidation treatment using sodium ferrate(VI), Na2[FeO4], as a strong oxidant. All the studies were carried out using real samples of produced water from the petroleum industry. The oxidation process using the ferrate(VI) ion was studied based on factorial experimental designs. The factorial design was used in order to study how the variables pH, temperature and concentration of Na2[FeO4] influence the O&G levels. For the treatment using the ferrate(VI) ion, the results showed that the best operating point is obtained at a temperature of 28 °C, pH 3, and a 2000 mg/L solution of Na2[FeO4]. This experiment achieved a final O&G level of 4.7 mg/L, corresponding to a 94% removal efficiency of oils and greases. Comparing the Fenton and photo-Fenton processes, it was observed that the Fenton reaction did not provide a good reduction of O&G (only around 20%). On the other hand, a degradation of approximately 80.5% of oil and grease was obtained after seven hours of treatment using the photo-Fenton process, which indicates that the best process combination was flotation followed by the photo-Fenton reaction using solar radiation, with an overall O&G removal efficiency of approximately 89%.
Keywords: advanced oxidation process, ferrate (VI) ion, oils and greases removal, produced water treatment
Procedia PDF Downloads 319
1740 Introduction to Multi-Agent Deep Deterministic Policy Gradient
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents
Procedia PDF Downloads 24
1739 Interdisciplinary Collaborative Innovation Mechanism for Sustainability Challenges
Authors: C. Park, H. Lee, Y-J. Lee
Abstract:
Aim: This study presents the Interdisciplinary Collaborative Innovation Mechanism as a medium to enable the effective generation of innovations for the sustainability challenges facing humanity. Background: An interdisciplinary approach, fusing disparate knowledge and perspectives from diverse expertise and subject areas, is one of the key requirements for addressing the intricate nature of sustainability issues. To date there is a lack of rigorous empirical study of the systematic structure of interdisciplinary collaborative innovation for sustainability. Method: To address this research gap, an action research approach is adopted to develop the Interdisciplinary Collaborative Innovation Mechanism (ICIM) framework, based on an empirical study of a total of 28 open innovation competitions in the format of MAKEathons between 2016 and 2023. First, the conceptual framework was formulated based on the literature findings; the framework was subsequently tested and iterated. Outcomes: The findings provide the ICIM framework composed of five elements: Discipline Diversity Quadruple; Systematic Structure; Inspirational Stimuli; Supportive Collaboration Environment; and Hardware and Intellectual Support. The framework offers a discussion of the key elements to consider when attempting to facilitate interdisciplinary collaboration for sustainability innovation. Contributions: This study contributes to the two burgeoning areas of sustainable development and open innovation studies by articulating a concrete structure to bridge the gap between them. In practice, the framework helps facilitate effective innovation processes and positive social and environmental impact for real-world sustainability challenges.
Keywords: action research, interdisciplinary collaboration, open innovation, problem-solving, sustainable development, sustainability challenges
Procedia PDF Downloads 247
1738 Implementation of Industrial Ecology Principles in the Production and Recycling of Solar Cells and Solar Modules
Authors: Julius Denafas, Irina Kliopova, Gintaras Denafas
Abstract:
Three opportunities for implementing industrial ecology principles in the real industrial production of c-Si solar cells and modules are presented in this study: material flow dematerialisation, product modification and industrial symbiosis. Firstly, it is shown how collaboration between R&D institutes and industry helps to achieve a significant reduction of material consumption by a) eliminating the phosphosilicate glass cleaning process and b) shortening the silicon nitride coating production step. Secondly, it is shown how modification of the solar module design can reduce the CO₂ footprint of the product and enhance waste prevention. This was achieved by implementing a frameless glass/glass solar module design instead of a glass/backsheet design with an aluminium frame. Such a design change is possible without purchasing new equipment and without loss of the main product properties, such as efficiency, rigidity and longevity. Thirdly, industrial symbiosis in solar cell production is possible when manufacturing waste (silicon wafer and solar cell breakage) as well as used solar modules are collected, sorted and supplied as raw materials to other companies involved in the production chain of c-Si solar cells. The results obtained showed that solar cells produced from recycled silicon can have electrical parameters comparable to those produced from standard, commercial silicon wafers. The above-mentioned work was performed at the solar cell producer Soli Tek R&D in the frame of the H2020 projects CABRISS and Eco-Solar.
Keywords: manufacturing, process optimisation, recycling, solar cells, solar modules, waste prevention
Procedia PDF Downloads 142
1737 The Effect of Artificial Intelligence on Civil Engineering Outputs and Designs
Authors: Mina Youssef Makram Ibrahim
Abstract:
Engineering identity contributes to the professional and academic sustainability of female engineers. Recognition is an important factor that shapes an engineer's identity, and people who are deprived of real recognition often fail to create a positive identity. This study draws on Honneth's recognition theory to identify factors that influence female civil engineers' sense of recognition. Over the past decade, a survey was created and distributed to 330 graduate students in the Department of Civil and Environmental Engineering at Iowa State University. Survey items include demographics, perceptions of a civil engineer's identity, and factors that influence recognition of a civil engineer's identity, such as opinions about society and family. Descriptive analysis of the survey responses revealed that perceptions of civil engineering varied significantly. The definitions of civil engineering provided by participants included the terms structure, design and infrastructure. Almost half of the participants said the main reason for studying civil engineering was their interest in the subject, and the majority said they were proud to be civil engineers. Many study participants reported that their parents viewed them as civil engineers. Institutional and operational treatment was also found to have a significant impact on the recognition of women civil engineers. Almost half of the participants reported feeling isolated or ignored at work because of their gender. This research highlights the importance of recognition in developing the identity of women engineers.
Keywords: civil engineering, gender, identity, recognition
Procedia PDF Downloads 63
1736 Surface Modified Quantum Dots for Nanophotonics, Stereolithography and Hybrid Systems for Biomedical Studies
Authors: Redouane Krini, Lutz Nuhn, Hicham El Mard, Cheol Woo Ha, Yoondeok Han, Kwang-Sup Lee, Dong-Yol Yang, Jinsoo Joo, Rudolf Zentel
Abstract:
To use quantum dots (QDs) in the two-photon initiated polymerization (TPIP) technique for 3D patterning, the QD surfaces were modified with photosensitive end groups able to undergo photopolymerization. We were able to fabricate fluorescent 3D lattice structures using photopatternable QDs by TPIP for photonic devices such as photonic crystals and metamaterials. QDs of different diameters have different emission colors, and by mixing RGB QDs, white-light fluorescence from the polymeric structures has been created. Metamaterials are capable of unique interactions with the electric and magnetic components of electromagnetic radiation, and for manipulating light it is crucial to have a negative refractive index. In combination with QDs via the TPIP technique, polymeric structures can be designed with properties which cannot be found in nature. This gives these artificial materials huge importance for real-life applications in photonics and optoelectronics. Understanding the interactions between nanoparticles and biological systems is of great interest in the biomedical research field. We developed a synthetic strategy for polymer-functionalized nanoparticles for biomedical studies, to obtain hybrid systems of QDs and copolymers with a strongly binding network in the inner shell, which can be further modified through their poly(ethylene glycol)-functionalized outer shell. These hybrid systems can be used as models for the investigation of cell penetration and drug delivery by combining cryo-TEM measurements and fluorescence studies.
Keywords: biomedical study models, lithography, photo induced polymerization, quantum dots
Procedia PDF Downloads 527
1735 Analysis of the Vibration Behavior of a Small-Scale Wind Turbine Blade under Johannesburg Wind Speed
Authors: Tolulope Babawarun, Harry Ngwangwa
Abstract:
The wind turbine blade may sustain structural damage from external loads such as high winds or collisions, which could compromise its aerodynamic efficiency. Under these conditions the blade vibrates at significant intensities and amplitudes, and the effect of these vibrations on the dynamic flow field surrounding the blade changes the forces acting on it. The structural dynamic analysis of a small wind turbine blade is considered in this study. It entails creating a finite element model, validating the model, and performing structural analysis on the verified finite element model. The analysis is based on the structural response of a small-scale wind turbine blade to various loading sources. Although there are many small-scale offshore wind turbine systems in use, only preliminary structural analysis is performed during the design phases; the performance of these systems under the various loading conditions encountered in real-world situations has not been properly researched. This analysis records the equivalent von Mises stress and deformation that the blade undergoes. Under the various loading scenarios studied, the stress contours were found to be concentrated near the mid-span of the blade. The highest stress that the blade underwent in this study is within the range that the blade material can withstand; the maximum allowable stress of the blade material is 1,770 MPa. The deformation of the blade was highest at the blade tip. The critical speed of the blade was determined to be 4.3 rpm, with a rotor speed range of 0 to 608 rpm. The blade's mode shapes under loading conditions indicate bending modes, the most prevalent of which is flapwise bending.
Keywords: ANSYS, finite element analysis, static loading, dynamic analysis
Procedia PDF Downloads 87
1734 Effect of Packaging Material and Water-Based Solutions on Performance of Radio Frequency Identification for Food Packaging Applications
Authors: Amelia Frickey, Timothy (TJ) Sheridan, Angelica Rossi, Bahar Aliakbarian
Abstract:
The growth of large food supply chains has demanded improved end-to-end traceability of food products, which has led to companies being increasingly interested in using smart technologies such as Radio Frequency Identification (RFID)-enabled packaging to track items. As the technology becomes widely used, there are several technological and economic issues that must be overcome to facilitate the adoption of this track-and-trace technology. One of the technological challenges of RFID technology is its sensitivity to different environmental form factors, including packaging materials and the content of the packaging. Although researchers have assessed the performance loss due to the proximity of water and aqueous solutions, there is still a need to further investigate the impact of food products on the reading range of RFID tags. However, to the best of our knowledge, there are not enough studies to determine the correlation between RFID tag performance and food and beverage properties. The goal of this project was to investigate the effect of solution properties (pH and conductivity) and of different packaging materials filled with food-like water-based solutions on the performance of an RFID tag. Three commercially available ultra-high-frequency RFID tags were placed on three different bottles, which were filled with different concentrations of water-based solutions, including sodium chloride, citric acid, sucrose, and ethanol. Transparent glass, polyethylene terephthalate (PET), and Tetrapak® were used as packaging materials commonly found in the beverage industry. Tag readability (Theoretical Read Range, TRR) and sensitivity (Power on Tag Forward, PoF) were determined using an anechoic chamber. First, the best place to attach the tag for each packaging material was investigated using empty and water-filled bottles. Then, the bottles were filled with the food-like solutions and tested with the three different tags, and the PoF and TRR were determined at a fixed frequency of 915 MHz. In parallel, the pH and conductivity of the solutions were measured. The best-performing tag was then selected to test the bottles filled with wine, orange juice, and apple juice. Although the various solutions altered the performance of each tag, the change in tag performance had no correlation with the pH or conductivity of the solution. Additionally, the packaging material played a significant role in tag performance, and each tag tested performed optimally under different conditions. This study is the first part of comprehensive research to determine a regression model for the prediction of tag performance based on the packaging material and its content. More investigations, including more tags and food products, are needed to develop a robust regression model. The results of this study can be used by RFID tag manufacturers to design suitable tags for specific products with similar properties.
Keywords: smart food packaging, supply chain management, food waste, radio frequency identification
Procedia PDF Downloads 114
1733 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro
Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich
Abstract:
Tissue engineered (TE) heart valves based on degradable electrospun fiber scaffold represent a promising approach to overcome the known limitations of mechanical or biological prostheses. But the mechanical stress in the high-pressure system of the human circulation is a severe challenge for the delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. Therefore, this study investigates whether long-term testing in full blood provides more meaningful results regarding the degradation behavior than conventional tests in simulated body fluids (SBF) or Phosphate Buffered Saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximum physiological degradation environment utilizing a test set-up with porcine full blood was established. The set-up consists of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO2, pCO2, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare conventional degradation media with the novel full blood setting. The polymer's degradation is quantified by SEM picture analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded slower in full blood than in SBF and PBS. The uptake of water is more pronounced in the full blood group. Also, PCL preserved its mechanical integrity longer when degraded in full blood. Protein absorption increased during the degradation process. Red blood cells, platelets, and their aggregates adhered on the PCL. Presumably, the degradation led to a more hydrophilic polymeric surface which promoted the protein adsorption and the blood cell adhesion. Testing degradable implants in full blood allows for developing more reliable scaffold materials in the future. Material tests in small and large animal trials thereby can be focused on testing candidates that have proven to function well in an in-vivo-like setting.
Keywords: electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue engineered aortic heart valve
Procedia PDF Downloads 150
1732 Perception Towards Using E-learning with STEM Students Whose Programs Require Them to Attend Practical Sections in Laboratories during Covid-19
Authors: Youssef A. Yakoub, Ramy M. Shaaban
Abstract:
COVID-19 has changed and affected the whole world dramatically, in a way that the entire world, even scientists, had not imagined before. Educational institutions around the world have been fighting since COVID-19 hit last December to keep the educational process unchanged for all students. E-learning was a must for almost all US universities during the pandemic, and it was especially challenging to use eLearning instead of regular classes with students whose programs include practical education. The aim of this study is to examine the perceptions of STEM students towards using eLearning instead of traditional methods during their practical study. Focus groups of STEM students studying at a mid-sized western Pennsylvanian university were interviewed. Semi-structured interviews were designed to gain insight into students' perceptions of the alternative educational methods they had used in the previous seven months. Using convenience sampling, four students were chosen from different STEM fields: physics, technology, electrical engineering, and mathematics. The interviews were primarily about the extent to which these students were satisfied and their educational needs were met through distance education during the pandemic. The interviewed students were generally able to perform satisfactorily in their virtual classes, but they were not satisfied with the learning methods. The main challenges they faced included the inability to gain real practical experience, insufficient materials posted by the faculty, and some technical problems associated with their study. However, they reported that they were satisfied with the simulation programs they had and that these simulations provided a good alternative to their traditional practical education. In conclusion, this study highlights the challenges students faced during the pandemic. It also highlights the various learning tools students see as good alternatives to their traditional education.
Keywords: eLearning, STEM education, COVID-19 crisis, online practical training
Procedia PDF Downloads 134
1731 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in a ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and are a remedy for arthritis and various disorders. The present study applies a supercritical fluid extraction (SFE) approach to hemp seed at various parameter conditions: temperature 40–80 °C, pressure 200–350 bar, flow rate 5–15 g/min, particle size 0.430–1.015 mm and co-solvent amount 0–10% of the solvent flow rate, arranged through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of samples by resampling from the original dataset and analyze these samples to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement; here the jackknife sample size is therefore 31 (eliminating one observation), and this is repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample; for bootstrap resampling the sample size is 32, repeated 100 times. The estimands considered for these resampling techniques are the mean, standard deviation, coefficient of variation and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, which is the average (central value) of the sample means over all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 through both resampling methods. Variance reflects the spread of the data around its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approx. 1%), low standard error of the mean (< 0.8) and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All estimator values of the coefficient of variation, standard deviation and standard error of the mean are found within the 95% confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
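A minimal sketch of the jackknife and bootstrap procedures described above is given below; the 32 concentration values are randomly generated placeholders, not the measured hemp-oil data.

```python
# Jackknife (leave-one-out, 32 repeats) and bootstrap (with replacement, 100 repeats)
# estimates of mean, standard deviation, coefficient of variation, and SEM.
import numpy as np

rng = np.random.default_rng(42)
data = rng.uniform(48.85, 63.66, size=32)      # hypothetical omega-6 concentrations, %

def summarize(x):
    m, s = x.mean(), x.std(ddof=1)
    return {"mean": m, "std": s, "cv": s / m, "sem": s / np.sqrt(len(x))}

# Jackknife: drop one observation (sample size 31), repeated 32 times.
jack = [summarize(np.delete(data, i)) for i in range(len(data))]
jack_mean = np.mean([d["mean"] for d in jack])

# Bootstrap: resample 32 observations with replacement, repeated 100 times.
boot = [summarize(rng.choice(data, size=len(data), replace=True)) for _ in range(100)]
boot_mean = np.mean([d["mean"] for d in boot])
boot_sem = np.std([d["mean"] for d in boot], ddof=1)   # spread of the resampled means

print(jack_mean, boot_mean, boot_sem)
```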
Procedia PDF Downloads 141
1730 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings
Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti
Abstract:
Railway crossings are complex entities whose optimal management cannot be achieved without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (Railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety at railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts to drivers via variable message signs and suggestions for alternative routes, towards a more sustainable, eco-friendly and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially designed radar sensors for obstacle detection to smart ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units that forecast train and vehicular traffic according to adaptive models are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed. Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety
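To make the System-of-Systems description above more concrete, here is a brief, hypothetical sketch of how the components named in the abstract (radar obstacle sensors, smart camera networks, traffic forecasting units and variable message signs) might be composed in one monitoring cycle. All class names, fields and thresholds are illustrative assumptions, not part of the SIMPLE implementation.

```python
from dataclasses import dataclass

# Hypothetical component interfaces sketching the modular System-of-Systems
# layout described in the abstract; names and values are illustrative only.

@dataclass
class RadarObstacleSensor:
    crossing_id: str
    def obstacle_detected(self) -> bool:
        # In a real deployment this would query the radar unit at the crossing.
        return False

@dataclass
class TrafficCameraNetwork:
    crossing_id: str
    def vehicle_flow(self) -> int:
        # Vehicles per minute estimated by the smart camera network.
        return 12

@dataclass
class TrafficForecaster:
    def expected_closure_minutes(self, vehicle_flow: int) -> float:
        # Adaptive model combining train timetable and road traffic data.
        return 4.0 + 0.1 * vehicle_flow

@dataclass
class VariableMessageSign:
    location: str
    def publish(self, message: str) -> None:
        print(f"[{self.location}] {message}")

def level_crossing_cycle(radar, cameras, forecaster, sign) -> None:
    """One monitoring cycle: check safety first, then publish traffic advice."""
    if radar.obstacle_detected():
        sign.publish("Obstacle on crossing: trains alerted, road closed")
        return
    wait = forecaster.expected_closure_minutes(cameras.vehicle_flow())
    if wait > 5:
        sign.publish(f"Crossing closed ~{wait:.0f} min: consider alternative route")

level_crossing_cycle(
    RadarObstacleSensor("LC-01"), TrafficCameraNetwork("LC-01"),
    TrafficForecaster(), VariableMessageSign("Via Roma"),
)
```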
Procedia PDF Downloads 497
1729 Forecasting Residential Water Consumption in Hamilton, New Zealand
Authors: Farnaz Farhangi
Abstract:
Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand's fastest growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea of how much water they use. Hence, one of the proposed objectives of this research is forecasting water consumption using different methods. The residential water consumption time series exhibits seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is combined with the seasonal pattern and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better forecasting results, while other researchers argue that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, each with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the results of the combined method (hybrid model) and the individual methods in terms of accuracy and robustness. In order to use ARIMA, the data should be stationary. ANN, in turn, has had successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the methods; because water demand is dominated by different seasonalities, I combine different methods in order to examine their sensitivity to weather conditions, calendar effects and other seasonal patterns. The advantage of this combination is the reduction of errors by averaging the forecasts of the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions by the different methods vary. ANN produces more accurate forecasts than the other methods, and preprocessing is essential when working with seasonal time series. Using the hybrid model reduces average forecasting errors and improves performance. Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model
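As an illustration of the hybrid idea described above, the sketch below averages the forecasts of a seasonal ARIMA model and a small neural network trained on lagged values. It uses a synthetic daily series with weekly seasonality as a stand-in for the Hamilton data (which is not reproduced here), and a single weekly seasonal period as a simplification of the double seasonal ARIMA used in the study; the model orders, lag length and network size are assumptions chosen only for demonstration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

# Placeholder series: synthetic daily consumption with weekly seasonality,
# standing in for the Hamilton data (Jan 2000 - Jul 2015).
idx = pd.date_range("2014-01-01", periods=730, freq="D")
rng = np.random.default_rng(1)
y = pd.Series(200 + 20 * np.sin(2 * np.pi * idx.dayofweek / 7)
              + rng.normal(0, 5, len(idx)), index=idx)

train, test = y[:-28], y[-28:]
h = len(test)

# Model 1: seasonal ARIMA with a weekly period (a simplification of the
# double seasonal ARIMA used in the study).
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
f_sarima = sarima.forecast(h)

# Model 2: a small ANN on lagged observations (lags 1..14).
lags = 14
X = np.column_stack([train.shift(k).values for k in range(1, lags + 1)])[lags:]
t = train.values[lags:]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, t)

# Recursive multi-step ANN forecast: feed each prediction back as the next lag.
hist = list(train.values)
f_ann = []
for _ in range(h):
    x_new = np.array(hist[-lags:][::-1]).reshape(1, -1)
    pred = ann.predict(x_new)[0]
    f_ann.append(pred)
    hist.append(pred)
f_ann = pd.Series(f_ann, index=test.index)

# Hybrid: simple average of the two forecasts, which averages out their errors.
f_hybrid = (f_sarima.values + f_ann.values) / 2

for name, f in (("SARIMA", f_sarima.values), ("ANN", f_ann.values), ("hybrid", f_hybrid)):
    mae = np.abs(f - test.values).mean()
    print(f"{name}: MAE = {mae:.2f}")
```

The simple average in the last step is the weakest form of combination; weighted averages based on each model's validation error are a common refinement.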
Procedia PDF Downloads 337
1728 Label Free Detection of Small Molecules Using Surface-Enhanced Raman Spectroscopy with Gold Nanoparticles Synthesized with Various Capping Agents
Authors: Zahra Khan
Abstract:
Surface-Enhanced Raman Spectroscopy (SERS) has received increased attention in recent years, particularly for biological and medical applications, due to its great sensitivity as well as molecular specificity. In the context of biological samples, there are generally two methodologies for SERS-based applications: label-free detection and the use of SERS tags. The necessity of tagging can make the process slower and limits its use in real-life settings. Label-free detection offers the advantage of reporting direct spectroscopic evidence associated with the target molecule rather than the label. Reproducible, highly monodisperse gold nanoparticles (Au NPs) were synthesized using a relatively facile seed-mediated growth method. Different capping agents (TRIS, citrate, and CTAB) were used during synthesis, and characterization was performed. The particles were then mixed with different analyte solutions before drop-casting onto a glass slide prior to Raman measurements, to determine which NPs displayed the highest SERS activity as well as their stability. A range of analytes was tested, both non-biomolecules and biomolecules, all of which were successfully detected using this method at concentrations as low as 10⁻³ M, with salicylic acid reaching a detection limit in the nanomolar range. SERS was also performed on samples containing a mixture of analytes, whereby peaks from both target molecules were distinctly observed. This is a fast and effective way of testing samples and offers potential applications in the biomedical field as a tool for diagnostic and treatment purposes. Keywords: gold nanoparticles, label free, seed-mediated growth, SERS
Procedia PDF Downloads 125
1727 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System
Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock
Abstract:
The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system. It is based on transcription-mediated amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for detecting HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas' Hospital, London, were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the Roche assay. This was not due to a lack of specificity of the Aptima assay, because the assay gave 99.83% specificity when testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, a side-by-side comparison of low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes was carried out in both assays. The Aptima assay was more sensitive than the Roche assay. The good sensitivity, specificity and agreement with other commercial assays make the HIV-1 Quant Dx Assay appropriate for both viral load monitoring and detection of HIV-1 infections. Keywords: HIV viral load, Aptima, Roche, Panther system
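As an illustration of the kind of method comparison summarized above, the short sketch below fits an ordinary least-squares line to paired log10 viral load results and computes specificity from known-negative specimens. The paired values are simulated placeholders (not the study's data), and the false-positive count is a hypothetical figure chosen only to show the calculation; it should not be read as the assay's actual result breakdown.

```python
import numpy as np

# Placeholder paired log10 viral load results (copies/mL) for specimens
# quantified by both assays; illustrative values only, not study data.
rng = np.random.default_rng(2)
log_roche = rng.uniform(1.5, 6.5, size=234)
log_aptima = 1.04 * log_roche - 0.097 + rng.normal(0, 0.15, size=234)

# Ordinary least-squares fit of Aptima vs. Roche log10 results, analogous
# to the slope/intercept comparison reported in the abstract.
slope, intercept = np.polyfit(log_roche, log_aptima, deg=1)
print(f"slope = {slope:.2f}, intercept = {intercept:.3f}")

# Specificity on known-negative specimens: true negatives / all negatives.
negatives_tested = 600
false_positives = 1          # hypothetical count yielding ~99.83% specificity
specificity = 100 * (negatives_tested - false_positives) / negatives_tested
print(f"specificity = {specificity:.2f}%")
```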
Procedia PDF Downloads 375