Search results for: optimized closed polygonal segment method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20669

12269 Determination of Rare Earth Element Patterns in Uranium Matrix for Nuclear Forensics Application: Method Development for Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Measurements

Authors: Bernadett Henn, Katalin Tálos, Éva Kováss Széles

Abstract:

Over the last 50 years, the worldwide spread of nuclear techniques has created new problems for the environment and for human life. Nowadays, owing to the increasing risk of terrorism worldwide, the potential occurrence of terrorist attacks using weapons of mass destruction containing radioactive or nuclear materials, e.g. dirty bombs, is a real threat. Uranium pellets, for instance, are one of the nuclear materials suitable for making such weapons. Nuclear forensics focuses mainly on determining the origin of confiscated or found nuclear and other radioactive materials that could be used to make a radioactive dispersal device. One of the most important signatures in nuclear forensics for tracing the origin of a material is the rare earth element (REE) pattern in the seized or found radioactive or nuclear sample. The concentration and the normalized pattern of the REE can serve as evidence of uranium origin. The REE comprise the fourteen lanthanides plus scandium and yttrium, which mostly occur together and at very low concentrations in uranium pellets. The difficulties of REE determination by ICP-MS are the uranium matrix (high uranium concentration) and the interferences among the lanthanides. In this work, our first aim was to develop an effective chemical sample preparation process, based on extraction chromatography, to separate the uranium matrix and the rare earth elements from each other, following and modifying procedures found in the literature. Our second aim was to optimize the ICP-MS measurement of REE concentrations. During method development, a REE model solution was first tested on two types of extraction chromatographic resins (LN® and TRU®) in different acidic media to examine the separation of the lanthanides.
Uranium matrix was then added to the model solution and tested under the same conditions. The methods were validated using REE UOC (uranium ore concentrate) reference materials, and samples were analyzed by sector field mass spectrometry (ICP-SFMS).
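As a numerical illustration of the "normalized pattern" mentioned above, measured REE concentrations are typically divided element-by-element by a chondrite reference. This is a minimal sketch: the sample values are hypothetical, and the chondrite values are approximate CI-chondrite figures that should be replaced by a published reference set.

```python
# Sketch of REE pattern normalization as used in nuclear forensics.
# Sample concentrations are hypothetical; chondrite values are approximate
# CI-chondrite figures (ng/g), included for illustration only.
measured_ppb = {"La": 12.0, "Ce": 30.0, "Nd": 14.0, "Sm": 4.0}
chondrite_ppb = {"La": 237.0, "Ce": 613.0, "Nd": 457.0, "Sm": 148.0}

# Element-by-element ratio: the shape of this normalized pattern, not the
# absolute concentrations, is compared between samples of suspected origin.
normalized = {el: measured_ppb[el] / chondrite_ppb[el] for el in measured_ppb}
```

Plotting `normalized` against atomic number gives the characteristic REE pattern used as an origin signature.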

Keywords: extraction chromatography, nuclear forensics, rare earth elements, uranium

Procedia PDF Downloads 290
12268 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle

Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito

Abstract:

Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them highly efficient and low in direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it can also manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are State of Charge (SoC), State of Health (SoH), and State of Power (SoP). These parameters can be evaluated in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable: moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations.
In this article, a hybrid approach based on a neural network and a statistical method is proposed for real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% to 50%), (ii) partial charge (SoC 50% to 80%), (iii) deep discharge (SoC 80% to 30%), and (iv) full charge (SoC 30% to 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with a reference method from the literature.
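The two error metrics used to evaluate the model can be stated compactly. A minimal sketch with hypothetical SoC traces (the paper's data are not reproduced here):

```python
import math

def rmse(actual, predicted):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error; actual values must be non-zero."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

soc_true = [0.95, 0.80, 0.62, 0.45]   # hypothetical true SoC trace
soc_est = [0.94, 0.82, 0.60, 0.46]    # hypothetical estimator output
print(rmse(soc_true, soc_est), mape(soc_true, soc_est))
```

The same two functions apply unchanged to SoH and SoP traces.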

Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks

Procedia PDF Downloads 53
12267 An Ancient Rule for Constructing Dodecagonal Quasi-Periodic Formations

Authors: Rima A. Ajlouni

Abstract:

The discovery of quasi-periodic structures in material science is revealing an exciting new class of symmetries, which has never been explored before. Due to their unique structural and visual properties, these symmetries are drawing interest from many scientific and design disciplines. Especially in art and architecture, these symmetries can provide a rich source of geometry for exploring new patterns, forms, systems, and structures. However, the structural systems of these complicated symmetries still pose a perplexing challenge: while much of their local order has been explored, the global governing system remains unresolved. Understanding their unique global long-range order is essential to their generation and application. The recent discovery of dodecagonal quasi-periodic patterns in historical Islamic architecture is generating renewed interest in understanding the mathematical principles of traditional Islamic geometry. Astonishingly, many centuries before their description in modern science, ancient artists, using the most primitive tools (a compass and a straightedge), were able to construct patterns with quasi-periodic formations. These patterns can be found all over the ancient Islamic world, and many exhibit formations with 5-, 8-, 10- and 12-fold quasi-periodic symmetries. Based on the examination of these historical patterns, and derived from the generating principles of Islamic geometry, a global multi-level structural model is presented that describes the global long-range order of dodecagonal quasi-periodic formations in Islamic architecture. Furthermore, this method is used to construct new quasi-periodic tiling systems and to generate their deflation and inflation rules. The method can serve as a general guiding principle for constructing infinite patches of dodecagon-based quasi-periodic formations, without the need for local strategies (tiling, matching, grid, substitution, etc.) or complicated mathematics, providing an easy tool for scientists, mathematicians, teachers, designers, and artists to generate and study a wide range of dodecagonal quasi-periodic formations.
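As a minimal numerical illustration of the 12-fold seed underlying such formations (this is only the compass-and-straightedge analogue of placing 12 equally spaced points on a circle, not the construction rule presented in the paper):

```python
import math

def dodecagon_vertices(cx=0.0, cy=0.0, r=1.0):
    """Vertices of a regular dodecagon: 12 equally spaced points on a circle
    of radius r centered at (cx, cy), the 12-fold symmetric seed from which
    dodecagon-based patterns are grown."""
    return [(cx + r * math.cos(k * math.pi / 6), cy + r * math.sin(k * math.pi / 6))
            for k in range(12)]

verts = dodecagon_vertices()
# Every edge of a regular dodecagon with circumradius r has length 2*r*sin(pi/12).
```

Repeating and scaling such seeds according to deflation/inflation rules is what produces the quasi-periodic global order discussed above.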

Keywords: dodecagonal, Islamic architecture, long-range order, quasi-periodic

Procedia PDF Downloads 392
12266 Community Development and Empowerment

Authors: Shahin Marjan Nanaje

Abstract:

The present century confronts social workers with complicated issues in their areas of work. Concentrating on bringing change to the lives of those who live on the margins or in poverty has meant that practitioners have often forgotten to look at themselves and to change the way they address issues. There appears to be a new area of need to which social workers should respond: dialogue and collaboration. To address the issues and needs of a community, both individually and as a group, a new method of dialogue is needed as a tool for reaching collaboration. The social worker, as a link between community, organizations, and government, plays multiple roles and needs new communication skills to convey the narratives of the community to organizations and government, and vice versa; this concerns not only language but a change in the nature of the dialogue itself. Migration of job seekers to the big cities for survival has created its own issues and difficulties, and therefore new needs. Collaboration is required not only between the government and non-government sectors but also, in a new way, among government, non-government organizations, and communities; reaching it requires healthy, productive, and meaningful dialogue, with no hierarchy among the members of the collaboration. The methodology selected by the researcher focused on observation in the first place and used a questionnaire in the second. The research lasted three months and included home visits, group discussions, and communal narrations, which provided enough evidence to understand the real needs of the community. The randomly selected sample comprised 70 immigrant families working as sweepers in a slum community in Bangalore, Karnataka.
The results reveal a gap between what a community is and what organizations, government, and members of society outside this community think about it. Consequently, it was learnt that to supply any service or bring any change to a slum community, new skills of dialogue and mutual understanding must be applied before any services are provided. Also, to bring change to the lives of marginalized groups at large, collaboration is needed, as their challenges are collective and must be addressed by different groups together. The outcome of the research helped the researcher identify the need for a new method of dialogue and collaboration, as well as a framework for both, which are the main focus of the paper. The researcher drew on observation of ten NGOs and their activities to create the framework for dialogue and collaboration.

Keywords: collaboration, dialogue, community development, empowerment

Procedia PDF Downloads 571
12265 A Mixed-Method Exploration of the Interrelationship between Corporate Governance and Firm Performance

Authors: Chen Xiatong

Abstract:

The study explores the interrelationship between corporate governance factors and firm performance in Mainland China using a mixed-method approach, seeking to clarify the current effectiveness of corporate governance, uncover the complex interrelationships between governance factors and firm performance, and enhance understanding of corporate governance strategies in Mainland China. Quantitative data will be gathered through surveys and sampling, focusing on governance factors and firm performance indicators, and analyzed using statistical, mathematical, and computational techniques. Qualitative data will be collected through policy research, case studies, and interviews with staff members, and analyzed through thematic analysis and interpretation of policy documents, case-study findings, and interview responses. The study addresses the effectiveness of corporate governance in Mainland China, the interrelationship between governance factors and firm performance, and staff members' perceptions of corporate governance strategies, and it provides suggestions for companies to enhance their governance practices.
The research aims to enhance understanding of corporate governance effectiveness, enrich the literature on governance practices, and contribute to the fields of business management and human resources management in Mainland China.

Keywords: corporate governance, business management, human resources management, board of directors

Procedia PDF Downloads 44
12264 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany

Authors: Karin Schakib-Ekbatan, Annette Roser

Abstract:

According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is to be refurbished on the basis of a socially compatible, energy-saving, technically innovative modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a Scientific Accompanying Research programme was established for cross-project analyses of findings and results, in order to identify further research needs and trends. The project is thus characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise, based on a participatory understanding of research that involves the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. Wall heating provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, an advantage of wall heating systems is energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (around a perceived room temperature of 18 °C, i.e. 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g. infrared heating). The new heating system would affect the furnishing of the walls, in that the wall surface could not be covered too much with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households and analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within one’s own walls (not having sufficient individual control over the room temperature, or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project, and they expressed a great need for information events. The results of the interviews will be used for project-internal discussions on technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households of the neighbourhood.

Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings

Procedia PDF Downloads 114
12263 Optimization of the Energy Consumption of the Pottery Kilns by the Use of Heat Exchanger as Recovery System and Modeling of Heat Transfer by Conduction Through the Walls of the Furnace

Authors: Maha Bakakri, Rachid Tadili, Fatiha Lemmini

Abstract:

Morocco is one of the few countries that have kept their traditional crafts, despite the competition of modern industry and its impact on manual labor. The optimization of energy consumption therefore becomes an obligation, and this is the purpose of this paper. In this work we present some characteristics of the furnace studied, its operating principle, and experimental measurements of the evolution of the temperatures inside and outside its walls, values which are used later in the calculation of its thermal losses. In order to determine the major source of the thermal losses of the furnace, we established its heat balance. The energy consumed, the useful energy, and the thermal losses through the walls and the chimney of the furnace are calculated from the experimental measurements carried out over several firings. The results show that the energy consumption of this type of furnace is very high and that the main source of energy loss is the heat carried away by the combustion gases escaping through the chimney, while the losses through the walls are relatively small. We have therefore opted for energy recovery as a solution: part of the lost heat can be recovered with a heat exchanger system using a double tube introduced into the flue gas exhaust stack. The heat recovery system is presented and the heat balance inside the exchanger is established. In this paper we also present the numerical modeling of heat transfer by conduction through the walls of the furnace. A numerical model based on the finite volume method and the double-scan method has been established. It makes it possible to determine the temperature profile of the furnace, to calculate the thermal losses of its walls, and to deduce the thermal losses due to the combustion gases. Validation of the model is done using the experimental measurements carried out on the furnace.
The results obtained in this work, relating to the energy consumed during the operation of the furnace, are significant and fall within the energy-efficiency framework that has become a key element of global energy policies. Energy efficiency is the fastest and cheapest way to address energy security, environmental, and economic problems.
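The conduction model through the furnace walls can be illustrated with a simplified one-dimensional explicit scheme. The paper uses a finite-volume model with the double-scan method; this sketch uses a plain explicit finite-difference update, and the grid size, time step, diffusivity, and boundary temperatures below are illustrative values, not the paper's data.

```python
def wall_temperature_profile(t_inner, t_outer, n=20, alpha=1e-6, dx=0.01,
                             dt=10.0, steps=5000):
    """Explicit 1-D transient conduction through a wall of n nodes spaced dx
    apart, with fixed inner/outer surface temperatures. alpha is thermal
    diffusivity (m^2/s). Stability requires alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for these parameters"
    T = [t_inner] + [t_outer] * (n - 1)   # initial state, boundaries fixed
    for _ in range(steps):
        T = ([T[0]]
             + [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1]) for i in range(1, n - 1)]
             + [T[-1]])
    return T
```

Once the profile is (near) steady, the wall loss follows from the surface gradient via Fourier's law, q = -k dT/dx.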

Keywords: energy consumption, energy recovery, modeling, energy efficiency

Procedia PDF Downloads 54
12262 Synchronization of a Perturbed Satellite Attitude Motion using Active Sliding Mode Controller

Authors: Djaouida Sadaoui

Abstract:

In this paper, the design procedure of the active sliding mode controller, a combination of an active controller and a sliding mode controller, is given first, and then the problem of synchronizing two satellite attitude systems is discussed for the proposed method. Finally, numerical results are presented to evaluate the robustness and effectiveness of the proposed control strategy.
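A minimal sketch of the active sliding mode idea on a toy second-order system (this is not the satellite attitude dynamics of the paper; the gains, the oscillator dynamics, and the tanh smoothing of the sign function are illustrative choices):

```python
import math

def simulate_sync(k=10.0, lam=2.0, dt=1e-3, steps=8000):
    """Toy synchronization of two scalar second-order systems by an active
    sliding mode controller. Sliding surface s = de + lam*e; the sign
    function is smoothed with tanh to limit chattering."""
    x1, v1 = 1.0, 0.0    # master: undamped oscillator x'' = -x
    x2, v2 = -0.5, 0.3   # slave, perturbed initial state
    for _ in range(steps):
        e, de = x2 - x1, v2 - v1
        s = de + lam * e
        u = -k * math.tanh(s / 0.01)   # active + sliding-mode control input
        a1 = -x1
        a2 = -x2 + u                   # slave dynamics with control
        x1, v1 = x1 + v1 * dt, v1 + a1 * dt
        x2, v2 = x2 + v2 * dt, v2 + a2 * dt
    return abs(x2 - x1)                # residual synchronization error
```

Once the state reaches the surface s = 0, the error obeys de = -lam*e and decays exponentially, which is the mechanism exploited by the controller in the paper.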

Keywords: active control, sliding mode control, synchronization, satellite attitude

Procedia PDF Downloads 475
12261 In vitro Antioxidant Activity and Total Phenolic Content of Dillenia indica and Garcinia penducalata, Commonly Used Fruits in Assamese Cuisine

Authors: M. Das, B. P. Sarma, G. Ahmed

Abstract:

The human diet can be a major source of antioxidants. Polyphenols, organic compounds present in the regular human diet, have good antioxidant properties. Most diseases are detected too late, after they have caused irreversible damage to the body; food that is a natural source of antioxidants can therefore prevent free radicals from damaging body tissues. Dillenia indica and Garcinia penducalata are two major fruits easily available in Assam, a north-eastern Indian state. In the present study, the in vitro antioxidant properties of the fruits of these plants are compared, as decoctions of these fruits form a major part of Assamese cuisine. DPPH free radical scavenging activity of the methanol, petroleum ether, and water extracts of G. penducalata and D. indica fruits was assayed by the method of Cotelle et al. (1996): different concentrations of the extracts, ranging from 10–110 µg/ml, were added to 100 µM DPPH (2,2-diphenyl-1-picrylhydrazyl), and the absorbance was read at 517 nm after incubation, with ascorbic acid as the standard. For nitric oxide scavenging, different concentrations of the methanol, petroleum ether, and water extracts of G. penducalata and D. indica fruits were mixed with sodium nitroprusside and incubated; Griess reagent was added to the mixtures and their optical density was read at 546 nm, following the method of Marcocci et al. (1994), again with ascorbic acid as the standard. Scavenging activity against hydroxyl radicals was measured by the method of Kunchandy & Ohkawa (1990), and the superoxide scavenging activity of the extracts was determined by the method of Robak & Gryglewski (1998). Six replicates were maintained in each experiment and their SEM was evaluated, based on which non-linear regression (curve fit, exponential growth) was used to calculate the IC50 values of the SAWE and standard compounds.
All statistical analyses were done using the paired t-test. The hydroxyl radical scavenging activities of the various extracts of D. indica exhibited IC50 values < 110 µg/ml, whereas the scavenging activity of the extracts of G. penducalata was, surprisingly, > 110 µg/ml. Similarly, the oxygen free radical scavenging activities of the different extracts of D. indica exhibited IC50 values < 110 µg/ml, with the methanolic extract showing better free radical scavenging activity than vitamin C. The DPPH scavenging activities of the various extracts of D. indica and G. penducalata were < 110 µg/ml, and again the methanolic extract of D. indica exhibited an IC50 value better than that of vitamin C. The higher phenolic content of the methanolic extract of D. indica might be one of the major causes of its enhanced in vitro antioxidant activity. The present study concludes that Dillenia indica and Garcinia penducalata both possess antioxidant activities, and that the antioxidant activity of Dillenia indica is superior to that of Garcinia penducalata owing to its higher phenolic content.
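The percentage-scavenging and IC50 calculations behind these comparisons can be sketched as follows. The dose-response numbers are hypothetical, and linear interpolation stands in for the exponential-growth curve fit used in the study:

```python
def percent_scavenging(a_control, a_sample):
    """DPPH assay: % scavenging from absorbances at 517 nm."""
    return 100.0 * (a_control - a_sample) / a_control

def ic50_linear(concs, inhibitions):
    """IC50 by linear interpolation between the two dose-response points
    bracketing 50 % inhibition; returns None if 50 % is never reached."""
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    return None

# Hypothetical dose-response: concentrations (ug/ml) vs % inhibition.
ic50 = ic50_linear([10, 30, 50, 70, 90, 110], [12, 28, 44, 58, 70, 79])
```

An extract whose inhibition never crosses 50 % within the tested range is exactly the "> 110 µg/ml" case reported above.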

Keywords: antioxidants, free radicals, phenolic, scavenging

Procedia PDF Downloads 580
12260 Analysis of Post-vaccination Immunity in Children with Severe Chronic Diseases Receiving Immunosuppressive Therapy by Specific IgG Antibodies Definition Method

Authors: Marina G. Galitskaya, Svetlana G. Makarova, Andrey P. Fisenko

Abstract:

Children on medication-induced immunosuppression are at high risk of developing severe courses of infectious diseases; preventive vaccination is therefore especially important for them. However, due to the immunosuppressive effects of the treatment of the underlying disease, the effectiveness of vaccination may fall below the protective level. In a multidisciplinary children's medical center, post-vaccination immunity was studied in 79 children aged 4-17 years. The children were divided into two groups: Group 1 (38 children) with kidney pathology (nephrotic syndrome) and Group 2 (41 children) with inflammatory bowel diseases (ulcerative colitis, Crohn's disease). Both groups had been vaccinated according to the national vaccination calendar and had received immunosuppressive therapy (prednisolone, methotrexate, cyclosporine, and other drugs) for at least 1 year. Using the enzyme-linked immunosorbent assay method, specific IgG antibodies to vaccine-preventable infections were determined: measles, rubella, mumps, diphtheria, pertussis, tetanus, and hepatitis B. The study showed the percentage of children with positive IgG values for each of these infections. The highest percentages of children with protective antibody levels were for measles (84.2% of children with nephrotic syndrome and 92.6% of those with inflammatory bowel disease) and rubella (71% and 80.4%, respectively). The lowest percentages with protective antibodies were for hepatitis B (5.2% and 29.2%, respectively). Antibodies to mumps, diphtheria, pertussis, and tetanus were found in only some of the children (from 39.4% to 82.9%); the remainder had no detectable IgG antibodies to these vaccine-preventable infections. Thus, despite previous vaccination, not all children retained antibodies to vaccine-preventable infections, and some remained unprotected by specific IgG antibodies.
A booster vaccine dose should be considered for children without contraindications to vaccination. Children receiving long-term immunosuppressive therapy require an individual approach to vaccination, including specific assessment of the immunity induced by the vaccination already performed.

Keywords: immunosuppressive therapy, inflammatory bowel diseases, nephrotic syndrome, post-vaccination immunity, specific antibodies, vaccine-preventable infections

Procedia PDF Downloads 19
12259 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach

Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira

Abstract:

Farmed fish welfare is a very recent concept, widely discussed in the scientific community. Consumers' interest in farmed-animal welfare standards has increased significantly in recent years, posing a huge challenge to producers, who must maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. A major bottleneck of standard aquaculture is that it can considerably impair fish welfare throughout the production cycle and, with it, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol, and of glucose and lactate, into the bloodstream, respectively, which are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve, and there will be a better understanding of the mechanisms and metabolic pathways involved in the welfare of the farmed organism. Due to its high economic importance in Portuguese aquaculture, gilthead seabream was the species elected for this study. Protein extracts from the muscle, liver, and plasma of gilthead seabream reared for a 3-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia) are collected and used to identify a putative protein fingerprint of fish welfare using a proteomics approach. Three tanks per condition and three biological replicates per tank are used for each analysis.
Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated by 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions are identified by mass spectrometry (LC-MS/MS) using the NCBInr database (taxonomic level: Actinopterygii) and the Mascot search engine. The statistical analysis is performed in the R software environment, using a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins were differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the pattern of genes expressed by RT-qPCR (quantitative reverse transcription polymerase chain reaction) with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling- and high-density-induced stress conditions are involved in several metabolic pathways, such as primary metabolism (i.e., glycolysis, gluconeogenesis), ammonia metabolism, cytoskeletal proteins, signaling proteins, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, is underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
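The Mann-Whitney comparison mentioned above rests on the rank-sum U statistic, which can be computed without any statistics library. A pure-Python sketch with midranks for ties (the p-value lookup, done in R in the study, is omitted here):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic (U1) for two independent samples.
    U1 counts the pairs (xi, yj) with xi > yj, with ties at half weight."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2.0   # midrank: average of ranks i+1..j
        i = j
    r1 = sum(ranks[v] for v in x)              # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2.0
```

Small U1 means the first sample (e.g., control spot intensities) tends to lie below the second, which is then converted to a one-tailed p-value against the U distribution.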

Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers

Procedia PDF Downloads 140
12258 Synthesis and Characterization of Cyclic PNC-28 Peptide, Residues 17–26 (ETFSDLWKLL), A Binding Domain of p53

Authors: Deepshikha Verma, V. N. Rajasekharan Pillai

Abstract:

The present study reports the synthesis of the cyclic PNC-28 peptide by the solid-phase peptide synthesis method. In the first step, we synthesize the linear PNC-28 peptide, and in the second step we cyclize it (N-to-C, or head-to-tail, cyclization). The molecular formula of the cyclic PNC-28 peptide is C64H88N12O16 and its m/z is ≈1233.64. The elemental analysis of cyclic PNC-28 is C, 59.99; H, 6.92; N, 13.12; O, 19.98. Characterization by LC-MS, CD, FT-IR, and 1H NMR confirmed the successful synthesis and cyclization of the linear PNC-28 peptide.
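The reported m/z can be cross-checked from the binding-domain sequence ETFSDLWKLL: head-to-tail cyclization releases one water, so the cyclic monoisotopic mass is simply the sum of standard residue masses. This is a sketch; the residue masses are standard monoisotopic values, and reading the reported m/z ≈ 1233.64 as the [M+H]+ ion is an assumption.

```python
# Monoisotopic residue masses (Da) for the amino acids present in ETFSDLWKLL.
RESIDUE = {"E": 129.04259, "T": 101.04768, "F": 147.06841, "S": 87.03203,
           "D": 115.02694, "L": 113.08406, "W": 186.07931, "K": 128.09496}

def cyclic_peptide_mass(seq):
    """Monoisotopic mass of a head-to-tail cyclic peptide: the sum of residue
    masses (a linear peptide would add one H2O, ~18.011 Da)."""
    return sum(RESIDUE[a] for a in seq)

m = cyclic_peptide_mass("ETFSDLWKLL")   # ~1232.64 Da
mh = m + 1.00728                        # assumed [M+H]+ ion seen in LC-MS
```

The computed [M+H]+ of ~1233.65 agrees with the reported m/z within ~0.01 Da.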

Keywords: CD, FTIR, 1HNMR, cyclic peptide

Procedia PDF Downloads 118
12257 Thermal Characterisation of Multi-Coated Lightweight Brake Rotors for Passenger Cars

Authors: Ankit Khurana

Abstract:

Sufficient heat storage capacity, or the ability to dissipate heat, is the most decisive parameter for the effective and efficient functioning of friction-based brake disc systems. The primary aim of the research was to analyse the effect of multiple coatings on the surface of lightweight disc rotors, which not only reduces the mass of the vehicle but also augments heat transfer. This research is intended to give the automobile fraternity an articulated view of the thermal aspects of a braking system. The results of the project indicate that, with the advent of modern coating technologies, the thermal limitations of a brake system can be removed and, together with forced convection, heat transfer processes can see a drastic improvement, leading to an increased lifetime of the brake rotor. Other advantages of modifying the surface of a lightweight rotor substrate are a reduced overall vehicle weight, a decreased risk of thermal brake failure (brake fade and fluid vaporization), longer component life, and lower noise and vibration characteristics. A mathematical model was constructed in MATLAB encompassing the various thermal characteristics of the proposed coatings and substrate materials, required to approximate the heat flux values in free and forced convection environments, resembling a real-time braking phenomenon; this could easily be carried over to a full-scale model of the alloy brake rotor in ABAQUS. The finite element model of the brake rotor was built in a constrained environment such that the nodal temperatures between the contact surfaces of the coatings and the substrate (wrought aluminium alloy) resemble an amalgamated solid brake rotor element. The initial results were obtained for a Plasma Electrolytic Oxidized (PEO) substrate, wherein the aluminium alloy gets a hard ceramic oxide layer grown on its transitional phase.
The rotor was modelled and then evaluated in real time for a constant-'g' braking event (based upon the mathematical heat flux input and convective surroundings), which revealed the necessity of depositing a sacrificial conducting coat above the PEO layer in order to inhibit premature thermal degradation of the barrier coating. A Taguchi study was then used to identify the critical factors that may influence the maximum operating temperature of a multi-coated brake disc by simulating brake tests: a) an Alpine descent lasting 50 seconds; b) an Autobahn stop lasting 3.53 seconds; c) six high-speed repeated stops in accordance with FMVSS 135, lasting 46.25 seconds. Thermal barrier coating thickness and vane heat transfer coefficient were the two most influential factors, and owing to their design and manufacturing constraints, a final optimized model was obtained which survived the six high-speed stop test as per the FMVSS 135 specifications. The simulation data highlighted the merits of preferring wrought aluminum alloy 7068 over grey cast iron and aluminum metal matrix composite, in coherence with the multiple coating depositions.
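As a rough illustration of the heat-flux input such a model requires, the braking power for a constant-deceleration stop can be estimated from vehicle kinetics. The sketch below is not the authors' MATLAB model; the vehicle mass, speed, rubbing area and front-axle share are illustrative assumptions.

```python
# Hypothetical heat-flux estimate for one front rotor during a constant-
# deceleration stop; all parameter values are illustrative, not from the paper.
def rotor_heat_flux(mass_kg, v0_ms, decel_ms2, rubbing_area_m2,
                    front_share=0.7, n_front_rotors=2):
    """Return (time_s, flux_W_m2) sampled over the stop."""
    t_stop = v0_ms / decel_ms2
    n = 100
    times, fluxes = [], []
    for i in range(n + 1):
        t = t_stop * i / n
        v = v0_ms - decel_ms2 * t            # vehicle speed at time t
        power = mass_kg * decel_ms2 * v      # total braking power (W)
        q = front_share * power / (n_front_rotors * rubbing_area_m2)
        times.append(t)
        fluxes.append(q)
    return times, fluxes
```

The flux peaks at the start of the stop and falls linearly to zero, which is the transient boundary condition a thermal finite element model of the rotor would consume.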

Keywords: lightweight brakes, surface modification, simulated braking, PEO, aluminum

Procedia PDF Downloads 396
12256 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud

Authors: N. Nalini, Bhanu Prakash Gopularam

Abstract:

The term data security refers to the degree of resistance or protection given to information against unintended or unauthorized access. The core principles of information security are confidentiality, integrity and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS and PaaS services. With cloud adoption, confidential enterprise data are moved from organization premises to an untrusted public network, and as a result the attack surface has increased manifold. Several cloud computing platforms like OpenStack, Eucalyptus and Amazon EC2 allow users to build and configure public, hybrid and private clouds. While traditional encryption based on PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (also referred to as ID-PKC) overcomes this problem by using publicly identifiable information for generating the keys, and works well with decentralized systems. Users can exchange information securely without having to manage any trust information. Another advantage is that access control information (e.g. a role-based access control policy) can be embedded into the data itself, unlike in PKI where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization, and has support for public key infrastructure for its services. In this paper, we explain the OpenStack security architecture and evaluate its PKI infrastructure with respect to data confidentiality. We provide a method to integrate ID-PKC schemes for securing data in transit and at rest, and explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1363.3 standard, and for secure communication with other cloud services.

Keywords: data confidentiality, identity based cryptography, secure communication, OpenStack Keystone, token scoping

Procedia PDF Downloads 365
12255 A Novel Approach of Secret Communication Using Douglas-Peucker Algorithm

Authors: R. Kiruthika, A. Kannan

Abstract:

Steganography is the problem of hiding secret messages in 'innocent-looking' public communication so that the presence of the secret message cannot be detected. This paper introduces steganographic security in terms of computational indistinguishability from a channel of probability distributions on cover messages. The method first splits the cover image into two separate blocks using the Douglas-Peucker algorithm. The text message and the image are then hidden in the Least Significant Bit (LSB) of the cover image.
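The Douglas-Peucker step referred to above can be sketched as follows. This is the standard polyline simplification algorithm, not the authors' image-splitting code; the point format and tolerance are illustrative.

```python
import math

def perpendicular_distance(p, a, b):
    # Distance from point p to the infinite line through a and b.
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Recursively drop points closer than epsilon to the chord joining
    the endpoints; the farthest point becomes the split location."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax <= epsilon:
        return [a, b]
    left = douglas_peucker(points[:idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right   # avoid duplicating the split point
```

In the paper's setting, the split index chosen by the farthest-point test is what partitions the cover image into two blocks before LSB embedding.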

Keywords: steganography, LSB, embedding, Douglas-Peucker algorithm

Procedia PDF Downloads 349
12254 An Approach to Wind Turbine Modeling for Increasing Its Efficiency

Authors: Rishikesh Dingari, Sai Kiran Dornala

Abstract:

In this paper, a simple method of achieving maximum power using a mechanical energy transmission device (METD) integrated with an induction generator is proposed. The functioning of the METD is explained, and the dynamic response of the system to a step input is plotted. The induction generator is operated in self-excited mode with an excitation capacitor at the stator. Voltage and current are observed when it is linked to the METD.

Keywords: mechanical energy transmitting device (METD), self-excited induction generator, wind turbine, hydraulic actuators

Procedia PDF Downloads 333
12253 Optical Flow Technique for Supersonic Jet Measurements

Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi

Abstract:

This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. The idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers, which are then processed by an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As implementation of the optical flow technique for supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Details of the methodology employed and the challenges faced will be elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
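A minimal Horn-Schunck iteration, of the kind such an optical flow algorithm adapts, can be sketched as follows. The derivative stencils, the wrap-around neighbourhood average via np.roll, and the parameter values are simplifications for illustration, not the authors' adapted algorithm.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.
    Returns per-pixel horizontal (u) and vertical (v) velocity fields."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = np.gradient(im1, axis=1)   # horizontal intensity gradient
    Iy = np.gradient(im1, axis=0)   # vertical intensity gradient
    It = im2 - im1                  # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def neighbour_avg(f):
        # 4-neighbour average; np.roll wraps at the border, which is
        # acceptable for this sketch when the motion is away from the edges.
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_avg, v_avg = neighbour_avg(u), neighbour_avg(v)
        # Jacobi update balancing brightness constancy against smoothness.
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v
```

Applied to a pair of Schlieren frames, the recovered (u, v) field is the dense flow estimate that replaces PIV correlation in this technique.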

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer

Procedia PDF Downloads 303
12252 A Novel Probabilistic Strategy for Modeling Photovoltaic Based Distributed Generators

Authors: Engy A. Mohamed, Y. G. Hegazy

Abstract:

This paper presents a novel algorithm for modeling photovoltaic based distributed generators for the purpose of optimal planning of distribution networks. The proposed algorithm utilizes the sequential Monte Carlo method in order to accurately capture the stochastic nature of photovoltaic based distributed generators. The proposed algorithm is implemented in the MATLAB environment, and the results obtained are presented and discussed.
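The Monte Carlo sampling at the heart of such an approach can be sketched as below. The Beta-distributed irradiance variability and the clear-sky output profile are common PV modeling assumptions for illustration, not details taken from the paper.

```python
import random

def simulate_pv_output(n_samples=1000, p_rated_kw=100.0):
    """Monte Carlo samples of hourly PV output over one day, using an
    assumed Beta variability model (shape parameters are illustrative)."""
    random.seed(42)  # reproducible sampling for the sketch
    # Illustrative clear-sky profile: fraction of rated output per hour.
    profile = [0.0] * 6 + [0.2, 0.4, 0.6, 0.8, 0.9, 1.0,
                           1.0, 0.9, 0.8, 0.6, 0.4, 0.2] + [0.0] * 6
    samples = []
    for _ in range(n_samples):
        day = []
        for frac in profile:
            # betavariate models cloud-induced variability around the profile
            s = random.betavariate(2.0, 2.0) if frac > 0 else 0.0
            day.append(p_rated_kw * frac * s)
        samples.append(day)
    return samples
```

The empirical distribution of these samples (e.g. its cumulative distribution function per hour) is what a planning study would feed into the distribution network model.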

Keywords: cumulative distribution function, distributed generation, Monte Carlo

Procedia PDF Downloads 571
12251 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years, decision making in sports, such as the choice of members for a game and in-game strategy based on analysis of accumulated sports data, has been widely attempted. In the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. An analysis method for real-time game play data is therefore needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, a decision which is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions for player replacement are whether or not the lineup should be changed, and whether or not a Small Ball lineup should be adopted. We therefore propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data, which indicates a player's contribution to the game, can be accumulated for each play and treated as a time series. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a model that predicts the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected accumulated NBA data from the 2019-2020 season and applied the method to actual basketball play data to verify the reliability of the proposed model.
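The combination of an RNN over the scoring time series with an NN over situation features can be sketched as a toy forward pass. All weight shapes, dimensions and feature choices below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def predict_score(score_series, situation, params):
    """Toy forward pass: an RNN summarises the per-play scoring time series,
    a one-layer NN encodes the current situation/lineup features, and a
    linear head predicts the score contribution of the lineup."""
    Wxh, Whh, Wf, Wo = params
    h = np.zeros(Wxh.shape[0])
    for x in score_series:                        # recurrent part over plays
        h = np.tanh(Wxh @ np.atleast_1d(float(x)) + Whh @ h)
    f = np.tanh(Wf @ situation)                   # situation/lineup encoder
    return float(Wo @ np.concatenate([h, f]))     # linear readout
```

Evaluating this prediction for each candidate lineup in the current situation, and keeping the highest-scoring one, is the decision rule the model supports.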

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 117
12250 Highway Waste Management in Zambia Policy Preparedness and Remedies: The Case of Great East Road

Authors: Floyd Misheck Mwanza, Paul Boniface Majura

Abstract:

The paper looked at highway/roadside waste generation, disposal and the consequent environmental impacts. The dramatic increase in vehicles and paved roads in Zambia in the recent past has given rise to the indiscriminate disposal of litter that now poses a threat to health and the environment. Primary data were generated by carrying out oral interviews and field observations for a holistic and in-depth assessment of the environment, while secondary data were obtained by the desk review method; information on the effects of roadside wastes on the environment was obtained from relevant literature. The interviews were semi-structured, and a purposive sampling method was adopted and analyzed descriptively. The findings showed that population growth and unplanned road expansion have exceeded expected limits in recent times, with a resultant poor system of roadside waste disposal. Roadside wastes, both biodegradable and non-biodegradable, are disposed of at the shoulders of major highways in temporary dumpsites and are never collected by the Road Development Agency (RDA). There is no organized highway-to-highway or street-to-street collection of wastes in Zambia by the RDA, the key organization. The study revealed that roadside disposal of wastes has serious impacts on the environment. These impacts include the physical nuisance of the wastes to the environment; the waste dumps also serve as hideouts for rodents and dangerous snakes. Wastes are blown around by wind, making the environment filthy, and most wastes are washed away by overland flow during heavy downpours, blocking drainage channels and subsequently leading to flooding. Most of the non-biodegradable wastes contain toxic chemicals which have serious implications for environmental sustainability and human health.
The paper therefore recommends that the Government/RDA provide proper orientation for the general public, put environmental laws in place, provide the necessary facilities, and arrange better methods of waste collection.

Keywords: biodegradable, disposal, environment, impacts

Procedia PDF Downloads 326
12249 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation

Authors: R. Fardid, R. Coppes

Abstract:

Introduction: In mediastinal radiotherapy, and to a lesser extent also in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, ColaA1, ColaA2, galectin and TGFb1 gene expression levels in the left ventricular heart tissue of rats after irradiation. Material and method: Four untreated adult Wistar rats served as the control group (group A). In group B, 4 adult Wistar rats were locally irradiated in the heart only with a single dose of 20 Gy from a 150 MeV proton beam. In the heart-plus-lung group (group C), 4 adult rats received the same heart irradiation plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed and the left ventricles were dropped in liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat. no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat. no. 28025-013). We used a Bio-Rad iQ5 Real-Time PCR machine for qPCR testing by the relative standard curve method. Results: We found that gene expression of fibronectin in group C significantly increased compared to the control group, but no significant change was seen in group B compared to group A. The mRNA expression levels of ColaA1 and ColaA2 did not show any significant changes between the normal and irradiated groups. Expression of the galectin target significantly increased only in group C compared to group A. TGFb1 expression showed a significant enhancement compared to group A that was larger in group C than in group B. Conclusion: In summary, 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as a component of the heart tissue structure at the molecular level.
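The relative standard curve method used for the qPCR analysis fits Ct values against the log quantities of a dilution series and interpolates unknowns from the fitted line. A generic sketch of that calculation (not the authors' analysis code; the example Ct values are illustrative):

```python
def fit_standard_curve(cts, log10_quantities):
    """Least-squares fit of Ct against log10(quantity) for a dilution
    series: Ct = intercept + slope * log10(q). Returns (slope, intercept)."""
    n = len(cts)
    mx = sum(log10_quantities) / n
    my = sum(cts) / n
    sxx = sum((x - mx) ** 2 for x in log10_quantities)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_quantities, cts))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantity_from_ct(ct, slope, intercept):
    """Interpolate an unknown sample's relative quantity from its Ct."""
    return 10 ** ((ct - intercept) / slope)
```

Target quantities are then normalized to a reference gene quantified the same way, giving the relative expression levels compared between groups.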

Keywords: gene expression, heart damage, proton irradiation, radiotherapy

Procedia PDF Downloads 479
12248 A Simple Chemical Precipitation Method of Titanium Dioxide Nanoparticles Using Polyvinyl Pyrrolidone as a Capping Agent and Their Characterization

Authors: V. P. Muhamed Shajudheen, K. Viswanathan, K. Anitha Rani, A. Uma Maheswari, S. Saravana Kumar

Abstract:

In this paper, a simple chemical precipitation route for the preparation of titanium dioxide nanoparticles, synthesized using titanium tetraisopropoxide as a precursor and polyvinyl pyrrolidone (PVP) as a capping agent, is reported. Differential Scanning Calorimetry (DSC) and Thermogravimetric Analysis (TGA) traces of the samples were recorded, and the phase transformation temperature from titanium hydroxide, Ti(OH)4, to titanium dioxide, TiO2, was investigated. The as-prepared Ti(OH)4 precipitate was annealed at 800°C to obtain TiO2 nanoparticles. The thermal, structural, morphological and textural characterizations of the TiO2 nanoparticle samples were carried out by different techniques such as DSC-TGA, X-Ray Diffraction (XRD), Fourier Transform Infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-Visible absorption spectroscopy (UV-Vis), Photoluminescence spectroscopy (PL) and Field Emission Scanning Electron Microscopy (FESEM). The DSC-TGA characterization of the as-prepared precipitate confirmed a mass loss of around 30%. XRD results exhibited no diffraction peaks attributable to the anatase phase in the reaction products after solvent removal, indicating that the product is purely rutile. The vibrational frequencies of the two main absorption bands of the prepared samples are discussed on the basis of the FTIR analysis. The formation of nanospheres with diameters of the order of 10 nm was confirmed by FESEM. The optical band gap was found using the UV-Visible spectrum, and a strong emission was observed in the photoluminescence spectra. The obtained results suggest that this method provides a simple, efficient and versatile technique for preparing TiO2 nanoparticles and has the potential to be applied to other systems for photocatalytic activity.

Keywords: TiO2 nanoparticles, chemical precipitation route, phase transition, Fourier Transform Infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-Visible absorption spectroscopy (UV-Vis), photoluminescence spectroscopy (PL), Field Emission Scanning Electron Microscopy (FESEM)

Procedia PDF Downloads 308
12247 Diagnosis of Choledocholithiasis with Endosonography

Authors: A. Kachmazova, A. Shadiev, Y. Teterin, P. Yartcev

Abstract:

Introduction: Biliary calculi disease still occupies the leading position among urgent diseases of the abdominal cavity, manifesting anywhere from an asymptomatic course to life-threatening states. Nowadays the arsenal of diagnostic methods for choledocholithiasis is quite wide: ultrasound, hepatobiliary scintigraphy (HBSG), magnetic resonance imaging (MRI) and endoscopic retrograde cholangiography (ERCP). Among them, transabdominal ultrasound (TA ultrasound) is the most accessible and routine diagnostic method. ERCP is currently the 'gold' standard in the diagnosis and single-stage treatment of biliary tract obstruction. However, transpapillary techniques are accompanied by serious postoperative complications (post-manipulation pancreatitis (3-5%), bleeding after endoscopic papillosphincterotomy (2%), cholangitis (1%)), with a lethality of 0.4%. HBSG and MRI are also quite informative methods in the diagnosis of choledocholithiasis. The small size of concrements and their localization in the intrapancreatic and retroduodenal parts of the common bile duct significantly reduce the informativity of all the diagnostic methods described above, which demands additional study of this problem. Materials and Methods: 890 patients with the diagnosis of cholelithiasis (calculous cholecystitis) were admitted to the Sklifosovsky Scientific Research Institute of Hospital Medicine in the period from August 2020 to June 2021, of whom 115 had mechanical jaundice caused by concrements in the bile ducts. Results: A final EUS diagnosis was made in all patients (100.0%). In all patients in whom the diagnosis of choledocholithiasis was revealed or confirmed after EUS, ERCP was performed urgently (within two days from the moment of detection) as the X-ray operating room became available; it confirmed the presence of concrements. All stones were removed by lithoextraction using a Dormia basket. The postoperative period in these patients had no complications.
Conclusions: EUS is the most informative and safe diagnostic method, allowing choledocholithiasis to be detected in the shortest time in patients with discrepancies between clinical-laboratory and instrumental diagnostic methods, which in turn helps to decide promptly on the further tactics of patient treatment. We consider it reasonable to include EUS in the diagnostic algorithm for choledocholithiasis. Disclosure: Nothing to disclose.

Keywords: endoscopic ultrasonography, choledocholithiasis, common bile duct, concrement, ERCP

Procedia PDF Downloads 73
12246 The Effect of MOOC-Based Distance Education in Academic Engagement and Its Components on Kerman University Students

Authors: Fariba Dortaj, Reza Asadinejad, Akram Dortaj, Atena Baziyar

Abstract:

The aim of this study was to determine the effect of distance education (based on MOOCs) on the components of academic engagement of Kerman PNU students. The research used a quasi-experimental method with single-stage cluster sampling of an appropriate size (one class in the experimental group and one class in the control group). The statistical population comprised students of Kerman Payam Noor University, from whom 40 were selected as the sample (20 students in the control group and 20 students in the experimental group). To test the hypothesis, univariate analysis of covariance was used to offset the initial difference between the experimental group and the control group. The instrument used in this study was the academic engagement questionnaire of Zerang (2012), which contains components of cognitive, behavioral and motivational engagement. The results showed a significant difference between the mean scores of the components of academic engagement in the experimental group and the control group on the post-test, after elimination of the pre-test. The adjusted mean scores of the components of academic engagement in the experimental group were higher than the adjusted mean post-test scores in the control group. The use of technology-based education in distance education was effective in increasing cognitive, motivational and behavioral engagement among students. The experimental variable, with an effect size of 0.26, predicted 26% of the variance of the cognitive engagement component; with an effect size of 0.47, 47% of the variance of the motivational engagement component; and with an effect size of 0.40, 40% of the variance of the behavioral engagement component. Thus teaching with technology (MOOCs) has a positive impact on academic engagement and the academic performance of students in educational technology.
The results suggest that MOOC technology be used to enrich the teaching of other PNU courses.

Keywords: educational technology, distance education, components of academic engagement, mooc technology

Procedia PDF Downloads 135
12245 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high-accuracy and high-precision motion controller is one of the important issues in today's industry. Effective solutions are available in the industry, but the real-time performance, smoothness and accuracy of the movement can be further improved. This paper discusses a complete solution for carrying out the movement of three stepper motors in three dimensions. The objective is to provide a method for designing a fully integrated System-on-Chip (SoC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in motion control industries. Along with the full profile trajectory, a triangular drive is implemented to eliminate errors at small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute and relative positioning, reset and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, a precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
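A software sketch of the kind of integer linear interpolation such a profile generator realizes in hardware is a 3-axis Bresenham/DDA: all axes step along a straight line and arrive simultaneously. The per-tick pulse representation below is an assumption for illustration.

```python
def linear_interpolate_steps(target):
    """3-axis DDA/Bresenham: emit one step tuple per clock tick so that all
    axes arrive at `target` (integer step counts) simultaneously along a
    straight line. Each tick is (step_x, step_y, step_z) in {-1, 0, +1}."""
    deltas = [abs(t) for t in target]
    signs = [1 if t >= 0 else -1 for t in target]
    n = max(deltas)                 # dominant axis steps every tick
    err = [n // 2] * 3              # error accumulators, one per axis
    pulses = []
    for _ in range(n):
        tick = []
        for axis in range(3):
            err[axis] += deltas[axis]
            if err[axis] >= n:      # accumulator overflow -> emit a pulse
                err[axis] -= n
                tick.append(signs[axis])
            else:
                tick.append(0)
        pulses.append(tuple(tick))
    return pulses
```

In the FPGA this loop becomes a small state machine clocked at the pulse rate, with one accumulator register per axis; only integer adds and compares are needed.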

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 196
12244 Using the Bootstrap for Problems in Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article we present a theoretical study of the different bootstrap methods, using the re-sampling technique in statistical inference to calculate the standard error of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and the Pareto model, obtaining good approximations.
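The basic resampling loop behind the standard error and a percentile confidence interval can be sketched as follows; the choice of statistic, replicate count and seed are illustrative.

```python
import random

def bootstrap_estimate(sample, stat, n_boot=2000, alpha=0.05, seed=1):
    """Bootstrap standard error and percentile confidence interval for a
    statistic of the sample."""
    rng = random.Random(seed)
    n = len(sample)
    reps = []
    for _ in range(n_boot):
        # resample with replacement, same size as the original sample
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    mean = sum(reps) / n_boot
    se = (sum((r - mean) ** 2 for r in reps) / (n_boot - 1)) ** 0.5
    reps.sort()
    lo = reps[int((alpha / 2) * n_boot)]            # 2.5th percentile
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]    # 97.5th percentile
    return se, (lo, hi)
```

The same loop works for the median, a regression coefficient, or a Pareto tail-index estimator: only the `stat` callable changes.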

Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 370
12243 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation

Authors: Montree Bunruanses, Preecha Yupapin

Abstract:

In this work, the quantum material called Amrita (elixir) is made from gold processed top-down into nanometer particles by fusing 99% pure gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power breaks the four natural force bindings (the gravitational, weak, electromagnetic and strong coupling forces), finally yielding purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They were modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by application to treated surfaces. Applying it to both space (body) and time (mind) goes back to the origin, starting again from the coupling of space-time on both sides of time, with fusion (strong coupling force) and push-out (Big Bang) at the equilibrium point (singularity), occurring as strings and DNA with neutrinos as the coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy; therefore the upstream conversion is performed, reforming the DNA to purify it. The use of Amrita is a method for people who cannot meditate (quantum meditation). It was applied in various cases, and the results show that Amrita can make the body and the mind return to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate

Procedia PDF Downloads 55
12242 Electrochemical Inactivation of Toxic Cyanobacteria and Degradation of Cyanotoxins

Authors: Belal Bakheet, John Beardall, Xiwang Zhang, David McCarthy

Abstract:

The potential risks associated with toxic cyanobacteria have raised growing environmental and public health concerns, leading to an increasing research effort into ways of removing them from water and destroying their associated cyanotoxins. A variety of toxins are synthesized by cyanobacteria, including hepatotoxins, neurotoxins and cytotoxins, which can cause a range of symptoms in humans from skin irritation to serious liver and nerve damage. Drinking water treatment processes should therefore ensure consumers' safety by removing both cyanobacterial cells and cyanotoxins from the water. Cyanobacterial cells and cyanotoxins present challenges to conventional water treatment systems; their accumulation within drinking water treatment plants has been reported, leading to plant shutdowns. Thus, innovative and effective water purification systems to tackle cyanobacterial pollution are required. In recent years there has been increasing attention to the electrochemical oxidation process as a feasible alternative disinfection method, able to generate in situ a variety of oxidants that achieve synergistic effects in water disinfection and toxin degradation. Utilizing only electric current, the electrochemical process can, through electrolysis, produce reactive oxygen species such as hydroxyl radicals from the water, or other oxidants such as chlorine from chloride ions present in the water. Our extensive physiological and morphological investigation of cyanobacterial cells during electrolysis shows that these oxidants have a significant impact on cell inactivation, simultaneously with cyanotoxin removal, without the need for chemical addition. Our research aimed to optimize existing electrochemical oxidation systems and develop new systems to treat water containing toxic cyanobacteria and cyanotoxins.
The research covers a detailed mechanistic study of oxidant production and cell inactivation during treatment under environmental conditions. Overall, our study suggests that the electrochemical treatment process is an effective method for the removal of toxic cyanobacteria and cyanotoxins.

Keywords: toxic cyanobacteria, cyanotoxins, electrochemical process, oxidants

Procedia PDF Downloads 217
12241 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor

Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang

Abstract:

Highly branched dendrimers provide structural homogeneity, controlled composition, sizes comparable to biomolecules, internal porosity and multiple functional groups for conjugation reactions. Electro-active dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. The electron transfer between redox-active dendrimers and biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple, electrochemically equivalent redox units that can act as an electron 'pool' in a system. A ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons under the same applied potential, and can therefore be used for dual purposes: building a film over the electrode for the immunosensor, and immobilizing biomolecules for sensing. Electrochemical immunosensors thus developed offer fast and sensitive analysis, are inexpensive and involve no prior sample pre-treatment. Amperometric electrochemical immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for foundational life science research, clinical diagnosis and disease prevention. An elevated concentration of biomarkers in body fluid is an early indication of some types of cancerous disease, and among all the biomarkers, IgG is the most common and extensively used clinical cancer biomarker. We present an IgG (immunoglobulin) electrochemical immunosensor using a newly synthesized redox-active generation-2 ferrocenyl dendrimer (G2Fc) as the glassy carbon electrode material for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media using varying scan rates to elucidate the reaction mechanism.
The potential shift was found to be higher in an aqueous electrolyte due to the presence of more hydrogen bonding, which reduces the electrostatic attraction within the amido groups of the dendrimers. Cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution of pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly with the immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA, a further decrease in the peak current was observed due to the attachment of the BSA protein to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immune-conjugates that block the tunneling of mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface. With an increase in the concentration of IgG, an increasing amount of immune-conjugates forms, which decreases the peak current. The incubation time and the concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring redox-active dendrimers provides enhanced electroactivity to the system and enlarges the sensor surface for binding antibodies. It may be assumed that both electron transfer and diffusion contribute to the signal transformation between the dendrimers and the antibody.

Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry

Procedia PDF Downloads 326
12240 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This gives rise to various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric, and referential ambiguity. This study focuses mainly on resolving lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on surface words for training and testing, but we use the lemma and Part of Speech (POS) tokens of words instead: the lemma adds generality, and the POS tag adds the word's grammatical properties to the token. We have designed a novel method to create an affinity matrix that quantifies the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens from the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms are devised to predict the sense of the target word from the affinity/similarity values. Each contextual token contributes to the sense of the target word with some value, and whichever sense accumulates the highest value becomes the sense of the target word. Contextual tokens thus play a key role both in creating sense clusters and in predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM is notably simple and easy to interpret, in contrast to contemporary deep learning models, which are intricate, time-intensive, and hard to explain. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the simplicity of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
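The core prediction mechanism described above — context tokens "voting" for a sense with their affinity values, the highest-scoring sense winning — can be sketched as follows. This is an illustrative reconstruction under our own assumptions: the function names, the co-occurrence-count affinity measure, and the toy data are ours, not the authors' exact formulation (which involves three prediction mechanisms and POS-hierarchy clustering not reproduced here):

```python
from collections import defaultdict

def train_affinity(tagged_corpus):
    """tagged_corpus: iterable of (context_tokens, target_token, sense)
    triples, where tokens are 'lemma_POS' strings such as 'bank_NOUN'.
    Affinity here is a simple co-occurrence count between a context
    token and a (target, sense) pair."""
    affinity = defaultdict(float)
    for context, target, sense in tagged_corpus:
        for tok in context:
            affinity[(tok, target, sense)] += 1.0
    return affinity

def predict_sense(affinity, context, target, candidate_senses):
    """Each context token contributes its affinity value to every
    candidate sense; the sense with the highest total wins."""
    scores = {s: sum(affinity[(tok, target, s)] for tok in context)
              for s in candidate_senses}
    return max(scores, key=scores.get)

# Toy training set with two senses of 'bank'.
corpus = [
    (["river_NOUN", "water_NOUN"], "bank_NOUN", "bank%river"),
    (["money_NOUN", "loan_NOUN"], "bank_NOUN", "bank%finance"),
    (["deposit_VERB", "money_NOUN"], "bank_NOUN", "bank%finance"),
]
aff = train_affinity(corpus)
print(predict_sense(aff, ["money_NOUN", "river_NOUN"], "bank_NOUN",
                    ["bank%river", "bank%finance"]))  # → bank%finance
```

In the mixed context, "money_NOUN" outvotes "river_NOUN" because it co-occurred with the finance sense twice during training, which mirrors how the model lets stronger contextual evidence dominate.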

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 88