Search results for: blocked-off solution procedure
1639 Transformer Life Enhancement Using Dynamic Switching of Second Harmonic Feature in IEDs
Authors: K. N. Dinesh Babu, P. K. Gargava
Abstract:
Energization of a transformer results in a sudden inrush of current caused by core magnetization. This current is dominated by the second harmonic, which in turn is used to distinguish faults from inrush current and thus guarantee proper operation of the relay. However, this additional security in the relay sometimes obstructs or delays differential protection in a specific scenario: when second-harmonic content is present during a genuine fault. Such a scenario can end with the transformer being isolated by Buchholz and pressure release valve (PRV) protection, which acts only after the fault has caused further damage inside the transformer. These delays have a severe impact on insulation failure, and the chances of repairing or rectifying the fault at site become very slim. In some cases the delay can cause a fire in the transformer, wreaking havoc in the substation. Such occurrences have also been observed in the field, where differential relay operation was delayed by 10-15 ms by second-harmonic blocking under specific conditions. These incidents have created the need for an alternative solution that eliminates such unwarranted operating delays in the future. The modern numerical relay, known as an intelligent electronic device (IED), is embedded with advanced protection features which permit greater flexibility and better provisions for tuning protection logic and settings. This flexibility in transformer protection IEDs enables the incorporation of alternative methods, such as dynamic switching of the second-harmonic feature used to block the differential protection, with additional security. The analysis and precautionary measures carried out in this case have been simulated and are discussed in this paper so that similar solutions can be adopted to prevent analogous issues in the future. Keywords: differential protection, intelligent electronic device (IED), 2nd harmonic inhibit, inrush inhibit
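A minimal sketch of the kind of logic the abstract describes, assuming illustrative threshold values, signal names and timing window (the actual IED settings used in the paper are not given): the differential element is blocked while the second-harmonic ratio exceeds a setting, but the block is dynamically released once the configured inrush window has expired or the differential current exceeds the unrestrained threshold.

```python
# Illustrative sketch of dynamic second-harmonic blocking in a transformer
# differential element. Thresholds and the inrush window are assumptions,
# not the settings used in the paper.

def differential_trip(i_diff_fund, i_diff_2nd, time_since_energization_ms,
                      harmonic_block_pct=15.0, inrush_window_ms=200.0,
                      pickup=0.3, unrestrained_pickup=8.0):
    """Return True if the differential element should trip."""
    if i_diff_fund < pickup:
        return False                      # below restrained pickup
    if i_diff_fund >= unrestrained_pickup:
        return True                       # unrestrained element ignores harmonics
    ratio_pct = 100.0 * i_diff_2nd / i_diff_fund
    # Dynamic switching: the 2nd-harmonic block is only honoured during the
    # expected inrush window; afterwards it is released so a genuine fault
    # with 2nd-harmonic content is not delayed.
    blocked = (ratio_pct > harmonic_block_pct
               and time_since_energization_ms <= inrush_window_ms)
    return not blocked

# Example: a fault 500 ms after energization with 20 % second harmonic trips,
# whereas the same harmonic ratio inside the inrush window is blocked.
print(differential_trip(2.0, 0.4, 500.0))   # True
print(differential_trip(2.0, 0.4, 100.0))   # False
```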
Procedia PDF Downloads 300
1638 The Prospect of Income Contingent Loan in Malaysia Higher Education Financing Using Deterministic and Stochastic Methods in Modelling Income
Authors: Syaza Isma, Timothy Higgins
Abstract:
In Malaysia, increased take-up rates of tertiary student borrowing and reliance on retirement savings to fund children's education show the importance of the public higher education financing scheme (PTPTN). PTPTN has now been operating for two decades; however, critical issues and challenges, including low loan recovery and loan default, suggest that a detailed consideration of alternative student loan/financing schemes is crucial. In addition, the decline in the funding level per student following the introduction of the new PTPTN full and partial loan scheme has raised ongoing concerns over the sustainability of the scheme in providing continuous financial assistance to students in tertiary education. This research assesses these issues with a view to greater efficiency, in an effort to ensure equitable access to student funding for current and future generations. We explore the extent of repayment hardship under the current loan arrangements that has presumably led to low recovery from borrowers, particularly low-income graduates. The concept of manageable debt is central to the design of income-contingent repayment schemes, as practised in Australia, New Zealand, the UK, Hungary, the USA (in limited form), the Netherlands, and South Korea. Can income-contingent loans (ICL) offer best practice for an education financing scheme and address the issue of repayment hardship, and can a properly designed ICL scheme concurrently provide a solution to the current issues and challenges facing Malaysian student financing? We examine different potential ICL models using deterministic and stochastic approaches to simulate graduate incomes. Keywords: deterministic, income contingent loan, repayment burden, simulation, stochastic
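A minimal sketch of the deterministic versus stochastic income modelling the abstract refers to, under assumed parameters (starting salary, wage growth, income shocks, repayment rate, threshold and loan balance are all illustrative, not the values used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

years = 30
start_salary = 30_000.0      # assumed starting graduate salary
growth = 0.04                # assumed deterministic wage growth
sigma = 0.15                 # assumed std. dev. of annual income shocks
threshold = 24_000.0         # assumed repayment threshold
rate = 0.08                  # assumed income-contingent repayment rate
debt0 = 40_000.0             # assumed initial loan balance

def repayments(income_path, debt=debt0):
    paid = np.zeros_like(income_path)
    for t, y in enumerate(income_path):
        due = rate * max(y - threshold, 0.0)   # repay a share of income above threshold
        pay = min(due, debt)
        debt -= pay
        paid[t] = pay
    return paid, debt

# Deterministic income path
det_income = start_salary * (1 + growth) ** np.arange(years)
det_paid, det_residual = repayments(det_income)

# Stochastic income paths (lognormal multiplicative shocks around the trend)
n_sims = 10_000
shocks = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=(n_sims, years))
sto_income = det_income * shocks
residuals = np.array([repayments(path)[1] for path in sto_income])

print(f"Deterministic unpaid debt after {years} y: {det_residual:,.0f}")
print(f"Mean unpaid debt (stochastic):            {residuals.mean():,.0f}")
print(f"Share of borrowers not fully repaying:    {(residuals > 0).mean():.1%}")
```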
Procedia PDF Downloads 231
1637 Development of Natural Zeolites Adsorbent: Preliminary Study on Water-Isopropyl Alcohol Adsorption in a Close-Loop Continuous Adsorber
Authors: Sang Kompiang Wirawan, Pandu Prabowo Jati, I Wayan Warmada
Abstract:
Klaten (Indonesia) natural zeolite can be used as a powder or pellet adsorbent. The pellet adsorbent was made from activated natural zeolite powder by a conventional pressing method. Starch and formaldehyde were added as binders to strengthen the structure of the zeolite pellet. To increase the adsorptivity and capacity, the natural zeolite was first activated chemically and thermally. This research examined the adsorption of water from an isopropyl alcohol (IPA)-water system using zeolite adsorbent pellets made from natural zeolite powder (-80 mesh) activated with 0.1 M and 0.3 M H2SO4. The adsorbent was pelleted in a pressing apparatus at a fixed pressure to a specification of 1.96 cm diameter and 0.68 cm thickness. The isopropyl alcohol-water system contained 80% isopropyl alcohol. The adsorption process was run in a closed-loop continuous apparatus in which the zeolite pellets were packed inside a column and the IPA-water solution was circulated at a fixed flowrate. The change in concentration was monitored over time. The adsorption process comprised mass transfer from the bulk liquid into the film layer and from the film layer into the solid particle. The rate constant was analysed using a first-order isotherm model simulated with MATLAB. In addition to the first-order isotherm, an intra-particle diffusion model based on pore diffusion was proposed. The study shows that the adsorbent activated with 0.1 M H2SO4 has good adsorptivity, with a mass transfer constant of 0.1286 min-1. Keywords: intra-particle diffusion, fractional attainment, first order isotherm, zeolite
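The authors fitted the first-order rate model in MATLAB; a Python sketch of an equivalent fit is shown below, with synthetic time/concentration data standing in for the measured values (the data points and initial guesses are invented for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order approach to equilibrium: C(t) = Ce + (C0 - Ce) * exp(-k t)
def first_order(t, k, C0, Ce):
    return Ce + (C0 - Ce) * np.exp(-k * t)

# Synthetic stand-in data: water content in the circulating IPA stream vs. time (min)
t_data = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)
C_data = np.array([20.0, 15.1, 11.6, 7.3, 5.0, 3.4, 2.8])

popt, pcov = curve_fit(first_order, t_data, C_data, p0=[0.05, 20.0, 2.0])
k, C0, Ce = popt
print(f"fitted rate constant k = {k:.4f} 1/min")   # the paper reports 0.1286 1/min
```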
Procedia PDF Downloads 311
1636 Dynamics and Advection in a Vortex Parquet on the Plane
Authors: Filimonova Alexanra
Abstract:
Inviscid incompressible fluid flows are considered. The object of the study is a vortex parquet – a structure consisting of distributed vortex spots of different directions, occupying the entire plane. The main attention is paid to the study of advection processes of passive particles in the corresponding velocity field. The dynamics of the vortex structures is considered in a rectangular region under the assumption that periodic boundary conditions are imposed on the stream function. Numerical algorithms are based on the solution of the initial-boundary value problem for nonstationary Euler equations in terms of vorticity and stream function. For this, the spectral-vortex meshless method is used. It is based on the approximation of the stream function by the Fourier series cut and the approximation of the vorticity field by the least-squares method from its values in marker particles. A vortex configuration, consisting of four vortex patches is investigated. Results of a numerical study of the dynamics and interaction of the structure are presented. The influence of the patch radius and the relative position of positively and negatively directed patches on the processes of interaction and mixing is studied. The obtained results correspond to the following possible scenarios: the initial configuration does not change over time; the initial configuration forms a new structure, which is maintained for longer times; the initial configuration returns to its initial state after a certain period of time. The processes of mass transfer of vorticity by liquid particles on a plane were calculated and analyzed. The results of a numerical analysis of the particles dynamics and trajectories on the entire plane and the field of local Lyapunov exponents are presented.Keywords: ideal fluid, meshless methods, vortex structures in liquids, vortex parquet.
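For reference, the nonstationary system the spectral-vortex meshless method discretises is the 2D incompressible Euler equations in vorticity-stream function form (standard notation; the periodic boundary conditions on the stream function and the initial vortex-patch configuration are as described above):

```latex
\frac{\partial \omega}{\partial t}
  + \frac{\partial \psi}{\partial y}\,\frac{\partial \omega}{\partial x}
  - \frac{\partial \psi}{\partial x}\,\frac{\partial \omega}{\partial y} = 0,
\qquad
\Delta \psi = -\,\omega,
\qquad
\frac{d\mathbf{x}_p}{dt} = \mathbf{u}(\mathbf{x}_p, t)
  = \Bigl(\frac{\partial \psi}{\partial y},\; -\frac{\partial \psi}{\partial x}\Bigr),
```

where the last relation is the advection equation integrated for the passive marker particles.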
Procedia PDF Downloads 65
1635 The Conception of Implementation of Vision for European Forensic Science 2020 in Lithuania
Authors: Eglė Bilevičiūtė, Vidmantas Egidijus Kurapka, Snieguolė Matulienė, Sigutė Stankevičiūtė
Abstract:
The Council of European Union (EU Council) has stressed on several occasions the need for a concerted, comprehensive and effective solution to delinquency problems in EU communities. In the context of establishing a European Forensic Science Area and the development of forensic science infrastructure in Europe, EU Council believes that forensic science can significantly contribute to the efficiency of law enforcement, crime prevention and combating crimes. Lithuanian scientists have consolidated to implement a project named “Conception of the vision for European Forensic Science 2020 implementation in Lithuania” (the project is funded for the period of 1 March 2014 - 31 December 2016) with the objective to create a conception of implementation of the vision for European Forensic Science 2020 in Lithuania by 1) evaluating the current status of Lithuania’s forensic system and opportunities for its improvement; 2) analysing achievements and knowledge in investigation of crimes listed in conclusions of EU Council on the vision for European Forensic Science 2020 including creation of a European Forensic Science Area and the development of forensic science infrastructure in Europe: trafficking in human beings, organised crime and terrorism; 3) analysing conceptions of criminalistics, which differ in different EU member states due to the variety of forensic schools, and finding means for their harmonization. Apart from the conception of implementation of the vision for European Forensic Science 2020 in Lithuania, the project is expected to suggest provisions that will be relevant to other EU countries as well. Consequently, the presented conception of implementation of vision for European Forensic Science 2020 in Lithuania could initiate a project for a common vision of European Forensic Science and contribute to the development of the EU as an area of freedom, security and justice. The article presents main ideas of the project of the conception of the vision for European Forensic Science 2020 of EU Council and analyses its legal background, as well as prospects of and challenges for its implementation in Lithuania and the EU.Keywords: EUROVIFOR, standardization, vision for European Forensic Science 2020, Lithuania
Procedia PDF Downloads 412
1634 The Fundamental Research and Industrial Application on CO₂+O₂ in-situ Leaching Process in China
Authors: Lixin Zhao, Genmao Zhou
Abstract:
Traditional acid in-situ leaching (ISL) is not suitable for sandstone uranium deposits with low permeability and a high content of carbonate minerals because of blocking by calcium sulfate precipitates. Another factor influencing acid in-situ leaching of uranium is that the pyrite in the ore rocks reacts with the oxidation reagent and produces large amounts of sulfate ions, which may accelerate the precipitation of calcium sulphate and consume large amounts of the oxidation reagent. Owing to its advantages, such as lower chemical reagent consumption and less groundwater pollution, the CO₂+O₂ in-situ leaching method has become one of the important research areas in uranium mining. China is the second country in the world to adopt CO₂+O₂ ISL in industrial uranium production. It is shown that CO₂+O₂ ISL in China has been successfully developed. The reaction principle, technical process, well field design and drilling engineering, uranium-bearing solution processing, etc. have been studied in depth. At the current stage, several uranium mines use the CO₂+O₂ ISL method to extract uranium from ore-bearing aquifers. The industrial application and development potential of the CO₂+O₂ ISL method in China are summarized. By using CO₂+O₂ neutral leaching technology, the problem of calcium carbonate and calcium sulfate precipitation during uranium mining has been solved. By properly regulating the amounts of CO₂ and O₂, the relevant ions and hydro-chemical conditions can be controlled within limits that avoid the precipitation of calcium sulfate and calcium carbonate. On this premise, the requirements of CO₂+O₂ uranium leaching are met to the maximum extent, which not only achieves effective leaching of uranium but also avoids the formation and precipitation of calcium carbonate and calcium sulfate, enabling the industrial development of sandstone-type uranium deposits. Keywords: CO₂+O₂ ISL, industrial production, well field layout, uranium processing
Procedia PDF Downloads 178
1633 Electronic Commerce in Georgia: Problems and Development Perspectives
Authors: Nika GorgoShadze, Anri Shainidze, Bachuki Katamadze
Abstract:
In parallel to the development of the digital economy in the world, electronic commerce is also widely developing. Internet and ICT (information and communication technology) have created new business models as well as promoted to market consolidation, sustainability of the business environment, creation of digital economy, facilitation of business and trade, business dynamism, higher competitiveness, etc. Electronic commerce involves internet technology which is sold via the internet. Nowadays electronic commerce is a field of business which is used by leading world brands very effectively. After the research of internet market in Georgia, it was found out that quality of internet is high in Tbilisi and is low in the regions. The internet market of Tbilisi can be evaluated as high-speed internet service, competitive and cost effective internet market. Development of electronic commerce in Georgia is connected with organizational and methodological as well as legal problems. First of all, a legal framework should be developed which will regulate responsibilities of organizations. The Ministry of Economy and Sustainable Development will play a crucial role in creating legal framework. Ministry of Justice will also be involved in this process as well as agency for data exchange. Measures should be taken in order to make electronic commerce in Georgia easier. Business companies may be offered some model to get low-cost and complex service. A service centre should be created which will provide all kinds of online-shopping. This will be a rather interesting innovation which will facilitate online-shopping in Georgia. Development of electronic business in Georgia requires modernized infrastructure of telecommunications (especially in the regions) as well as solution of institutional and socio-economic problems. Issues concerning internet availability and computer skills are also important.Keywords: electronic commerce, internet market, electronic business, information technology, information society, electronic systems
Procedia PDF Downloads 384
1632 Facilitating Active Reading Strategies through Caps Chart to Foster Elementary EFL Learners’ Reading Skills and Reading Competency
Authors: Michelle Bulawan, Mei-Hua Chen
Abstract:
Reading comprehension is crucial for acquiring information, analyzing critically, and achieving academic proficiency. However, there is a lack of growth in reading comprehension skills beyond fourth grade. The developmental shift from "learning to read" to "reading to learn" occurs around this stage. Factual knowledge and diverse views in articles enhance reading comprehension abilities. Nevertheless, some face difficulties due to evolving textual requirements, such as expanding vocabulary and using longer, more complex terminology. Most research on reading strategies has been conducted at the tertiary and secondary levels, while few have focused on the elementary levels. Furthermore, the use of character, ask, problem, solution (CAPS) charts in teaching reading has also been hardly explored. Thus, the researcher decided to explore the facilitation of active reading strategies through the CAPS chart and address the following research questions: a) What differences existed in elementary EFL learners' reading competency among those who engaged in active reading strategies and those who did not? b) What are the learners’ metacognitive skills of those who engage in active reading strategies and those who do not, and what are their effects on their reading competency? c) For those participants who engage in active reading activities, what are their perceptions about incorporating active reading activities into their English classroom learning? Two groups of elementary EFL learners, each with 18 students of the same level of English proficiency, participated in this study. Group A served as the control group, while Group B served as the experimental group. Two teachers also participated in this research; one of them was the researcher who handled the experimental group. The treatment lasts for one whole semester or seventeen weeks. In addition to the CAPS chart, the researcher also used the metacognitive awareness of reading strategy inventory (MARSI) and a ten-item, five-point Likert scale survey.Keywords: active reading, EFL learners, metacognitive skills, reading competency, student’s perception
Procedia PDF Downloads 93
1631 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, the use of convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, owing to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q encodes the alphanumeric being represented and the language it comes from. These inputs and outputs, together with the internal variables z ∈ Z that represent the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e. subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate. Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
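A much-simplified sketch of the corrector idea described above, assuming a single error cluster and a single separating hyperplane (the full method uses several pairwise positively correlated clusters, each with its own hyperplane); all array shapes and data here are synthetic, not the network states used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic internal-state measurements of the legacy network:
# M = states that led to correct predictions, Y = states that led to errors.
M = rng.normal(0.0, 1.0, size=(5000, 64))
Y = rng.normal(0.6, 1.0, size=(100, 64))

S = np.vstack([M, Y])
mean = S.mean(axis=0)

# Centre, then PCA / whiten (Kaiser rule here interpreted as keeping
# components whose eigenvalue exceeds the average eigenvalue).
X = S - mean
cov = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
keep = eigval > eigval.mean()
W = eigvec[:, keep] / np.sqrt(eigval[keep])      # whitening transform

def project(x):
    return (x - mean) @ W

# One-hyperplane corrector: direction towards the centroid of the whitened errors.
w = project(Y).mean(axis=0)
w /= np.linalg.norm(w)
theta = project(Y) @ w
threshold = theta.min()                          # separate all known errors

def flags_error(x):
    """Report the prediction as suspect if the state falls on the error side."""
    return project(x) @ w >= threshold

print("known error state flagged:", flags_error(Y[0]))
print("false-alarm rate on correct states:",
      np.mean([flags_error(m) for m in M[:1000]]))
```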
Procedia PDF Downloads 101
1630 Synthesis and Characterization of CNPs Coated Carbon Nanorods for Cd2+ Ion Adsorption from Industrial Waste Water and Reusable for Latent Fingerprint Detection
Authors: Bienvenu Gael Fouda Mbanga
Abstract:
This study reports a new approach of preparation of carbon nanoparticles coated cerium oxide nanorods (CNPs/CeONRs) nanocomposite and reusing the spent adsorbent of Cd2+- CNPs/CeONRs nanocomposite for latent fingerprint detection (LFP) after removing Cd2+ ions from aqueous solution. CNPs/CeONRs nanocomposite was prepared by using CNPs and CeONRs with adsorption processes. The prepared nanocomposite was then characterized by using UV-visible spectroscopy (UV-visible), Fourier transforms infrared spectroscopy (FTIR), X-ray diffraction pattern (XRD), scanning electron microscope (SEM), Transmission electron microscopy (TEM), Energy-dispersive X-ray spectroscopy (EDS), Zeta potential, X-ray photoelectron spectroscopy (XPS). The average size of the CNPs was 7.84nm. The synthesized CNPs/CeONRs nanocomposite has proven to be a good adsorbent for Cd2+ removal from water with optimum pH 8, dosage 0. 5 g / L. The results were best described by the Langmuir model, which indicated a linear fit (R2 = 0.8539-0.9969). The adsorption capacity of CNPs/CeONRs nanocomposite showed the best removal of Cd2+ ions with qm = (32.28-59.92 mg/g), when compared to previous reports. This adsorption followed pseudo-second order kinetics and intra particle diffusion processes. ∆G and ∆H values indicated spontaneity at high temperature (40oC) and the endothermic nature of the adsorption process. CNPs/CeONRs nanocomposite therefore showed potential as an effective adsorbent. Furthermore, the metal loaded on the adsorbent Cd2+- CNPs/CeONRs has proven to be sensitive and selective for LFP detection on various porous substrates. Hence Cd2+-CNPs/CeONRs nanocomposite can be reused as a good fingerprint labelling agent in LFP detection so as to avoid secondary environmental pollution by disposal of the spent adsorbent.Keywords: Cd2+-CNPs/CeONRs nanocomposite, cadmium adsorption, isotherm, kinetics, thermodynamics, reusable for latent fingerprint detection
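For reference, the standard forms of the models named in the abstract are given below; the Langmuir fit yields q_m and the quoted R² range, the pseudo-second-order expression describes the kinetics, and the Gibbs relation underlies the ΔG discussion (symbols in their usual meaning):

```latex
q_e = \frac{q_m K_L C_e}{1 + K_L C_e},
\qquad
\frac{t}{q_t} = \frac{1}{k_2\, q_e^{2}} + \frac{t}{q_e},
\qquad
\Delta G^{\circ} = -\,R\,T\,\ln K .
```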
Procedia PDF Downloads 121
1629 Identification of Clay Mineral for Determining Reservoir Maturity Levels Based on Petrographic Analysis, X-Ray Diffraction and Porosity Test on Penosogan Formation Karangsambung Sub-District Kebumen Regency Central Java
Authors: Ayu Dwi Hardiyanti, Bernardus Anggit Winahyu, I. Gusti Agung Ayu Sugita Sari, Lestari Sutra Simamora, I. Wayan Warmada
Abstract:
The Penosogan Formation sandstone, of Middle Miocene age, has been deemed a potential reservoir based on sample data from sandstone outcrops in Kebakalan and Kedawung villages, Karangsambung sub-district, Kebumen Regency, Central Java. This research employs the following analytical methods: petrography, X-ray diffraction (XRD), and porosity testing. Based on the presence of micritic sandstone, muddy micrite, and muddy sandstone, the Penosogan Formation sandstone has a fine-to-coarse grain size and medium-to-fine sorting. The sandstone is composed mostly of plagioclase and skeletal grains, with traces of micrite. The clay mineral content determined by petrographic analysis is 10%; the clay appears to envelop the grains, and this grain coating reduces the porosity of the rocks. The porosity types are as follows: interparticle, vuggy, channel, and shelter, with an equant form of cement. Moreover, the diagenetic processes involve compaction, cementation, authigenic mineral growth, and dissolution due to feldspar alteration. The maturity of the reservoir can be assessed from the X-ray diffraction results, using ethylene glycol treatment of the clay mineral fraction transformed from smectite to illite. Porosity testing showed that the Penosogan Formation sandstones have a porosity value of 22% based on the Koeseomadinata (1980) classification. This indicates that the high maturity strongly influences the quality of the Penosogan Formation sandstone reservoir. Keywords: sandstone reservoir, Penosogan Formation, smectite, XRD
Procedia PDF Downloads 177
1628 Evaluation of Life Cycle Assessment in Furniture Manufacturing by Analytical Hierarchy Process
Authors: Majid Azizi, Payam Ghorbannezhad, Mostafa Amiri, Mohammad Ghofrani
Abstract:
Environmental issues in the furniture industry are of great importance due to the use of natural materials such as wood and chemical substances like adhesives and paints. These issues encompass environmental conservation and managing pollution and waste generated. Improper use of wood resources, along with the use of chemicals and their release, leads to the depletion of natural resources, damage to forests, and the emission of greenhouse gases. Therefore, identifying influential indicators in the life cycle assessment of classic furniture and proposing solutions to reduce environmental impacts becomes crucial. In this study, the life cycle of classic furniture was evaluated using a hierarchical analytical process from cradle to grave. The life cycle assessment was employed to assess the environmental impacts of the furniture industry, ranging from raw material extraction to waste disposal and recycling. The most significant indicators in the furniture industry's production chain were also identified. The results indicated that the wood quality indicator is the most essential factor in the life cycle of classic furniture. Furthermore, the relative contribution of each type of traditional furniture was proposed concerning impact categories in the life cycle assessment. The results showed that among the three proposed types, the design and production of furniture with prefabricated parts had the most negligible impact in categories such as global warming potential and ozone layer depletion compared to furniture design with solid wood and furniture design with recycled components. Among the three suggested types of furniture to reduce environmental impacts, producing furniture with solid wood or other woods was chosen as the most crucial solution.Keywords: life cycle assessment, analytic hierarchy process, environmental issues, furniture
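A minimal sketch of the analytic hierarchy process step used to rank indicators, with an invented 3x3 pairwise-comparison matrix standing in for the expert judgements collected in the study (criterion names and judgement values are assumptions for illustration only):

```python
import numpy as np

# Hypothetical pairwise comparison of three criteria on the Saaty 1-9 scale,
# e.g. wood quality vs. chemical emissions vs. waste/recycling.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigval, eigvec = np.linalg.eig(A)
k = np.argmax(eigval.real)
w = np.abs(eigvec[:, k].real)
weights = w / w.sum()                     # priority vector

# Consistency check (Saaty random index RI for n = 3 is 0.58)
lam_max = eigval.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58
print("priority weights:", np.round(weights, 3))
print(f"consistency ratio CR = {CR:.3f} (acceptable if < 0.10)")
```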
Procedia PDF Downloads 65
1627 Municipal Solid Waste (MSW) Composition and Generation in Nablus City, Palestine
Authors: Issam A. Al-Khatib
Abstract:
In order to achieve a significant reduction of waste amount flowing into landfills, it is important to first understand the composition of the solid municipal waste generated. Hence a detailed analysis of municipal solid waste composition has been conducted in Nablus city. The aim is to provide data on the potential recyclable fractions in the actual waste stream, with a focus on the plastic fraction. Hence, waste-sorting campaigns were conducted on mixed waste containers from five districts in Nablus city. The districts vary in terms of infrastructure and average income. The target is to obtain representative data about the potential quantity and quality of household plastic waste. The study has measured the composition of municipal solid waste collected/ transported by Nablus municipality. The analysis was done by categorizing the samples into eight primary fractions (organic and food waste, paper and cardboard, glass, metals, textiles, plastic, a fine fraction (<10 mm), and others). The study results reveal that the MSW stream in Nablus city has a significant bio- and organic waste fraction (about 68% of the total MSW). The second largest fraction is paper and cardboard (13.6%), followed by plastics (10.1%), textiles (3.2%), glass (1.9%), metals (1.8%), a fine fraction (0.5%), and other waste (0.3%). After this complete and detailed characterization of MSW collected in Nablus and taking into account the content of biodegradable organic matter, the composting could be a solution for the city of Nablus where the surrounding areas of Nablus city have agricultural activities and could be a natural outlet to the compost product. Different waste management options could be practiced in the future in addition to composting, such as energy recovery and recycling, which result in a greater possibility of reducing substantial amounts that are disposed of at landfills.Keywords: developing countries, composition, management, recyclable, waste.
Procedia PDF Downloads 91
1626 The Addition of Opioids to Bupivacaine in Bilateral Infraorbital Nerve Block for Postoperative Pain Relief in Paediatric Patients for Cleft Lip Repair-Comparative Effects of Pethidine and Fentanyl: A Prospective Randomized Double Blind Study
Authors: Mrudula Kudtarkar, Rajesh Mane
Abstract:
Introduction: Cleft lip repair is one of the common surgeries performed in India, and the usual method of post-operative analgesia is perioperative opioids and NSAIDs. There has been an increase in the use of regional techniques, and opioids are the common adjuvants, but their efficacy and safety have not been studied extensively in children. Aim: A prospective, randomized, double-blind study was conducted to compare the efficacy, duration and safety of intraoral infraorbital nerve block for post-operative pain relief using bupivacaine alone or in combination with fentanyl or pethidine in paediatric cleft lip repair. Methodology: 45 children aged 5-60 months undergoing cleft lip surgery were randomly allocated into 3 groups of 15; each received a bilateral intraoral infraorbital nerve block with 0.75 ml of solution. Group B received 0.25% bupivacaine; group P received 0.25% bupivacaine with 0.25 mg/kg pethidine; group F received 0.25% bupivacaine with 0.25 microgram/kg fentanyl. Sedation after recovery, post-operative pain intensity and the duration of post-operative analgesia were assessed using the Modified Hannallah Pain Score. Results: The mean duration of analgesia was 17.8 hrs in Group B, 23.53 hrs in Group F and 35.13 hrs in Group P. There was a statistically significant difference between the means of the three groups (ANOVA, p < 0.05). Conclusion: We conclude that the addition of fentanyl or pethidine to bupivacaine for bilateral intraoral infraorbital nerve block prolongs the duration of analgesia with no complications and can be used safely in paediatric patients. Keywords: cleft lip, infraorbital block, NSAIDs, opioids
Procedia PDF Downloads 238
1625 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India being vast in area and population with great scope of international business, roadways and railways network connection within the country is expected to have a big growth. There are numerous rail-cum-road bridges constructed across many major rivers in India and few are getting very old. So there is more possibility of repairing or coming up with such new bridges in India. Analysis and design of such bridges are practiced through conventional procedure and end up with heavy and uneconomical sections. Such heavy class steel bridges when subjected to high seismic shaking has more chance to fail by stability because the members are too much rigid and stocky rather than being flexible to dissipate the energy. This work is the collective study of the researches done in the truss bridge and steel concrete composite truss bridges presenting the method of analysis, tools for numerical and analytical modeling which evaluates its seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, generally at research level static pushover analysis is adopted. Though the static pushover analysis is now extensively used for the framed steel and concrete buildings to study its lateral action behaviour, those findings by pushover analysis done for the buildings cannot directly be used for the bridges as such, because the bridges have completely a different performance requirement, behaviour and typology as compared to that of the buildings. Long span steel bridges are mostly the truss bridges. Truss bridges being formed by many members and connections, the failure of the system does not happen suddenly with single event or failure of one member. Failure usually initiates from one member and progresses gradually to the next member and so on when subjected to further loading. This kind of progressive collapse of the truss bridge structure is dependent on many factors, in which the live load distribution and span to length ratio are most significant. The ultimate collapse is anyhow by the buckling of the compression members only. For regular bridges, single step pushover analysis gives results closer to that of the non-linear dynamic analysis. But for a complicated bridge like heavy class steel bridge or the skewed bridges or complicated dynamic behaviour bridges, nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With the knowledge of the postelastic behaviour of the bridge and advancements in the computational facility, the current level of analysis and design of bridges has moved to state of ascertaining the performance levels of the bridges based on the damage caused by seismic shaking. This is because the buildings performance levels deals much with the life safety and collapse prevention levels, whereas the bridges mostly deal with the extent damages and how quick it can be repaired with or without disturbing the traffic after a strong earthquake event. The paper would compile the wide spectrum of modeling to analysis of the steel concrete composite truss bridges in general.Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
Procedia PDF Downloads 186
1624 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study
Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos
Abstract:
This article presents a method to analyze the use of indoor spaces based on data analytics obtained from inbuilt digital devices. The study uses the data generated by the in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights on space occupancy, user behaviour, and comfort. Those devices, originally installed to facilitate remote operations, report data through the internet that the research uses to analyze information on human real-time use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution to analyze building interior spaces without incorporating external data collection systems such as sensors. The methodology is applied to a real case study of coliving, a residential building of 3000m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify IoT devices, assess, clean, and analyze collected data based on the analysis framework. The information is collected remotely, through the different platforms devices' platforms; the first step is to curate the data, understand what insights can be provided from each device according to the objectives of the study, this generates an analysis framework to be escalated for future building assessment even beyond the residential sector. The method will adjust the parameters to be analyzed tailored to the dataset available in the IoT of each building. The research demonstrates how human-centered data analytics can improve the future spatial design of indoor spaces.Keywords: in-place devices, IoT, human-centred data-analytics, spatial design
Procedia PDF Downloads 198
1623 The Effect of Filter Cake Powder on Soil Stability Enhancement in Active Sand Dunes, In the Long and Short Term
Authors: Irit Rutman Halili, Tehila Zvulun, Natali Elgabsi, Revaya Cohen, Shlomo Sarig
Abstract:
Active sand dunes (ASD) may cause significant damage to field crops and livelihood, and therefore, it is necessary to find a treatment that would enhance ADS soil stability. Biological soil crusts (biocrusts) contain microorganisms on the soil surface. Metabolic polysaccharides secreted by biocrust cyanobacteria glue the soil particles into aggregates, thereby stabilizing the soil surface. Filter cake powder (FCP) is a waste by-product in the final stages of the production of sugar from sugarcane, and its disposal causes significant environmental pollution. FCP contains high concentrations of polysaccharides and has recently been shown to be soil stability enhancing agent in ASD. It has been reported that adding FCP to the ASD soil surface by dispersal significantly increases the level of penetration resistance of soil biocrust (PRSB) nine weeks after a single treatment. However, it was not known whether a similar effect could be obtained by administering the FCP in liquid form by means of spraying. It has now been found that spraying a water solution of FCP onto the ASD soil surface significantly increased the level of penetration resistance of soil biocrust (PRSB) three weeks after a single treatment. These results suggest that FCP spraying can be used as a short-term soil stability-enhancing agent for ASD, while administration by dispersal might be more efficient over the long term. Finally, an additional benefit of using FCP as a soil stabilizer, either by dispersal or by spraying, is the reduction in environmental pollution that would otherwise result from the disposal of FCP solid waste.Keywords: active sand dunes, filter cake powder, biological soil crusts, penetration resistance of soil biocrust
Procedia PDF Downloads 168
1622 Modeling of Drug Distribution in the Human Vitreous
Authors: Judith Stein, Elfriede Friedmann
Abstract:
The injection of a drug into the vitreous body for the treatment of retinal diseases like wet aged-related macular degeneration (AMD) is the most common medical intervention worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with a dense collagen network and include the steady permeating flow of the aqueous humor described by Darcy's law driven by a pressure drop. Additionally, the vitreous body in a healthy human eye behaves like a viscoelastic gel through the collagen fibers suspended in the network of hyaluronic acid and acts as a drug depot for the treatment of retinal diseases. In a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous considering the permeating aqueous humor flow in the realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions that describe the eye with the Galerkin's method using Cauchy-Schwarz inequality and trace theorem. Because there is only a small effective drug concentration range and higher concentrations may be toxic, the ability to model the drug transport could improve the therapy by considering patient individual differences and give a better understanding of the physiological and pathological processes in the vitreous.Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body
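In standard notation, the coupled model described for the healthy (porous, permeated) vitreous combines Darcy flow of the aqueous humor with a convection-diffusion equation for the drug concentration c (K permeability, μ viscosity, D diffusivity); in the fully liquefied case the Darcy equations are replaced by the incompressible Navier-Stokes equations:

```latex
\mathbf{v} = -\frac{K}{\mu}\,\nabla p,
\qquad
\nabla \cdot \mathbf{v} = 0,
\qquad
\frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c = D\,\Delta c ,
```

subject to the mixed boundary conditions on the eye geometry discussed above.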
Procedia PDF Downloads 139
1621 Impact of Map Generalization in Spatial Analysis
Authors: Lin Li, P. G. R. N. I. Pussella
Abstract:
When representing spatial data and their attributes on different types of maps, the scale plays a key role in the process of map generalization. The process is consisted with two main operators such as selection and omission. Once some data were selected, they would undergo of several geometrical changing processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation and size reduction. As a result of these operations at different levels of data, the geometry of the spatial features such as length, sinuosity, orientation, perimeter and area would be altered. This would be worst in the case of preparation of small scale maps, since the cartographer has not enough space to represent all the features on the map. What the GIS users do is when they wanted to analyze a set of spatial data; they retrieve a data set and does the analysis part without considering very important characteristics such as the scale, the purpose of the map and the degree of generalization. Further, the GIS users use and compare different maps with different degrees of generalization. Sometimes, GIS users are going beyond the scale of the source map using zoom in facility and violate the basic cartographic rule 'it is not suitable to create a larger scale map using a smaller scale map'. In the study, the effect of map generalization for GIS analysis would be discussed as the main objective. It was used three digital maps with different scales such as 1:10000, 1:50000 and 1:250000 which were prepared by the Survey Department of Sri Lanka, the National Mapping Agency of Sri Lanka. It was used common features which were on above three maps and an overlay analysis was done by repeating the data with different combinations. Road data, River data and Land use data sets were used for the study. A simple model, to find the best place for a wild life park, was used to identify the effects. The results show remarkable effects on different degrees of generalization processes. It can see that different locations with different geometries were received as the outputs from this analysis. The study suggests that there should be reasonable methods to overcome this effect. It can be recommended that, as a solution, it would be very reasonable to take all the data sets into a common scale and do the analysis part.Keywords: generalization, GIS, scales, spatial analysis
Procedia PDF Downloads 330
1620 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads with power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation mainly because filters offer high efficiency, simplicity, and are economical. Additionally, possible different frequency response characteristics can work to achieve certain required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine what size single tuned passive filters work in distribution networks best, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltage and harmonic currents in the power system to an acceptable level, and, thus, improve the load power factor. The optimization technique works to minimize voltage total harmonic distortions (VTHD) and current total harmonic distortions (ITHD), where maintaining a given power factor at a specified range is desired. According to the IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.Keywords: harmonics, passive filter, power factor, power quality
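A minimal sketch of the textbook sizing relations for a single-tuned shunt filter, using assumed bus voltage, fundamental reactive power and quality factor (the constrained optimisation in the paper additionally minimises VTHD and ITHD while maintaining the power factor, with IEEE 519 limits as constraints; those steps are not shown here):

```python
import math

def single_tuned_filter(V_ll, Q_var, h, q_factor=50.0, f=50.0):
    """Size C, L, R of a single-tuned shunt filter.

    V_ll: line-to-line voltage (V), Q_var: reactive power supplied at the
    fundamental (var, approximated by V^2/Xc), h: tuned harmonic order,
    q_factor: filter quality factor, f: fundamental frequency (Hz).
    """
    w = 2 * math.pi * f
    Xc = V_ll**2 / Q_var            # capacitive reactance at fundamental
    C = 1 / (w * Xc)
    Xl = Xc / h**2                  # tuning condition h^2 = Xc / Xl
    L = Xl / w
    X0 = h * Xl                     # characteristic reactance at the tuned frequency
    R = X0 / q_factor
    return C, L, R

# Assumed example: 400 V bus, 50 kvar filter tuned near the 5th harmonic.
C, L, R = single_tuned_filter(V_ll=400.0, Q_var=50e3, h=5)
print(f"C = {C*1e6:.0f} uF, L = {L*1e3:.3f} mH, R = {R:.4f} ohm")
```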
Procedia PDF Downloads 308
1619 Developing an Edutainment Game for Children with ADHD Based on SAwD and VCIA Model
Authors: Bruno Gontijo Batista
Abstract:
This paper analyzes how the Socially Aware Design (SAwD) and the Value-oriented and Culturally Informed Approach (VCIA) design model can be used to develop an edutainment game for children with Attention Deficit Hyperactivity Disorder (ADHD). The SAwD approach seeks a design that considers new dimensions in human-computer interaction, such as culture, aesthetics, emotional and social aspects of the user's everyday experience. From this perspective, the game development was VCIA model-based, including the users in the design process through participatory methodologies, considering their behavioral patterns, culture, and values. This is because values, beliefs, and behavioral patterns influence how technology is understood and used and the way it impacts people's lives. This model can be applied at different stages of design, which goes from explaining the problem and organizing the requirements to the evaluation of the prototype and the final solution. Thus, this paper aims to understand how this model can be used in the development of an edutainment game for children with ADHD. In the area of education and learning, children with ADHD have difficulties both in behavior and in school performance, as they are easily distracted, which is reflected both in classes and on tests. Therefore, they must perform tasks that are exciting or interesting for them, once the pleasure center in the brain is activated, it reinforces the center of attention, leaving the child more relaxed and focused. In this context, serious games have been used as part of the treatment of ADHD in children aiming to improve focus and attention, stimulate concentration, as well as be a tool for improving learning in areas such as math and reading, combining education and entertainment (edutainment). Thereby, as a result of the research, it was developed, in a participatory way, applying the VCIA model, an edutainment game prototype, for a mobile platform, for children between 8 and 12 years old.Keywords: ADHD, edutainment, SAwD, VCIA
Procedia PDF Downloads 192
1618 Photocatalytic Degradation of Methylene Blue Dye Using Cuprous Oxide/Graphene Nanocomposite
Authors: Bekan Bogale, Tsegaye Girma Asere, Tilahun Yai, Fekadu Melak
Abstract:
Aims: To study photocatalytic degradation of methylene blue dye on cuprous oxide/graphene nanocomposite. Background: Cuprous oxide (Cu2O) nanoparticles are among the metal oxides that demonstrated photocatalytic activity. However, the stability of Cu2O nanoparticles due to the fast recombination rate of electron/hole pairs remains a significant challenge in their photocatalytic applications. This, in turn, leads to mismatching of the effective bandgap separation, tending to reduce the photocatalytic activity of the desired organic waste (MB). To overcome these limitations, graphene has been combined with cuprous oxides, resulting in cuprous oxide/graphene nanocomposite as a promising photocatalyst. Objective: In this study, Cu2O/graphene nanocomposite was synthesized and evaluated for its photocatalytic performance of methylene blue (MB) dye degradation. Method: Cu2O/graphene nanocomposites were synthesized from graphite powder and copper nitrate using the facile sol-gel method. Batch experiments have been conducted to assess the applications of the nanocomposites for MB degradation. Parameters such as contact time, catalyst dosage, and pH of the solution were optimized for maximum MB degradation. The prepared nanocomposites were characterized by using UV-Vis, FTIR, XRD, and SEM. The photocatalytic performance of Cu2O/graphene nanocomposites was compared against Cu2O nanoparticles for cationic MB dye degradation. Results: Cu2O/graphene nanocomposite exhibits higher photocatalytic activity for MB degradation (with a degradation efficiency of 94%) than pure Cu2O nanoparticles (67%). This has been accomplished after 180 min of irradiation under visible light. The kinetics of MB degradation by Cu2O/graphene composites can be demonstrated by the second-order kinetic model. The synthesized nanocomposite can be used for more than three cycles of photocatalytic MB degradation. Conclusion: This work indicated new insights into Cu2O/graphene nanocomposite as high-performance in photocatalysis to degrade MB, playing a great role in environmental protection in relation to MB dye.Keywords: methylene blue, photocatalysis, cuprous oxide, graphene nanocomposite
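For reference, the degradation efficiency and the integrated second-order rate law commonly used to report MB photodegradation kinetics are (C₀ and C_t are the MB concentrations before irradiation and at time t, and k₂ is the second-order rate constant):

```latex
\eta\,(\%) = \frac{C_0 - C_t}{C_0}\times 100,
\qquad
\frac{1}{C_t} = \frac{1}{C_0} + k_2\, t .
```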
Procedia PDF Downloads 191
1617 An In-Depth Experimental Study of Wax Deposition in Pipelines
Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.
Abstract:
Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevents wax deposition: including chemical species in crude oils, flowrates, pipes diameters and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process to achieve a better understanding on wax deposition issues. Laboratory experiments were performed on a medium size, 1 inch diameter, wax deposition loop of 15 mts long equipped with a solid detector system, online microscope to visualize crystals, temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no paraffin or additive content. Tests were undertaken with different temperatures of circulating and cooling fluid at different flow conditions. Then, a solution formed with a paraffin added to the diesel was considered. Tests varying flowrate and cooling rate were again run. Viscosity, density, WAT (Wax Appearance Temperature) with DSC (Differential Scanning Calorimetry), pour point and cold finger measurements were carried out to determine physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios temperature and pressure loop output signals were studied. They were compared with WAT static laboratory methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of the process of wax deposition in the loop.Keywords: paraffin desposition, flow assurance, chemical inhibitors, flow loop
Procedia PDF Downloads 106
1616 Research of Actuators of Common Rail Injection Systems with the Use of LabVIEW on a Specially Designed Test Bench
Authors: G. Baranski, A. Majczak, M. Wendeker
Abstract:
Currently, the most commonly used solution to provide fuel to the diesel engines is the Common Rail system. Compared to previous designs, as a due to relatively simple construction and electronic control systems, these systems allow achieving favourable engine operation parameters with particular emphasis on low emission of toxic compounds into the atmosphere. In this system, the amount of injected fuel dose is strictly dependent on the course of parameters of the electrical impulse sent by the power amplifier power supply system injector from the engine controller. The article presents the construction of a laboratory test bench to examine the course of the injection process and the expense in storage injection systems. The test bench enables testing of injection systems with electromagnetically controlled injectors with the use of scientific engineering tools. The developed system is based on LabView software and CompactRIO family controller using FPGA systems and a real time microcontroller. The results of experimental research on electromagnetic injectors of common rail system, controlled by a dedicated National Instruments card, confirm the effectiveness of the presented approach. The results of the research described in the article present the influence of basic parameters of the electric impulse opening the electromagnetic injector on the value of the injected fuel dose. Acknowledgement: This work has been realized in the cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A.’ and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.Keywords: fuel injector, combustion engine, fuel pressure, compression ignition engine, power supply system, controller, LabVIEW
Procedia PDF Downloads 131
1615 Analyzing the Influence of Hydrometeorlogical Extremes, Geological Setting, and Social Demographic on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main research objective is to accurately identify the likelihood and severity of a leptospirosis outbreak in a given area based on the input features of a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by these features, such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then the epidemic severity of an area will differ depending on its environmental setting, because the features influence both the possibility and the severity of an outbreak. Specifically, the research objective was three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with machine learning algorithms. Several secondary data features were collected for locations in the state of Negeri Sembilan, Malaysia, based on their likely relevance for determining outbreak severity in the area. The relevant features then become inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between the social demographic and hydrometeorological features and the leptospirosis bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can help health departments or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring. Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
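A minimal sketch of the kind of multivariate regression described, using an invented feature table (rainfall, temperature, population density, flood count) and a synthetic outcome as stand-ins for the hydrometeorological and socio-demographic data actually collected for Negeri Sembilan:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 120  # hypothetical area-month records

df = pd.DataFrame({
    "rainfall_mm":  rng.gamma(5, 40, n),
    "mean_temp_c":  rng.normal(27, 1.5, n),
    "pop_density":  rng.uniform(50, 2000, n),
    "flood_events": rng.poisson(1.0, n),
})
# Synthetic outcome: leptospirosis cases per area (for illustration only)
df["cases"] = (0.02 * df.rainfall_mm + 2.0 * df.flood_events
               + 0.002 * df.pop_density + rng.normal(0, 3, n)).clip(0)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="cases"), df["cases"], test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(dict(zip(X_train.columns, model.coef_.round(3))))
print("R^2 on held-out areas:", round(r2_score(y_test, model.predict(X_test)), 3))
```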
Procedia PDF Downloads 117
1614 Islamic Banking: A New Trend towards the Development of Banking Law
Authors: Inese Tenberga
Abstract:
Undoubtedly, the focus of the present capitalist system of finance has shifted from the concept of productivity of money to the ‘cult of money’, which is characterized by such notions as speculative activity, squander, self-profit, vested interest, etc. The author is certain that a civilized society cannot follow this economic path any longer and therefore suggests that one solution would be to integrate the Islamic financial model in the banking sector of the EU to overcome its economic vulnerability and structurally transform its economies or build resilience against shocks and crisis. The researcher analyses the Islamic financial model, which is providing the basis for the concept of non-productivity of money, and proposes to consider it as a new paradigm of economic thinking. The author argues that it seeks to establish a broad-based economic well-being with an optimum rate of economic growth, socio-economic justice, equitable distribution of income and wealth. Furthermore, the author analyses and proposes to use the experience of member states of the Islamic Development Bank for the formation of a new EU interest free banking. It is offered to create within the EU banking system a credit sector and investment sector respectively. As a part of the latter, it is recommended to separate investment banks specializing in speculative investments and nonspeculative investment banks. Meanwhile, understanding of the idea of Islamic banking exclusively from the perspective of the manner of yielding profit that differs from credit banking, without considering the legal, social, ethical guidelines of Islam impedes to value objectively the advantages of this type of financial activities at the non-Islamic jurisdictions. However, the author comes to the conclusion the imperative of justice and virtue, which is inherent to all of us, exists regardless of religion. The author concludes that the global community should adopt the experience of the Muslim countries and focus on the Islamic banking model.Keywords: credit sector, EU banking system, investment sector, Islamic banking
Procedia PDF Downloads 177
1613 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial purposes. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods cannot be expected to perform remarkably well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting and difficult jargon. Keywords: abstractive summarization, deep learning, natural language processing, patent document
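As an illustration only, since the abstract does not specify the paper's model architecture, the sketch below uses a pre-trained sequence-to-sequence transformer from the Hugging Face transformers library to produce an abstractive summary of a short, invented patent-style passage; a production system for patents would require fine-tuning on patent corpora.

```python
# Sketch of abstractive summarization with a pre-trained seq2seq transformer.
# The model choice (BART fine-tuned on CNN/DailyMail) is illustrative, not the
# system described in the paper; the input text is an invented example.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

patent_text = (
    "A fastening apparatus comprising a first elongate member having a plurality "
    "of engagement teeth disposed along a longitudinal edge thereof, and a second "
    "member configured to releasably engage said teeth such that relative motion "
    "between the members is permitted in a first direction and inhibited in a "
    "second, opposite direction."
)

summary = summarizer(patent_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```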
Procedia PDF Downloads 123
1612 Surface-Enhanced Raman Spectroscopy-Based Detection of SARS-CoV-2 Through In Situ One-pot Electrochemical Synthesis of 3D Au-Lysate Nanocomposite Structures on Plasmonic Au Electrodes
Authors: Ansah Iris Baffour, Dong-Ho Kim, Sung-Gyu Park
Abstract:
The ongoing COVID-19 pandemic, caused by the SARS-CoV-2 virus, is gradually shifting to an endemic phase, which implies that the outbreak is far from over and will be difficult to eradicate. Global cooperation has led to unified precautions that aim to suppress epidemiological spread (e.g., through travel restrictions) and reach herd immunity (through vaccination); however, the primary strategy to restrain the spread of the virus in mass populations relies on screening protocols that enable rapid on-site diagnosis of infections. Herein, we employed surface-enhanced Raman spectroscopy (SERS) for the rapid detection of SARS-CoV-2 lysate on an Au-modified Au nanodimple (AuND) electrode. Through in situ one-pot Au electrodeposition on the AuND electrode, Au-lysate nanocomposites were synthesized, generating 3D internal hotspots for large SERS signal enhancements within 30 s of the deposition. The capture of lysate into newly generated plasmonic nanogaps within the nanocomposite structures enhanced metal-spike protein contact in 3D space, and these nanogaps served as hotspots for sensitive detection. The limit of detection of SARS-CoV-2 lysate was 5 × 10⁻² PFU/mL. Interestingly, ultrasensitive detection of the lysates of influenza A/H1N1 and respiratory syncytial virus (RSV) was also possible, but the method showed ultimate selectivity for SARS-CoV-2 in lysate solution mixtures. We investigated the practical application of the approach for rapid on-site diagnosis by detecting SARS-CoV-2 lysate spiked into normal human saliva at ultralow concentrations. The results presented demonstrate the reliability and sensitivity of the assay for rapid diagnosis of COVID-19. Keywords: label-free detection, nanocomposites, SARS-CoV-2, surface-enhanced Raman spectroscopy
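The abstract does not state how the 5 × 10⁻² PFU/mL detection limit was derived; one common way to estimate a limit of detection from a calibration curve is the 3-sigma criterion, sketched below on entirely synthetic intensities chosen only to illustrate the arithmetic, not to reproduce the study's data.

```python
# Sketch: 3-sigma limit-of-detection estimate from a (synthetic) SERS calibration
# curve of signal intensity vs. viral concentration. All numbers are invented.
import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])                      # PFU/mL (synthetic)
intensity = np.array([180.0, 260.0, 900.0, 1700.0, 8200.0, 16300.0])   # a.u. (synthetic)
blank_sd = 30.0                                                         # std. dev. of blank signal (synthetic)

# Linear calibration: intensity = slope * concentration + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

# 3-sigma criterion: LOD = 3 * sigma_blank / slope
lod = 3 * blank_sd / slope
print(f"estimated LOD ~ {lod:.3f} PFU/mL")
```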
Procedia PDF Downloads 123
1611 Evaluating the Effectiveness of Mesotherapy and Topical 2% Minoxidil for Androgenic Alopecia in Females, Using Topical 2% Minoxidil as a Common Treatment
Authors: Hamed Delrobai Ghoochan Atigh
Abstract:
Androgenic alopecia (AGA) is a common form of hair loss, impacting approximately 50% of females, which leads to reduced self-esteem and quality of life. It causes progressive follicular miniaturization in genetically predisposed individuals. Mesotherapy (a minimally invasive procedure), topical 2% minoxidil, and oral finasteride have emerged as popular treatment options in the realm of cosmetics. However, the efficacy of mesotherapy compared to other options remains unclear. This study aims to assess the effectiveness of mesotherapy when it is added to topical 2% minoxidil treatment for female androgenic alopecia. Mesotherapy, also known as intradermotherapy, is a technique that entails administering multiple intradermal injections of a carefully composed mixture of compounds in low doses, applied at various points in close proximity to or directly over the affected areas. This study involves a randomized controlled trial with 100 female participants diagnosed with androgenic alopecia. The subjects were randomly assigned to two groups: Group A used topical 2% minoxidil twice daily and took an oral finasteride tablet. For Group B, 10 mesotherapy sessions were added to this treatment. The injections were administered every week in the first month of treatment, every two weeks in the second month, and monthly thereafter for four consecutive months. Response assessment was made at baseline, at the 4th session, and finally after 6 months when the treatment was complete. Clinical photographs, a 7-point Likert scale patient self-evaluation, and a 7-point Likert scale assessment tool were used to measure the effectiveness of the treatment. During this evaluation, a significant and visible improvement in hair density and thickness was observed. The study demonstrated a significant increase in treatment efficacy in Group B compared to Group A post-treatment, with no adverse effects. Based on the findings, it appears that mesotherapy offers a significant additional improvement in female AGA over minoxidil alone. Hair loss stopped in Group B after one month, and improvement in hair density and thickness was observed after the third month. The findings from this study provide valuable insights into the efficacy of mesotherapy in treating female androgenic alopecia. Our evaluation offers a detailed assessment of hair growth parameters, enabling a better understanding of the treatments' effectiveness. The potential of this promising technique is significantly enhanced when it is carried out in a medical facility, guided by appropriate indications and skillful execution. An interesting observation in our study is that in areas where the hair had turned grey, the newly regrown hair does not retain its original grey color; instead, it grows back darker. The results contribute to evidence-based decision-making in dermatological practice and offer new insights into the treatment of female pattern hair loss. Keywords: androgenic alopecia, female hair loss, mesotherapy, topical 2% minoxidil
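The abstract does not name the statistical test used to compare the two arms; a common choice for ordinal 7-point Likert outcomes in a two-group trial is a non-parametric rank test. The sketch below applies a Mann-Whitney U test to invented scores purely to illustrate such a comparison; it is not the analysis reported in the paper.

```python
# Sketch: comparing 7-point Likert improvement scores between the two treatment
# arms with a Mann-Whitney U test. Scores are invented; the paper does not state
# which statistical test was actually used.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
group_a = rng.integers(3, 6, size=50)   # minoxidil + finasteride (hypothetical scores on 1-7 scale)
group_b = rng.integers(4, 8, size=50)   # same treatment + mesotherapy (hypothetical scores)

# One-sided test: is Group B's improvement stochastically greater than Group A's?
stat, p_value = mannwhitneyu(group_b, group_a, alternative="greater")
print(f"U = {stat:.1f}, one-sided p = {p_value:.4f}")
```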
Procedia PDF Downloads 103
1610 Imaging 255nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics
Authors: A. Abbas, X. Tridon, J. Michelon
Abstract:
In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin-film layers ranging from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges these industries face in avoiding failure of their final products. Traditional pump-probe experiments, developed in the 1970s, offer a partial solution to this problem, but with a non-negligible drawback. On one hand, these setups can generate and detect ultra-high ultrasound frequencies that can be used to evaluate the adhesion quality of wafer layers. On the other hand, because of the quite long acquisition time needed to perform a single measurement, these setups remain restricted to point measurements when evaluating overall sample quality. This can lead to misinterpretation of the sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale to detect inhomogeneities in sample mechanical properties. This will be illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm tungsten thin-film layer deposited on a silicon substrate. Interpretation of the reflection coefficient in terms of bonding and adhesion quality will also be presented. The origin of zones exhibiting good and poor bonding quality will be discussed. Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film
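For context on what the mapped reflection coefficients mean, the normal-incidence acoustic reflection coefficient at a perfectly bonded tungsten/silicon interface follows from the two acoustic impedances, r = (Z2 − Z1)/(Z2 + Z1) with Z = density × longitudinal velocity; a degraded bond shifts the measured value away from this ideal. The sketch below uses textbook material constants, which are approximations and not values taken from the paper.

```python
# Sketch: ideal normal-incidence acoustic reflection coefficient at a W/Si
# interface, r = (Z2 - Z1) / (Z2 + Z1) with Z = density * longitudinal velocity.
# Material constants are textbook approximations, not values from the paper.
rho_w, v_w = 19300.0, 5220.0    # tungsten: kg/m^3, m/s (approximate)
rho_si, v_si = 2330.0, 8430.0   # silicon:  kg/m^3, m/s (approximate)

z_w = rho_w * v_w               # ~101 MRayl
z_si = rho_si * v_si            # ~20 MRayl

# Wave travelling in the tungsten film and reflecting off the substrate interface.
r_ideal = (z_si - z_w) / (z_si + z_w)
print(f"ideal reflection coefficient: {r_ideal:.2f}")
# Zones of the map whose measured coefficient deviates markedly from this ideal
# value can indicate a poorly adhering (degraded) interface in that region.
```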
Procedia PDF Downloads 159