Search results for: shape prediction
366 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a standard deviation technique by classes, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the characteristics of the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity of segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
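The post-segmentation measurement step described above (assigning distinct labels to crystals in a binary segmentation mask, then extracting area and perimeter per crystal) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 4-connectivity flood fill and the edge-counting perimeter definition are assumptions, and the tiny mask is made up.

```python
from collections import deque

def label_crystals(mask):
    """Label 4-connected foreground regions in a binary mask (list of lists of 0/1).
    Returns a label image (0 = background) and the number of regions found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                current += 1
                labels[sy][sx] = current
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

def measure(labels, n):
    """Area = pixel count; perimeter = number of pixel edges that border
    the background, another label, or the image boundary."""
    h, w = len(labels), len(labels[0])
    area = [0] * (n + 1)
    perim = [0] * (n + 1)
    for y in range(h):
        for x in range(w):
            lab = labels[y][x]
            if lab:
                area[lab] += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny][nx] != lab:
                        perim[lab] += 1
    return {k: (area[k], perim[k]) for k in range(1, n + 1)}

mask = [[0, 1, 1, 0, 0],
        [0, 1, 1, 0, 1],
        [0, 0, 0, 0, 1],
        [1, 0, 0, 0, 1]]
labels, n = label_crystals(mask)
stats = measure(labels, n)  # {label: (area, perimeter)} for each crystal
```

From such a per-crystal table, the frequency distributions by area and perimeter reported in the abstract follow directly.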
Procedia PDF Downloads 160
365 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution
Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi
Abstract:
Lead (Pb²⁺) is a prime constituent of the majority of industrial effluents, such as those from mining, smelting and coal combustion, Pb-based paints and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosive manufacturing, storage batteries, and alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). This is the accepted 'safe' level of lead(II) ions in water, beyond which the water becomes unfit for human use and consumption and can cause health problems such as kidney failure, neuronal disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are a class of hydrogels capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shape of an ordinary hydrogel does not change extensively during swelling, the shape of a superabsorbent changes broadly because of its tremendous swelling capacity. Because of their superb response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have attracted interest in numerous industrial applications, for instance those exploiting their water retention property. Natural-based superabsorbent hydrogels have attracted much attention in medicine, pharmaceuticals, baby diapers, agriculture, and horticulture because of their non-toxicity, biocompatibility, and biodegradability.
Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) as a crosslinking agent and potassium persulfate as an initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, and the adsorption of metal ions on poly(AAm-co-IA) was studied. The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal ion concentration in the solution and the solution pH were measured. The effects of the clay content of the hydrogel on its metal ion uptake behavior were studied. The NC hydrogels may be considered a good candidate for environmental applications to retain more water and to remove heavy metals.
Keywords: adsorption, hydrogel, nanocomposite, super adsorbent
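The gravimetric swelling measurement and the residual-concentration uptake measurement mentioned above reduce to three standard formulas. The sketch below is illustrative only; the sample weights, concentrations, solution volume, and adsorbent mass are invented, not values from the study.

```python
def swelling_ratio(w_swollen, w_dry):
    # Equilibrium swelling (g water per g dry gel): S = (Ws - Wd) / Wd
    return (w_swollen - w_dry) / w_dry

def removal_efficiency(c0, ce):
    # Percentage of metal ion removed from solution: (C0 - Ce) / C0 * 100
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    # qe (mg metal per g adsorbent) from the mass balance: (C0 - Ce) * V / m
    return (c0 - ce) * volume_l / mass_g

S = swelling_ratio(w_swollen=25.0, w_dry=0.5)          # hypothetical weights in g
R = removal_efficiency(c0=50.0, ce=4.0)                # hypothetical Pb2+ conc. in mg/L
q = adsorption_capacity(c0=50.0, ce=4.0, volume_l=0.1, mass_g=0.2)
```

These are the quantities a gravimetric swelling study and a batch adsorption experiment typically report against pH, clay content, or contact time.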
Procedia PDF Downloads 187
364 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc
Abstract:
Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters, like axial force, rotation speed, or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike in Friction Stir Welding (FSW), where abundant literature exists and addresses many aspects, from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and finally identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these phases are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces, through pure friction, the temperature rise of the system composed of the tool, the filler material, and the substrate. Analytic modeling of friction-based heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the surface roughness of the materials in contact over time.
This study proposes, through numerical modeling followed by experimental validation, to question the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters like axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
Keywords: numerical model, additive manufacturing, friction, process
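The analytic heating model discussed above can be illustrated with a minimal lumped sketch. One common analytic form in friction stir modelling assumes uniform contact pressure under a flat circular contact of radius R, giving a total frictional power Q = (2/3)·μ·F·ω·R; a single-mass adiabatic heat balance then bounds the dwell-phase temperature rise. The friction coefficient (held constant here, unlike in the study), forces, geometry, and thermal mass below are all illustrative assumptions, not the paper's values.

```python
import math

def friction_power(mu, axial_force, omega, radius):
    """Total frictional heating under a flat circular contact with uniform
    pressure: Q = (2/3) * mu * F * omega * R  [W], with omega in rad/s."""
    return (2.0 / 3.0) * mu * axial_force * omega * radius

def dwell_temperature(power, mass, c_p, t, T0=20.0):
    """Adiabatic lumped-capacitance estimate after time t:
    T = T0 + Q * t / (m * c_p). Ignores conduction losses (upper bound)."""
    return T0 + power * t / (mass * c_p)

mu = 0.3                          # assumed constant friction coefficient
F = 5000.0                        # axial force [N] (illustrative)
omega = 2 * math.pi * 1000 / 60   # 1000 rpm converted to rad/s
R = 0.01                          # contact radius [m] (illustrative)

Q = friction_power(mu, F, omega, R)                      # frictional power [W]
T = dwell_temperature(Q, mass=0.05, c_p=900.0, t=2.0)    # 50 g aluminium lump, 2 s dwell
```

Even this crude balance shows why fluctuations in axial force and rotation speed propagate directly into the temperature reached, which is the sensitivity the study investigates numerically.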
Procedia PDF Downloads 147
363 Properties of the CsPbBr₃ Quantum Dots Treated by O₃ Plasma for Integration in the Perovskite Solar Cell
Authors: Sh. Sousani, Z. Shadrokh, M. Hofbauerová, J. Kollár, M. Jergel, P. Nádaždy, M. Omastová, E. Majková
Abstract:
Perovskite quantum dots (PQDs) have the potential to increase the performance of perovskite solar cells (PSCs). The integration of PQDs into PSCs can extend the absorption range and enhance photon harvesting and device efficiency. In addition, PQDs can stabilize the device structure by passivating surface defects and traps in the perovskite layer, enhancing its stability. The integration of PQDs into PSCs is strongly affected by the type of ligands on the surface of the PQDs. The ligands affect the charge transport properties of the PQDs, as well as the formation of well-defined interfaces and the stability of the PSCs. In this work, CsPbBr₃ QDs were synthesized by the conventional hot-injection method using cesium oleate, PbBr₂, and two different ligand systems, namely oleic acid with oleylamine (OA/OAm) and didodecyldimethylammonium bromide (DDAB). STEM confirmed the regular shape and relatively monodisperse cubic structure of the prepared CsPbBr₃ QDs, with an average size of about 10-14 nm. Further, the photoluminescence (PL) properties of the PQDs/perovskite bilayer with the OA/OAm and DDAB ligands were studied. For this purpose, ITO/PQDs as well as ITO/PQDs/MAPI perovskite structures were prepared by spin coating, and the effect of the ligand and of oxygen plasma treatment was analyzed. The plasma treatment of the PQDs layer could be beneficial for the deposition of the MAPI perovskite layer and the formation of a well-defined PQDs/MAPI interface. The absorption edge in the UV-Vis absorption spectra for OA/OAm CsPbBr₃ QDs is placed around 513 nm (band gap 2.38 eV); for DDAB CsPbBr₃ QDs, it is located at 490 nm (band gap 2.33 eV). The photoluminescence (PL) spectra of the CsPbBr₃ QDs show two peaks, located around 514 nm (503 nm) and 718 nm (708 nm) for OA/OAm (DDAB). The peak around 500 nm corresponds to the PL of the PQDs, and the peak close to 710 nm belongs to the surface states of the PQDs for both types of ligands. These surface states are strongly affected by the O₃ plasma treatment.
For PQDs with the DDAB ligand, O₃ exposure (5, 10, 15 s) results in a blue shift of the PQDs peak and a non-monotonous change in the amplitude of the surface states' peak. For the OA/OAm ligand, the O₃ exposure did not cause any shift of the PQDs peak, and the intensity of the PL peak related to the surface states is lower by one order of magnitude in comparison with DDAB, though it is still affected by the O₃ plasma treatment. The PL results indicate the possibility of tuning the position of the PL maximum by the ligand of the PQDs. Similar behavior of the PQDs layer was observed for the ITO/QDs/MAPI samples, where an additional strong PL peak at 770 nm coming from the perovskite layer was observed; for the sample with PQDs with DDAB ligands, a small blue shift of the perovskite PL maximum was observed independently of the plasma treatment. These results suggest the possibility of affecting the PL maximum position and the surface states of the PQDs by the combination of a suitable ligand and the O₃ plasma treatment.
Keywords: perovskite quantum dots, photoluminescence, O₃ plasma, perovskite solar cells
Procedia PDF Downloads 64
361 First-Trimester Screening of Preeclampsia in a Routine Care
Authors: Tamar Grdzelishvili, Zaza Sinauridze
Abstract:
Introduction: Preeclampsia is a complication of pregnancy characterized by high morbidity and multiorgan damage. Many complex pathogenic mechanisms are now implicated as responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide; about 100,000 women die of it every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity, and geographical region), in 75% of cases in a mild form and in 25% in a severe form. With severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are paramount. Identifying pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven effective, with screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: This was a retrospective single-centre screening study. The study population consisted of women from the Tbilisi maternity hospital "Pineo medical ecosystem" who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11-13 weeks of pregnancy.
The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited; during the study, 51 women were diagnosed with preeclampsia (34.5% among high-risk and 6.5% among low-risk pregnant women; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful to predict PE in a routine care setting. Larger patient studies are needed before final conclusions can be drawn; the research is still ongoing.
Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein
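The Bayes-theorem combination of a prior risk (from maternal characteristics and history) with marker measurements can be illustrated schematically. This is not the FMF algorithm itself, which uses a competing-risks survival model over marker MoM values; it is a toy posterior-odds update, and the prior risk and likelihood ratios below are invented for illustration.

```python
def to_odds(p):
    # Convert a probability to odds.
    return p / (1.0 - p)

def to_prob(odds):
    # Convert odds back to a probability.
    return odds / (1.0 + odds)

def posterior_risk(prior_prob, likelihood_ratios):
    """Bayes update on the odds scale: posterior odds = prior odds * LR1 * LR2 * ...
    Treats the markers as conditionally independent, a simplification."""
    odds = to_odds(prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return to_prob(odds)

# Hypothetical case: 1-in-100 prior risk from maternal history, combined with
# invented likelihood ratios for raised MAP, raised UtA-PI, and low PlGF.
risk = posterior_risk(0.01, [3.0, 2.5, 4.0])
```

Comparing the resulting posterior risk against a cutoff is what splits a screened population into the high-risk and low-risk groups reported in the results.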
Procedia PDF Downloads 77
360 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan
Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva
Abstract:
Planned preventive vaccination, carried out in the Republic of Kazakhstan, has promoted a sustained decrease in the incidence of measles and viral hepatitis B (VHB). Among VHB patients, people of young, working age prevail. Monitoring of infectious incidence, monitoring of immunization coverage of the population, and random serological control over immunity enable well-timed identification of the spread of the pathogen, assessment of the effectiveness of the measures taken, and forecasting. Serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan to identify antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by enzyme-linked assay (ELA) with the "VektoKor" IgG test system ('Vektor-Best' JSC). Antibodies to the HBs antigen of the hepatitis B virus in blood serum were identified by ELA with the VektoHBsAg antibody test system ('Vektor-Best' JSC). A result is considered positive if the concentration of IgG to the measles virus in the studied sample is 0.18 IU/ml or more; the protective concentration of anti-HBsAg is 10 mIU/ml. The study of postvaccinal measles immunity showed that seropositive people made up 87.7% of the total number surveyed. The level of postvaccinal immunity to measles differs between age groups. Among people older than 56, the percentage of seropositive individuals was 95.2%. Among people aged 15-25, 87.0% were seropositive, and at age 36-45, 86.6%. In the age groups 25-35 and 36-45, the share of seropositive people was approximately at the same level: 88.5% and 88.8%, respectively. The share of people seronegative to the measles virus was 12.3%. The biggest share of seronegative people was found among those aged 36-45 (13.4%) and 15-25 (13.0%).
The analysis of the examined people for postvaccinal immunity to viral hepatitis B showed that only 33.5% of all surveyed have a protective anti-HBsAg concentration of 10 mIU/ml or more. The biggest share of people protected from the VHB virus is observed in the 36-45 age group, at 60%. In the indicator group above 56, seropositive people made up 4.8%. A high percentage of seronegative people was observed in all studied age groups, ranging from 40.0% to 95.2%. The group least protected from contracting VHB is people above 56 (95.2% seronegative). The probability of contracting VHB is also high among young people aged 25-35, where the percentage of seronegative people was 80%. Thus, the results of this research testify to the need for serological monitoring of postvaccinal immunity for operational assessment of the epidemiological situation, early identification of its changes, and prediction of approaching danger.
Keywords: antibodies, blood serum, immunity, immunoglobulin
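Seroprevalence shares like those above carry sampling uncertainty that a serological monitoring programme would normally quantify. A minimal sketch computes a seropositive proportion with a Wilson score 95% confidence interval; the per-group counts are invented, since the abstract reports only percentages.

```python
import math

def wilson_ci(positive, n, z=1.96):
    """Point estimate and Wilson score 95% confidence interval for a proportion.
    More reliable than the Wald interval for proportions near 0 or 1."""
    p = positive / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# Hypothetical group: 175 of 200 surveyed are measles-seropositive (87.5%).
share, lo, hi = wilson_ci(175, 200)
```

Reporting the interval alongside the share makes clear whether differences between age groups (e.g. 86.6% vs 88.8%) are distinguishable at the given sample sizes.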
Procedia PDF Downloads 255
359 The Relationship between Anthropometric Obesity Indices and Insulin in Children with Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The number of indices developed for the evaluation of obesity in both adult and pediatric populations is ever increasing. These indices are also used in cases with metabolic syndrome (MetS), mostly the ultimate form of morbid obesity. Aside from anthropometric measurements, formulas constituted from these parameters also find clinical use. These formulas fall into two groups: weight-dependent and weight-independent. Some are extremely sophisticated equations whose utility in routine clinical practice is questionable. The aim of this study is to compare presently available obesity indices and find the most practical one. Their associations with MetS components were also investigated to determine their capacity for the differential diagnosis of morbid obesity with and without MetS. Children with normal body mass index (N-BMI) and morbid obesity were recruited for this study, and three groups were constituted. Age- and sex-dependent BMI percentiles for morbidly obese (MO) children were above 99 according to World Health Organization tables. Of them, those with MetS findings were evaluated as the MetS group. Children whose values were between the 15th and 85th percentiles were included in the N-BMI group. The study protocol was approved by the Ethics Committee of the Institution, and parents filled out informed consent forms to participate in the study. Anthropometric measurements and blood pressure values were recorded. Body mass index, hip index (HI), conicity index (CI), triponderal mass index (TPMI), body adiposity index (BAI), a body shape index (ABSI), body roundness index (BRI), abdominal volume index (AVI), waist-to-hip ratio (WHR), and (waist circumference + hip circumference)/2 ((WC+HC)/2) were the formulas examined within the scope of this study. Routine biochemical tests, including fasting blood glucose (FBG), insulin (INS), triglycerides (TRG), and high density lipoprotein-cholesterol (HDL-C), were performed. The statistical package program SPSS was used for the evaluation of study data.
p<0.05 was accepted as the level of statistical significance. Hip index did not differ among the groups. A statistically significant difference was noted between the N-BMI and MetS groups in terms of ABSI. All the other indices were capable of discriminating between the N-BMI-MO, N-BMI-MetS, and MO-MetS groups. No correlation was found between FBG and any obesity index in any group. The same was true for INS in the N-BMI group. Insulin was correlated with BAI, TPMI, CI, BRI, AVI, and (WC+HC)/2 in the MO group without MetS findings. In the MetS group, the only index correlated with INS was (WC+HC)/2. These findings point out that complicated formulas may not be required for the evaluation of the alterations among N-BMI and the various obesity groups, including MetS. The simple, easily computable, weight-independent index (WC+HC)/2 was unique because it was the only index exhibiting a valuable association with INS in the MetS group, and it did not exhibit any correlation with the other obesity indices showing associations with INS in the MO group. It was concluded that (WC+HC)/2 is a practical and valuable index for the discrimination of MO children with and without MetS findings.
Keywords: children, insulin, metabolic syndrome, obesity indices
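The weight-independent index singled out above, (WC+HC)/2, is trivial to compute alongside the other anthropometric ratios, and its association with insulin is just a Pearson correlation. The sketch below uses invented example measurements, not study data, to show both steps.

```python
def wc_hc_half(wc, hc):
    # (waist circumference + hip circumference) / 2, both in cm
    return (wc + hc) / 2.0

def whr(wc, hc):
    # Waist-to-hip ratio
    return wc / hc

def bmi(weight_kg, height_m):
    # Body mass index, kg/m^2
    return weight_kg / height_m**2

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from deviations about the means.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: (WC, HC) pairs in cm and fasting insulin (uIU/mL) for 5 children.
index_vals = [wc_hc_half(w, h) for w, h in [(60, 70), (72, 80), (85, 92), (95, 100), (105, 110)]]
insulin = [4.0, 6.5, 11.0, 15.0, 21.0]
r = pearson(index_vals, insulin)
```

A correlation like `r` is the quantity the study inspects per group to decide which index tracks insulin in the MO and MetS groups.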
Procedia PDF Downloads 77
358 Human Creativity through Dooyeweerd's Philosophy: The Case of Creative Diagramming
Authors: Kamaran Fathulla
Abstract:
Human creativity knows no bounds. More than a millennium ago, humans expressed their knowledge on cave walls and on clay artefacts. Visuals such as diagrams and paintings have always provided us with a natural and intuitive medium for expressing such creativity. Making sense of human-generated visualisation has been influenced by western scientific philosophies, which are often reductionist in nature. Theoretical frameworks such as those delivered by Peirce have dominated our views of how to make sense of visualisation, where a visual is seen as an emergent property of our thoughts. Others have reduced the richness of human-generated visuals to mere shapes drawn on a piece of paper or on a screen. This paper introduces an alternate framework in which the centrality of human functioning is given explicit and richer consideration through the multi-aspectual philosophical works of Herman Dooyeweerd. Dooyeweerd's framework for understanding reality was based on fifteen aspects of reality, each having a distinct core meaning; the totality of the aspects forms a 'rainbow'-like spectrum of meaning. The thesis of this approach is that meaningful human functioning in most cases involves the diversity of all aspects working in synergy and harmony. The foundations and applicability of this approach are illustrated through the case of humans' use of diagramming for creative purposes, particularly within an educational context. Diagrams play an important role in education. Students and lecturers use diagrams as a powerful tool to aid their thinking. However, research into the role of diagrams used in education continues to reveal difficulties students encounter during both interpretation and construction of diagrams. Three main problems shape students' difficulties with diagrams: the ever-increasing diversity of diagram types; the fact that most real-world diagrams mix these types (boxes and lines, bar charts, surfaces, routes, shapes dotted around the drawing area, and so on); and the distinct set of static and dynamic semantics that each type carries. We argue that the persistence of these problems is grounded in our existing ways of understanding diagrams, which are often reductionist in their underpinnings, driven by a single perspective or formalism. In this paper, we demonstrate the limitations of these approaches in dealing with the three problems. Consequently, we propose, discuss, and demonstrate the potential of a non-reductionist framework for understanding diagrams based on Symbolic and Spatial Mappings (SySpM), underpinned by Dooyeweerd's philosophy. The potential of the framework to account for the meaning of diagrams is demonstrated by applying it to a real-world physics diagram case study.
Keywords: SySpM, drawing style, mapping
Procedia PDF Downloads 238
357 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans that avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of a structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with transition probabilities needs to be developed on the basis of a condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate the transition probability matrices. The deterioration process of the structure over the transition states is modelled as a Markov chain, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences.
The initial curves are then modified to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
Keywords: condition grading, flood defence, performance assessment, stochastic deterioration modelling
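The condition-grade Markov model described above can be sketched in a few lines. The five-state yearly transition matrix below is invented for illustration (in the paper, its probabilities are fitted from inspection data and expert-judged deterioration curves); the sketch propagates the state distribution forward and estimates the probability of reaching the worst grade by Monte Carlo sampling.

```python
import random

# Hypothetical yearly transition matrix over condition grades 1 (best) to 5 (worst);
# each row sums to 1, and deterioration can only stay or move to a worse grade.
P = [
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # grade 5 is absorbing (failed)
]

def propagate(dist, P, years):
    """Push the state probability distribution forward: dist_{t+1} = dist_t . P"""
    for _ in range(years):
        dist = [sum(dist[i] * P[i][j] for i in range(5)) for j in range(5)]
    return dist

def mc_failure_prob(P, years, runs=20000, seed=1):
    """Monte Carlo estimate of the probability of being in grade 5 after `years`,
    starting from grade 1; should agree with the propagated distribution."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(runs):
        state = 0
        for _ in range(years):
            r, cum = rng.random(), 0.0
            for j, p in enumerate(P[state]):
                cum += p
                if r < cum:
                    state = j
                    break
        failures += state == 4
    return failures / runs

dist20 = propagate([1.0, 0.0, 0.0, 0.0, 0.0], P, 20)  # grade distribution after 20 years
pf = mc_failure_prob(P, 20)                            # MC estimate of failure probability
```

Maintenance scenarios enter such a model by modifying the matrix (e.g. allowing upward moves to better grades after repair), which is how the no-maintenance and medium-maintenance cases can be compared.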
Procedia PDF Downloads 234
356 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics
Authors: Weikang Gong, Chunhua Li
Abstract:
Dynamics plays an essential role in the function exertion of proteins. The elastic network model (ENM), a harmonic potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and in recent years many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices by the least squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available in http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, have an excellent performance in dynamical cross-correlation calculation.
Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs that are nearly the largest distances apart, which is possibly due to the anisotropy considered in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all cases. This suggests that the consideration of long-range interactions is critical for ANM models to reproduce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work helps strengthen the understanding of the elastic network model and provides a valuable guide for researchers using the model to explore protein dynamics.
Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure
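For readers unfamiliar with the GNM machinery discussed above, the basic pipeline (a Kirchhoff matrix built from a distance cutoff, with relative B-factors read from the diagonal of its pseudoinverse) can be sketched as follows; the 7-angstrom cutoff and the toy chain coordinates are illustrative assumptions, not data from the study:

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Gaussian network model sketch: relative B-factors from CA coordinates.

    Builds the Kirchhoff (connectivity) matrix from a distance cutoff, then
    returns the diagonal of its pseudoinverse, to which the predicted
    B-factors are proportional.
    """
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = -(dist <= cutoff).astype(float)      # -1 for residues in contact
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))  # row sums become zero
    ginv = np.linalg.pinv(gamma)                 # pseudoinverse drops the zero mode
    return np.diag(ginv)                         # B_i proportional to [Gamma^-1]_ii

# Toy chain with 3.8-angstrom CA spacing; chain ends come out most mobile.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(20)])
b = gnm_bfactors(coords)
```

The mENM variants discussed in the abstract differ mainly in how multiple such matrices at different cutoffs are weighted and fitted, not in this basic construction.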
Procedia PDF Downloads 121
355 Media Response to Kashmir Conflict: How Press Differed in Highlighting Protest Shutdowns between 1990-2010
Authors: Danish Gadda
Abstract:
Kashmir has been a bleeding spot in South Asian politics since 1947, when the subcontinent was partitioned into Hindu-majority India and Muslim-majority Pakistan by the departing British colonisers. Kashmir did not accede to either of the two newborn sovereign nations until a tribal invasion from Pakistan forced an unfortunate turn of events. India, citing the conditional accession signed by Kashmir's last monarch, sent its army to defend the Kashmir Valley, with a promise, made subsequently, that the region's fate would be decided by the natives through an internationally monitored plebiscite. The country, however, broke its promise, choosing not to withdraw its military to allow the plebiscite and instead strengthening its claim over Kashmir, which it later began describing as its integral part. War, fought in the shape of three and a half bloody battles, ensued between India and Pakistan, even as United Nations intervention secured a ceasefire as early as the 1950s, though not before Kashmir had been divided into India-controlled and Pakistan-controlled halves. The prolonged dispute over Kashmir took a violent turn in 1989-90 with the start of an anti-India armed rebellion. Kashmiris have been fighting for their right to self-determination, and bringing their own life to a grinding halt has been one of their preferred forms of protest against Indian rule. This form of resistance is locally called 'hartal' and is recognised as a shutdown; such shutdowns have often been prolonged and violent. Since 1989-90, the shutdowns have become only more frequent and forceful, and there are marked days on which Kashmir shuts down in protest every year, like a ritual.
This paper is based on a study of how the Indian and Kashmiri press covered the shutdowns observed in the troubled valley on four such days: January 26 (Indian Republic Day), February 11 (the day on which India executed a prominent Kashmiri resistance leader), August 15 (India's Independence Day), and October 27 (the day on which the Indian military landed in Kashmir). The coverage given by the Indian and Kashmiri press to the shutdowns observed on these days has been studied using a multi-tier content analysis approach: 1) the difference in the number of shutdowns covered by the two sections of the press is examined, 2) the placement of the stories in the two sections is analysed, 3) the discourse highlighted by the two sections is compared, and 4) the editorials written by the two sections about the shutdowns are analysed. The findings show that the Indian and the local press have focused on two predictable extremes of the situation: the Indian press has favoured the state, while the Kashmiri, or local, press has focused on the narrative opposing the state's. The difference is noticeable in both the quantitative and the qualitative aspects of their coverage.
Keywords: Indo-Pak tension, Kashmir conflict, protest shutdowns, South-Asian politics
Procedia PDF Downloads 232
354 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in the industry, especially in the processing of animal-source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce, or at least slow down, the growth of pathogens, especially deteriorating, infectious or toxigenic bacteria. These methods usually rely on low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that achieves bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal-source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are bacterial growth; the release, production and artificial incorporation of bacteriocins; and changes in the pH of the medium, with all three dimensions constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal-source food processing, the agents under study being the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at low temperatures, even lower temperatures are needed to eradicate it.
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, bacteriocins are an effective and efficient alternative in the short and medium term. Moreover, a low pH is an indicator of bacterial growth, since many deteriorating bacteria are lactic acid bacteria. Lastly, processing times are a secondary concern once the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of an industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical model is a logistic formulation of broad applicability, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
Keywords: bacteriocins, cross-contamination, mathematical model, temperature
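A minimal numerical sketch of a three-state system of this kind (bacteria, bacteriocin, pH, all driven by temperature) is given below; every rate constant and functional form is an illustrative assumption, not the model proposed in the abstract:

```python
from scipy.integrate import solve_ivp

def model(t, y, T):
    """Hypothetical three-state sketch: bacteria N, bacteriocin C, pH P."""
    N, C, P = y
    mu = 0.05 * max(T - 4.0, 0.0)                # growth rate rises with temperature (Celsius)
    dN = mu * N * (1.0 - N / 1e6) - 0.8 * C * N  # logistic growth minus bacteriocin kill
    dC = 1e-8 * N - 0.1 * C                      # bacteriocin production and decay
    dP = -1e-7 * mu * N                          # lactic-acid production lowers pH
    return [dN, dC, dP]

y0 = [100.0, 0.0, 6.5]                           # small initial load, no bacteriocin, pH 6.5
sol_cold = solve_ivp(model, (0.0, 48.0), y0, args=(4.0,))   # refrigerated processing
sol_warm = solve_ivp(model, (0.0, 48.0), y0, args=(25.0,))  # room temperature
```

Even this toy version reproduces the qualitative conclusions above: at low temperature growth stalls but the population is not eradicated, while at higher temperature the bacteria multiply and the pH of the medium falls.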
Procedia PDF Downloads 144
353 Developing a Quality Mentor Program: Creating Positive Change for Students in Enabling Programs
Authors: Bianca Price, Jennifer Stokes
Abstract:
Academic and social support systems are critical for students in enabling education; these support systems have the potential to enhance the student experience while also serving a vital role in student retention. In the context of international moves toward widening university participation, Australia has developed enabling programs designed to support underrepresented students to access higher education. The purpose of this study is to examine the effectiveness of a mentor program based within an enabling course. This study evaluates how the mentor program supports new students to develop social networks, improve retention, and increase satisfaction with the student experience. Guided by Social Learning Theory (SLT), this study highlights the benefits that can be achieved when students engage in peer-to-peer mentoring for both social and learning support. Whilst traditional peer mentoring programs are heavily based on face-to-face contact, the present study explores the difference between mentors who provide face-to-face mentoring and mentoring that takes place in a virtual space, specifically a virtual community in the shape of a Facebook group. This paper explores the differences between these two methods of mentoring within an enabling program. The first method involves traditional face-to-face mentoring provided by alumni students who willingly return to the learning community to offer social support and guidance to new students. The second method requires alumni mentors to voluntarily join a Facebook group specifically designed for enabling students. Using this virtual space, alumni students provide advice, support and social commentary on how to be successful within an enabling program. Whilst vastly different methods, both mentoring approaches provide students with the support tools needed to enhance their student experience and improve their transition into university.
To evaluate the impact of each mode, this study uses mixed methods, including a focus group with mentors, in-depth interviews, and netnography of the Facebook group 'Wall'. Netnography is an innovative qualitative research method used to interpret information available online in order to better understand and identify the needs and influences that affect the users of an online space. Through examining the data, this research reflects upon best practice for engaging students in enabling programs. Findings support the applicability of having both face-to-face and online mentoring available to assist enabling students in making a positive transition into university undergraduate studies.
Keywords: enabling education, mentoring, netnography, social learning theory
Procedia PDF Downloads 121
352 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil
Authors: Linlong Mu, Maosong Huang
Abstract:
For excavations in urban areas, the soil deformation induced by the excavation usually causes damage to the surrounding structures. Displacement control therefore becomes a critical indicator in foundation design for protecting the surrounding structures. Evaluating the damage potential of the surrounding structures induced by an excavation usually depends on the finite element method (FEM) because of the complexity of the excavation and the variety of the surrounding structures. Moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem, and it is now well recognized that the small-strain behaviour of the soil significantly influences the response of the excavation. Three-dimensional FEM accounting for small-strain soil behaviour is a very complex method that is hard for engineers to use. Thus, it is important to obtain a simplified method for engineers to predict the influence of excavations on the surrounding structures. Based on large-scale finite element calculations with a small-strain soil model, coupled with inverse analysis, an empirical method is proposed to calculate the three-dimensional soil movement induced by a braced excavation. The empirical method captures the small-strain behaviour of the soil and is suitable for use in layered soil. The free-field soil movement is then applied to the pile to calculate the responses of the pile in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved through the finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions and soil-soil interactions are all counted to improve the calculation accuracy of the method.
For passive piles, shadow effects are also calculated in the method. Finally, the restraints of the raft on the piles and the soil are summarized as follows: (1) the summation of the internal forces between the elements of the raft and the elements of the foundation, including the piles and the soil-surface elements, equals zero; (2) the deformations of the pile heads and of the soil-surface elements are the same as the deformations of the corresponding elements of the raft. Validation is carried out by comparing the results from the proposed method with results from model tests, FEM and the existing literature; the comparisons show that the results from the proposed method agree with those from the other methods very well. The method proposed herein is suitable for predicting the vertical and horizontal responses of a pile-raft foundation induced by a braced excavation in layered soil when the deformation is small. However, more data are needed to verify the method before it can be used in practice.
Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement
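The finite-difference idea behind the pile calculation can be illustrated with a much simpler Winkler idealization, in which a single passive pile is pushed by a free-field soil-movement profile. The parameter values, the exponential movement profile, and the pinned-end boundary conditions below are all assumptions for illustration; the actual method uses layered half-space interaction solutions rather than a spring bed:

```python
import numpy as np

# Winkler-type sketch of a passive pile driven by free-field soil movement
# y_s(z): EI * y'''' + k * (y - y_s) = 0, discretized by central differences.
L_pile, n = 20.0, 201
z = np.linspace(0.0, L_pile, n)
h = z[1] - z[0]
EI = 2.0e5                         # pile bending stiffness (kN*m^2), assumed
k = 5.0e3                          # subgrade reaction modulus (kN/m^2), assumed
y_s = 0.02 * np.exp(-z / 5.0)      # excavation-induced soil movement (m), assumed

A = np.zeros((n, n))
for i in range(2, n - 2):          # fourth-order central difference, interior nodes
    A[i, i - 2:i + 3] = EI / h**4 * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    A[i, i] += k
A[0, 0] = A[n - 1, n - 1] = 1.0    # pinned ends: y = 0
A[1, 0:3] = [1.0, -2.0, 1.0]       # zero curvature near the head
A[n - 2, n - 3:n] = [1.0, -2.0, 1.0]

b = k * y_s
b[[0, 1, n - 2, n - 1]] = 0.0      # boundary rows carry homogeneous conditions
y = np.linalg.solve(A, b)          # pile lateral deflection profile (m)
```

The deflection tracks the soil movement away from the restrained ends; in the full method the single spring constant is replaced by the pile-pile, pile-soil-pile and soil-soil interaction terms listed above.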
Procedia PDF Downloads 231
351 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics
Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí
Abstract:
A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder-yarn and in-plane yarn arrangements used to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore spaces between yarns, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy formulation. Typically, the preform consists of an arrangement of yarns with spacing on the order of millimetres, wherein each yarn consists of thousands of filaments with spacing on the order of micrometres. During infusion, the fluid flow exchanges mass between the intra-yarn and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores within the yarns. Several studies have employed the Brinkman equation to take this flow through dual-scale porous reinforcement into account when estimating permeability. Furthermore, to reduce the computational effort of dual-scale flow simulation, a scale-separation criterion based on the ratio of yarn permeability to yarn spacing has been proposed to distinguish the dual-scale regime from the regime of negligible micro-scale flow in mesoscale permeability prediction. In the present work, the influence of intra-yarn permeability on mesoscale permeability is investigated through a systematic study of weft and warp yarn spacing in the plain weave, as well as of the binder-yarn position and the number of in-plane yarn layers in the 3D woven fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale-separation criterion has been established for various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarn permeabilities taken from Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarns are considered for a 3D fabric with binder yarns. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding
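Gebart's expressions, referenced above for the intra-yarn permeabilities, can be evaluated directly. The sketch below uses the constants usually quoted for quadratic fiber packing (c = 57, maximum fiber volume fraction pi/4); the fiber radius and volume fractions are illustrative assumptions:

```python
import math

def gebart_axial(vf, r_fiber, c=57.0):
    """Gebart permeability along the fibers (quadratic packing: c = 57)."""
    return 8.0 * r_fiber**2 * (1.0 - vf)**3 / (c * vf**2)

def gebart_transverse(vf, r_fiber):
    """Gebart permeability perpendicular to the fibers, quadratic packing:
    K = C1 * (sqrt(vf_max / vf) - 1)^(5/2) * r^2, with vf_max = pi / 4."""
    c1 = 16.0 / (9.0 * math.pi * math.sqrt(2.0))
    vf_max = math.pi / 4.0
    return c1 * (math.sqrt(vf_max / vf) - 1.0) ** 2.5 * r_fiber**2

r = 5e-6                                 # fiber radius (m), assumed
k_par = gebart_axial(0.55, r)            # intra-yarn axial permeability
k_perp = gebart_transverse(0.55, r)      # intra-yarn transverse permeability
```

At typical intra-yarn fiber fractions the axial value exceeds the transverse one, which is why the isotropic versus anisotropic yarn assumption matters for the scale-separation question studied here.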
Procedia PDF Downloads 97
350 Nascent Federalism in Nepal: An Observational Review in its Evolution
Authors: C. Shekhar Parajulee
Abstract:
Nepal practiced a centralized, unitary governing system for a long time and moved to a federal system after the promulgation of the new constitution on 20 September 2015. There has been a big paradigm shift in governance since then. Now, there are three levels of government: one federal government at the centre, seven provincial governments, and 753 local governments. Federalism refers to a political governing system with multiple tiers of government working together in coordination; it is preferred for self-rule and shared rule. Though it has opened the door for the rights of the people, political stability, state restructuring, and sustainable peace and development, there are many prospects and challenges for its proper implementation. This research analyzes the discourses of federalism implementation in Nepal with special reference to one of the seven provinces, Gandaki. Federalism is a new phenomenon in Nepali politics, and informed debates on it are required for its sound evolution; this research will add value in this regard. Moreover, tracking its evolution and exploring the attitudes and behaviors of key actors and stakeholders in a new experiment with a new governing system is also important. The administrative and political system of Gandaki province, in terms of service delivery and development, is critically examined. Besides assessing the performance of the provincial government and assembly, the study analyzes the inter-governmental relations of Gandaki with the other two tiers of government. For this research, people from provincial and local governments (elected representatives and government employees), provincial assembly members, academicians, civil society leaders and journalists are being interviewed, and the interview findings are analyzed alongside published documents. Just adopting a federal structure is not in itself a solution. As with the other provincial governments, Gandaki had to start from scratch.
It gradually took the shape of a government and has been functioning sluggishly. The provincial government faces many challenges, which have badly hindered its plans and actions. Additionally, fundamental laws, infrastructure and human resources are insufficient at the sub-national level, and lack of clarity in jurisdiction is another major challenge. The Nepali Constitution assumes cooperation, coexistence and coordination as the fundamental principles of federalism, which, unfortunately, appear to be lacking among the three tiers of government despite their efforts. Though the devolution of power to sub-national governments is essential for the successful implementation of federalism, it has apparently been delayed by the centralized mentality of the bureaucracy as well as of political leaders. This research highlights the reasons for the delay in the implementation of federalism. There may be multiple underlying reasons for the slow pace of implementation, and identifying them is very difficult. Moreover, the federal spirit is found to be absent in the main players of today's political system, which is a great irony. So, there are doubts about whether the federal system in Nepal is merely a keepsake or something substantive.
Keywords: federalism, inter-governmental relations, Nepal, provincial government
Procedia PDF Downloads 189
349 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is referred to as between-class imbalance. Traditional classifiers fail to classify minority-class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters, each containing a different number of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced-dataset problem; they can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority-class examples or by decreasing the majority-class examples. Decreasing the majority-class examples leads to loss of information, and when the minority class is absolutely rare, removing majority-class examples is generally not recommended. Existing methods for handling class imbalance do not address between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for binary classification problems. Removing between-class and within-class imbalance simultaneously eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset; the number of examples oversampled in each sub-cluster is determined by the complexity of the sub-cluster.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Löwner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on nine publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority-class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for problem domains that typically involve imbalanced data sets, such as credit scoring, customer churn prediction, and financial distress.
Keywords: classification, imbalanced dataset, Löwner-John ellipsoid, model-based clustering, oversampling
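A rough sketch of cluster-aware oversampling in the spirit described above is shown below, with a Gaussian mixture chosen by BIC standing in for model-based clustering and each detected sub-cluster topped up with synthetic draws. The helper name, the balancing rule and all parameters are our assumptions, not the authors' algorithm:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_aware_oversample(X_min, n_target, max_k=5, seed=0):
    """Illustrative sketch: oversample a minority class sub-cluster by sub-cluster.

    Fits Gaussian mixtures with 1..max_k components, keeps the one with the
    lowest BIC, then draws synthetic points from each fitted component until
    the sub-clusters are roughly balanced at n_target total examples.
    """
    rng = np.random.default_rng(seed)
    candidates = [GaussianMixture(k, random_state=seed).fit(X_min)
                  for k in range(1, min(max_k, len(X_min)) + 1)]
    gmm = min(candidates, key=lambda m: m.bic(X_min))   # lowest BIC wins
    labels = gmm.predict(X_min)
    per_cluster = n_target // gmm.n_components          # balanced sub-cluster target
    synthetic = []
    for c in range(gmm.n_components):
        deficit = max(per_cluster - int(np.sum(labels == c)), 0)
        if deficit:
            synthetic.append(rng.multivariate_normal(
                gmm.means_[c], gmm.covariances_[c], size=deficit))
    return np.vstack([X_min] + synthetic) if synthetic else X_min

# Toy minority class with two sub-clusters of unequal size (30 and 10 points).
rng = np.random.default_rng(1)
X_min = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(10.0, 1.0, (10, 2))])
out = cluster_aware_oversample(X_min, n_target=100)
```

The smaller sub-cluster receives proportionally more synthetic points, which is the bias-correction effect the abstract attributes to complexity-weighted oversampling; the paper's method additionally bounds the synthetic points with the Löwner-John ellipsoid, which this sketch omits.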
Procedia PDF Downloads 418
348 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India
Authors: Bharti Singh, Shri K. Singh
Abstract:
Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality; it is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. A composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs; this approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insight into the shape and spread of the data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolve with the child's age, enabling the visualisation of trends and patterns over time and allowing a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including state-level, PSU-level, and household-level factors affecting undernutrition. This approach accounts for hierarchical data structures and allows the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, these statistical methodologies enhance the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation.
Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children under age five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
Keywords: child nutrition, India, NFHS, women’s empowerment
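The kernel-density step of the methodology can be sketched as follows; the synthetic height-for-age z-scores below stand in for the NFHS survey rounds and are purely illustrative, not survey data:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic height-for-age z-scores (HAZ); below -2 is stunted, below -3
# severely stunted (WHO convention). Both samples are assumed, not NFHS data.
rng = np.random.default_rng(42)
haz_earlier = rng.normal(-1.4, 1.6, 5000)   # stand-in for an earlier round
haz_later = rng.normal(-1.2, 1.5, 5000)     # stand-in for a later round

grid = np.linspace(-6.0, 4.0, 400)
density_earlier = gaussian_kde(haz_earlier)(grid)   # smooth density estimates
density_later = gaussian_kde(haz_later)(grid)

stunted_earlier = np.mean(haz_earlier < -2)          # stunting prevalence
stunted_later = np.mean(haz_later < -2)
```

Plotting the two density curves on the same grid shows the shift in the distribution's shape and spread that the abstract describes, while the threshold shares summarize prevalence.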
Procedia PDF Downloads 34
347 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite-type invariants, whereas in molecular biology an important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords lie in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent the chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. To study these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modelling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. This LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
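The intersection graph and the three relations among chords (crossing, nesting, concatenation) are easy to make concrete on endpoint lists; the sketch below is our own illustration, not the authors' grammar. A chord is a pair of backbone positions, and two chords cross exactly when their endpoints interleave:

```python
from itertools import combinations

def intersection_graph(chords):
    """Intersection graph of a linear chord diagram.

    chords: list of (start, end) endpoint positions on the backbone, start < end.
    Chords i and j are adjacent iff their endpoints interleave, i.e. the
    chords cross when drawn in the upper half-plane.
    """
    edges = set()
    for (i, (a1, b1)), (j, (a2, b2)) in combinations(enumerate(chords), 2):
        if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
            edges.add((i, j))
    return edges

def relation(c1, c2):
    """Classify an unordered pair of chords as crossing, nesting or concatenation."""
    (a1, b1), (a2, b2) = sorted([c1, c2])   # order by left endpoint
    if a2 < b1 < b2:
        return "crossing"                    # endpoints interleave
    if b2 < b1:
        return "nesting"                     # second chord sits inside the first
    return "concatenation"                   # disjoint intervals on the backbone

# Example diagram: chords 0 and 1 cross, chord 2 is concatenated after them.
chords = [(0, 2), (1, 3), (4, 5)]
edges = intersection_graph(chords)
```

The equivalence relation described in the abstract identifies diagrams whose intersection graphs coincide, so `intersection_graph` computes exactly the object being compared by the rewriting rules.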
Procedia PDF Downloads 201
346 The Anti-Globalization Movement, Brexit, Outsourcing and the Current State of Globalization
Authors: Alexis Naranjo
Abstract:
On the current global stage, a new sense of mixed feelings against globalization has started to take shape thanks to events such as Brexit and the 2016 US election. Perceptions of globalization have crystallized into a resistance movement called the 'anti-globalization movement'. This paper examines the current global stage versus leadership decisions at a time when market integration is no longer seen as an opportunity to boost economic growth. From the United Kingdom to the United States, the world's biggest economy, a new strategy to help local economies has started to emerge, and a new nationalist movement has started to focus on local economies, which now represents a direct threat to globalization, trade agreements, wages and free markets. Business leaders of multinationals now face a dilemma: how to address the feeling that globalization and outsourcing destroy and take away jobs from local economies. An initial reading of the literature and data reveals that companies in Western countries like the US see many risks associated with outsourcing; however, the cost savings of outsourcing weigh more heavily than the firm's local reputation. India is a good example: as a supplier of IT developers, analysts and call centres, it has emerged as a powerhouse of the outsourcing industry and holds the number-one spot in the world for outsourced IT services. Thanks to the globalization of economies and markets around the globe, new ideas to increase productivity at lower cost have existed for years and continue to offer new options to businesses in different industries.
The economic growth of the information technology (IT) industry in India is an example of the power of globalization, which in India's case has been tremendous and significant, especially in the economic arena. This research paper concentrates on understanding the behaviour of business leaders: first, how the leaders of multinationals will face the new challenges and what actions help them lead in turbulent times; second, if outsourcing or withdrawal from a market is an option, what the consequences are and how leaders communicate and negotiate; and finally, whether the perception of leaders is focused on financial results or on a different goal. To answer these questions, this study focuses on the most recent data available to outline and present findings on why outsourcing is an option and on how and why those decisions are made. This research also explores the perception of the phenomenon of outsourcing in many ways and explores how globalization has contributed to its own questioning.
Keywords: anti-globalization, globalization, leadership, outsourcing
Procedia PDF Downloads 194
345 Executive Leadership in Kinesiology, Exercise and Sport Science: The Five 'C' Concept
Authors: Jim Weese
Abstract:
The Kinesiology, Exercise and Sport Science environment remains an excellent venue for leadership research. Prescribed leadership (coaching), emergent leadership (players and organizations), and executive leadership are all popular themes in the research literature. Leadership remains a popular area of inquiry in the sport management domain, as well as an interesting area for practitioners who wish to heighten their leadership practices and effectiveness. The need for effective leadership in these areas, given competing demands for attention and resources, may be at an all-time high. The presenter has extensive research and practical experience in the area and has developed his concept based on the latest leadership literature. He refers to this as the Five 'C's of Leadership. These components, noted below, have been empirically validated and have served as the foundation for extensive consulting with academic, sport, and business leaders. Credibility (C1) is considered the foundation of leadership. There are two components to this area, namely: (a) leaders being respected for having the relevant knowledge, insights, and experience to be seen as credible sources of information, and (b) followers perceiving the leader as a person of character, someone who is honest, reliable, consistent, and trustworthy. Compelling Vision (C2) refers to the leader's ability to focus the attention of followers on a desired end goal. Effective leaders understand trends and developments in their industry. They also listen attentively to the needs and desires of their stakeholders and use their own instincts and experience to shape these ideas into an inspiring vision that is effectively and continuously communicated. Charismatic Communicator (C3) refers to the leader's ability to formally and informally communicate with members. Leaders must deploy mechanisms and communication techniques to keep their members informed and engaged.
Effective leaders sprinkle in 'proof points' that reinforce the vision's relevance and/or the unit's progress towards its attainment. Contagious Enthusiasm (C4) draws on the emotional intelligence literature as it relates to exciting and inspiring followers. Effective leaders demonstrate a level of care, commitment, and passion for their people, and feelings of engagement permeate the group. These leaders genuinely care about the task at hand and about the people working to make it a reality. Culture Builder (C5) is the capstone component of the model and is critical to long-term success and survival. Organizational culture refers to the dominant beliefs, values, and attitudes of the members of a group or organization. Some have suggested that developing and/or embedding a desired culture in an organization is a leader's most important responsibility. The author outlines his Five 'C's of Leadership concept and provides direct application to executive leadership in Kinesiology, Exercise and Sport Science.
Keywords: effectiveness, leadership, management, sport
Procedia PDF Downloads 300
344 Development of High-Efficiency Down-Conversion Fluoride Phosphors to Increase the Efficiency of Solar Panels
Authors: S. V. Kuznetsov, M. N. Mayakova, V. Yu. Proydakova, V. V. Pavlov, A. S. Nizamutdinov, O. A. Morozov, V. V. Voronov, P. P. Fedorov
Abstract:
An increase in the share of electricity obtained by converting solar energy reduces the industrial impact on the environment from the use of hydrocarbon energy sources. One way to increase this share is to improve the efficiency of solar energy conversion in silicon-based solar panels. Such an efficiency increase can be achieved by transferring energy from the spectral regions to which silicon solar panels are insensitive into the region of their photosensitivity. To achieve this goal, a transition to new luminescent materials with a high quantum yield of luminescence is necessary. The quantum yield can be improved by quantum cutting, which allows a down-conversion quantum yield of more than 150% by splitting high-energy photons of the UV spectral range into lower-energy photons of the visible and near-infrared spectral ranges. The goal of the present work is to test the approach of exciting the 4f-4f fluorescence of Yb3+ through sensitization by various rare-earth ions absorbing in the UV and visible spectral ranges. Fluorides are among the most promising materials for quantum-cutting luminophores. In our investigation, we have developed the synthesis of nano- and submicron powders of calcium and strontium fluoride doped with rare-earth elements (Yb:Ce, Yb:Pr, Yb:Eu) of controlled size and shape by co-precipitation from aqueous solution, using Ca(NO3)2*4H2O, Sr(NO3)2, HF, and NH4F as precursors. After the initial nitrate solutions were prepared, they were mixed dropwise with the fluorine-containing solution. According to XRD data, the synthesis resulted in single-phase samples with the fluorite structure. By means of SEM measurements, we confirmed the spherical morphology and determined the particle sizes (50-100 nm after synthesis and 150-300 nm after calcination). The calcination temperature was 600°C. We have investigated the spectral-kinetic characteristics of the above-mentioned compounds.
Here, the diffuse reflectance and laser-induced fluorescence spectra of Yb3+ ions excited around the 4f-4f and 4f-5d transitions of the Pr3+, Eu3+, and Ce3+ ions in the synthesized powders are reported. The investigation of the down-conversion luminescence capability of the synthesized compounds included measurements of fluorescence decays and of the quantum yield of the 2F5/2-2F7/2 fluorescence of Yb3+ ions as a function of the Yb3+ and sensitizer contents. An optimal chemical composition of CaF2-YbF3-LnF3 (Ln = Ce, Eu, Pr) and SrF2-YbF3-LnF3 (Ln = Ce, Eu, Pr) micro- and nanopowders according to the criterion of maximal IR fluorescence yield is proposed. We believe the investigated materials are promising for solar panel applications. This work was supported by Russian Science Foundation grant #17-73-20352.
Keywords: solar cell, fluorides, down-conversion luminescence, maximum quantum yield
Procedia PDF Downloads 272
343 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes
Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha
Abstract:
Coffee quality improvement is becoming the focus of coffee research as world coffee consumption patterns shift toward high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in the potential specialty coffee growing areas of Ethiopia. This experiment was therefore conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from east Wollega coffee growing areas and assessing correlations between the different coffee quality attributes. It was conducted in an RCRD with three replications. Data on green bean physical characters (shape and make, bean color, and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component, and correlation analyses were performed using SAS software. The results revealed highly significant differences (P<0.01) among the accessions for all quality attributes except odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09, and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as sources of genes for improving green bean physical characters and cup quality in Arabica coffee. Furthermore, cluster analysis grouped the coffee accessions into five clusters with significant inter-cluster distances, implying that there is moderate diversity among the accessions and that crossing accessions from divergent clusters would yield heterosis and useful recombinants in segregating generations.
The principal component analysis revealed that the first three principal components, with eigenvalues greater than unity, accounted for 83.1% of the total variability among the nine quality attributes considered, indicating that all quality attributes contribute to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of these traits. Path coefficient analysis revealed that acidity, flavor, and body had high positive direct effects on overall cup quality, implying that these traits can be used as indirect selection criteria to improve overall coffee quality. It was therefore concluded that there is considerable variation among the accessions, which needs to be properly conserved for future improvement of coffee quality. However, the variability observed for quality attributes must be further verified using biochemical and molecular analysis.
Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component
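The eigenvalue-greater-than-unity selection used in the principal component analysis above (the Kaiser criterion) can be sketched as follows. The `kaiser_pca` helper is an illustrative stand-in, not the SAS procedure used in the study, and the synthetic trait matrix in the usage below is hypothetical, standing in for the recorded quality attributes.

```python
import numpy as np

def kaiser_pca(X):
    """PCA on standardized trait data; retain components whose eigenvalue
    exceeds unity. Returns (all eigenvalues in descending order,
    retained eigenvectors, share of variability explained by them)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each trait
    corr = np.cov(Z, rowvar=False)             # ~correlation matrix of traits
    eigvals, eigvecs = np.linalg.eigh(corr)    # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                       # Kaiser criterion
    explained = eigvals[keep].sum() / eigvals.sum()
    return eigvals, eigvecs[:, keep], explained
```

With the study's nine attributes, the first three components retained this way accounted for 83.1% of the total variability.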
Procedia PDF Downloads 166
342 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be the source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance embedded computers that can collect, process, and store data on their own and can run complex algorithms such as localization, detection, and recognition in real-time applications. The NVIDIA Jetson series, specifically the Jetson Nano, was used in the implementation. The cEEGrid, integrated into the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning classifiers were used to perform graphical spectrogram categorization of the EEG signals and to predict emotional states from the properties of the input data. The EEG signals were analyzed with the K-Nearest Neighbors (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal is received in real time, it is translated from the time to the frequency domain with the Fast Fourier Transform (FFT) so that the frequency bands in each signal can be observed. To capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed as features.
The next stage is to use the selected features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters of the KNN classifier. Because classification, recognition, and emotion prediction are all conducted online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. Running on edge AI, EEG-based emotion recognition can be employed in applications across both research and industry.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
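The FFT-then-KNN pipeline described above can be sketched in a few lines of plain Python. The sampling rate, band edges, per-band features (power, standard deviation, mean), and the value of k below are illustrative assumptions, not the exact configuration deployed on the Jetson Nano.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_features(signal, fs=FS):
    """Translate a time-domain EEG window to the frequency domain with the
    FFT, then compute power, standard deviation, and mean per band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        band = power[(freqs >= lo) & (freqs < hi)]
        feats += [band.sum(), band.std(), band.mean()]
    return np.array(feats)

def knn_predict(train_X, train_y, x, k=3):
    """Minimal K-Nearest Neighbors: majority vote among the k training
    feature vectors closest to x in Euclidean distance."""
    nearest = train_y[np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

In the deployed system, `band_features` would run on each incoming EEG window and `knn_predict` would vote against the stored arousal/valence training set.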
Procedia PDF Downloads 105
341 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell
Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman
Abstract:
Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts account for a significant portion of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air cathodes are needed to avoid the energy demands of water aeration. The sluggish oxygen reduction reaction (ORR) rate at the air cathode necessitates an efficient electrocatalyst such as carbon-supported platinum (Pt/C), which is very costly. Manganese oxide (MnO2) is a representative metal oxide that has been studied as a promising alternative electrocatalyst for the ORR and has been tested in air-cathode MFCs. However, MnO2 alone has poor electrical conductivity and low stability. In the present work, the MnO2 catalyst was modified by doping it with Pt nanoparticles, with the goal of improving MFC performance at minimal Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively, and the Pt/MnO2 catalyst was synthesized by wet impregnation. The catalysts were then used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as the biocatalyst and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively by field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), which probed its surface morphology, crystallinity, oxidation states, and electrochemical activity, respectively. XPS revealed the Mn(IV) oxidation state and Pt(0) metal nanoparticles, confirming the presence of MnO2 and Pt. The morphology observed by FESEM shows that the Pt doping did not change the needle-like shape of the MnO2, which provides a large contacting surface area.
The electrochemically active area of the Pt/MnO2 catalysts increased from 276 to 617 m2/g as the Pt loading increased from 0.2 to 0.8 wt%. CV results in O2-saturated neutral Na2SO4 solution showed that the MnO2 and Pt/MnO2 catalysts catalyze the ORR with different activities. The MFC with Pt/MnO2 (0.4 wt% Pt) as the air cathode catalyst generated a maximum power density of 165 mW/m3, higher than that of the MFC with the MnO2 catalyst (95 mW/m3). The open circuit voltage (OCV) of the MFC operated with the MnO2 cathode gradually decreased during 14 days of operation, whereas that of the MFC with the Pt/MnO2 cathode remained almost constant throughout, suggesting the higher stability of the Pt/MnO2 catalyst. Pt/MnO2 with 0.4 wt% Pt was therefore successfully demonstrated as an efficient, low-cost ORR electrocatalyst for air-cathode MFCs, with higher electrochemical activity and stability and hence enhanced performance.
Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve
Procedia PDF Downloads 557
340 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending
Authors: Enrico Dorigatti
Abstract:
The growing pile of hazardous e-waste, produced especially by developed and wealthy countries, gets relentlessly bigger. It is composed of EEDs (Electric and Electronic Devices) that are often thrown away although still functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken on, over the last years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, fuelled mainly by market policies that aim to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put effort into developing ambitious frameworks and policies that aim to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the production and accumulation of e-waste. But while a practical solution seems hard to find, much can be done regarding people's education, namely informing them and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential here. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities of current market policies and the prevailing economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments in ecologically aware installations and performances).
Currently, the relevant literature and debate lack interest in, and information about, the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, although still at an early stage, therefore aims to fill this gap by investigating, on the one side, the ecological potential of circuit bending and, on the other, its capacity to sensitize people, through artistic practice, to e-waste-related issues. The methodology articulates in three main steps. Firstly, field research will be undertaken to understand where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments; data, such as audience feedback, will be collected at this stage. The last step will consist of running workshops to spread an ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.
Keywords: circuit bending, ecology, sound art, sustainability
Procedia PDF Downloads 171
339 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS); (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions; and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno or Mamdani, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped reduce the error in some cases but, at times, had the opposite effect. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it can generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS exhibited an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making. While the methodology does not guarantee a more accurate result than the full life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
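For readers unfamiliar with the inference types compared above, a zero-order Sugeno-type system reduces to a weighted average of constant rule consequents. The sketch below is a hypothetical single-input example (irradiation to LCOE, with made-up membership functions and consequent values), not the calibrated model from the study.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical rules mapping solar irradiation (kWh/m2/yr) to LCOE (ct/kWh);
# each consequent is a constant, as in a zero-order Sugeno model.
RULES = [
    (lambda x: tri(x, 700, 900, 1100), 12.0),   # low irradiation -> high LCOE
    (lambda x: tri(x, 900, 1100, 1300), 9.0),   # medium -> medium
    (lambda x: tri(x, 1100, 1300, 1500), 6.0),  # high -> low
]

def sugeno(x):
    """Sugeno inference: firing-strength-weighted average of consequents."""
    weights = [mf(x) for mf, _ in RULES]
    return sum(w * c for w, (_, c) in zip(weights, RULES)) / sum(weights)
```

A Mamdani-type system would instead aggregate fuzzy output sets and defuzzify them (e.g., by centroid), while ANFIS tunes the membership parameters against calibration data.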
Procedia PDF Downloads 164
338 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow uses the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking all external forces into account. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required for 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work, with the CUDA parallel programming platform and the cuRAND library forming an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparison, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 over the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
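The key idea of the improved LBM-CA step, hopping particles to neighboring lattice nodes with probabilities tied to the local fluid velocity, can be sketched in two dimensions. The paper's particle model uses the D3Q27 lattice and a velocity- and density-aware rule; the D2Q9 directions and the simple projection weighting below are an illustrative simplification, not the paper's exact scheme.

```python
import random

# D2Q9-style lattice directions; (0, 0) is the rest direction.
DIRS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
        (1, 1), (-1, -1), (1, -1), (-1, 1)]

def move_probabilities(u):
    """Hop probability per lattice direction, weighted by the positive
    projection of the local fluid velocity u = (ux, uy) onto each lattice
    vector; leftover probability mass keeps the particle at its node."""
    ux, uy = u
    w = [max(ux * cx + uy * cy, 0.0) for cx, cy in DIRS]
    w[0] += max(1.0 - sum(w), 0.0)  # rest probability
    total = sum(w)
    return [wi / total for wi in w]

def hop(n_particles, u, rng):
    """Sample a destination direction for each particle at one node."""
    return rng.choices(DIRS, weights=move_probabilities(u), k=n_particles)
```

In the full model, such per-node hops run for every lattice node in parallel on the GPU, with cuRAND supplying the random draws.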
Procedia PDF Downloads 207
337 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. According to the federal climate protection plan, Germany aims for complete decarbonization of the energy sector by 2050. The Renewable Energy Sources Act 2017 stipulates, for the expansion of energy production from renewable sources in Germany, that renewables cover at least 80% of the electricity requirement in 2050, and at least 60% of gross final energy consumption. This means that by 2050 the energy supply system would have to be almost completely converted to renewable energy. An essential basis for developing such a sustainable energy supply from 100% renewables is predicting the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increasing energy efficiency and reducing demand are set very ambitiously; to provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as projected population development and economic growth, which have historically been significant drivers of increasing energy demand. The potential for demand reduction and efficiency increase on the demand side was also investigated, in particular current and future technological developments in the energy-consuming sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate).
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approximately 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand has a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity yield must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
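The storage arithmetic behind the 1,227 TWh/a figure can be reproduced directly from the numbers in the text, under the stated assumptions that half of the electricity passes through storage and that conversion and storage losses are about 50%:

```python
# Final energy demand of the very ambitious scenario (TWh/a, from the text).
electricity = 818.0    # direct electricity demand
ambient_heat = 229.0   # ambient heat for electric heat pumps
non_electric = 315.0   # raw materials for non-electrifiable processes
final_demand = electricity + ambient_heat + non_electric  # 1,362 TWh/a

# 50% of the electricity is assumed to pass through storage, and the round
# trip loses about 50%, so that share must be generated twice over.
stored_share, losses = 0.5, 0.5
direct = electricity * (1 - stored_share)                # 409 TWh/a used directly
via_storage = electricity * stored_share / (1 - losses)  # 818 TWh/a generated
electricity_total = direct + via_storage                 # 1,227 TWh/a
```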
Procedia PDF Downloads 134