Search results for: Song Wang
265 Montelukast Doesn’t Decrease the Risk of Cardiovascular Disease in Asthma Patients in Taiwan
Authors: Sheng Yu Chen, Shi-Heng Wang
Abstract:
Aim: Based on human and animal experiments and genetic studies, the cysteinyl leukotrienes LTC4, LTD4, and LTE4 are inflammatory mediators synthesized from arachidonic acid via 5-lipoxygenase, and these substances trigger asthma. In addition, the synthetic pathway of cysteinyl leukotrienes is relevant to the increase in cardiovascular diseases such as myocardial ischemia and stroke. Given this, we aimed to investigate whether the cysteinyl leukotriene receptor antagonist (LTRA) montelukast, which is used to treat asthma, has potential protective effects against cardiovascular diseases. Method: We conducted a cohort study and enrolled participants who were newly diagnosed with asthma (ICD-9-CM code 493.x) between 2002 and 2011. The data source is the Taiwan National Health Insurance Research Database. Patients with a previous history of myocardial infarction or ischemic stroke were excluded. Among the remaining participants, every montelukast user was matched with two randomly selected non-users by sex and age. Incident cardiovascular diseases, including myocardial infarction and ischemic stroke, were regarded as outcomes. We followed the participants until the outcome occurred or the end of the follow-up period, whichever came first. To explore the protective effect of montelukast on the risk of cardiovascular disease, we used multivariable Cox regression to estimate the hazard ratio with adjustment for potential confounding factors. Result: There were 55,876 newly diagnosed asthma patients who had at least one claim of inpatient admission or at least three claims of outpatient records. We enrolled 5,350 montelukast users and 10,700 non-users in this cohort study. The mean (±SD) follow-up time was 5 (±2.19) years in the montelukast group and 5.47 (±2.641) years in the non-user group.
By using multivariable Cox regression, our analysis indicated that the risk of incident cardiovascular diseases was approximately equal between montelukast users (n=43, 0.8%) and non-users (n=111, 1.04%) (adjusted hazard ratio 0.992; P=0.9643). Conclusion: In this population-based study, we found that the use of montelukast is not associated with a decrease in incident myocardial infarction (MI) or ischemic stroke (IS).
Keywords: asthma, inflammation, montelukast, insurance research database, cardiovascular diseases
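The hazard-ratio estimate above comes from a Cox proportional-hazards fit. As a minimal illustration of how such an estimate is obtained (not the study's actual analysis, which adjusted for multiple confounders on insurance records), the sketch below fits a single-covariate Cox model by maximizing the partial likelihood on simulated data with no true exposure effect:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
exposed = rng.integers(0, 2, n).astype(float)   # 1 = montelukast user (hypothetical)
time_y = rng.exponential(5.0, n)                # follow-up time in years; no true effect
event = (rng.random(n) < 0.7).astype(float)     # 1 = CV event observed, 0 = censored

def neg_log_partial_likelihood(beta, x, t, d):
    order = np.argsort(-t)                      # sort by descending survival time
    x_s, d_s = x[order], d[order]
    lp = beta[0] * x_s                          # linear predictor
    log_risk = np.logaddexp.accumulate(lp)      # log of sum exp(lp) over each risk set
    return -np.sum(d_s * (lp - log_risk))       # Cox partial likelihood (no tie handling)

res = minimize(neg_log_partial_likelihood, x0=np.array([0.0]),
               args=(exposed, time_y, event))
hr = float(np.exp(res.x[0]))
print(f"estimated hazard ratio: {hr:.3f}")      # close to 1 under the null
```

Because the simulated event times are independent of exposure, the fitted hazard ratio lands near 1, mirroring the study's null result.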
Procedia PDF Downloads 83
264 Water Access and Food Security: A Cross-Sectional Study of SSA Countries in 2017
Authors: Davod Ahmadi, Narges Ebadi, Ethan Wang, Hugo Melgar-Quiñonez
Abstract:
Compared to other Least Developed Countries (LDCs), most countries in sub-Saharan Africa (SSA) have limited access to clean water. People in this region, and more specifically females, suffer from acute water scarcity problems and are compelled to spend much of their time fetching water for domestic uses such as drinking and washing. Apart from domestic use, water contributes to the food security status of people in vulnerable regions like SSA through its effects on agriculture and livestock: livestock needs water to grow, and agriculture requires enormous quantities of water for irrigation. The main objective of this study is to explore the association between access to water and individuals’ food security status. Data from the 2017 Gallup World Poll (GWP) for SSA were analyzed (n=35,000). The target population in the GWP is the entire civilian, non-institutionalized population aged 15 and older. All sample selection is probability-based and nationally representative, and Gallup surveys an average of 1,000 individuals per country. Three questions related to water (i.e., water quality, availability of water for crops, and availability of water for livestock) were used as the exposure variables. The Food Insecurity Experience Scale (FIES) was used as the outcome variable. FIES measures individuals’ food security status and is composed of eight questions with simple dichotomous responses (1=Yes and 0=No). Different statistical analyses, such as descriptive statistics, cross-tabulations, and binary logistic regression, form the basis of this study. Results from descriptive analyses showed that more than 50% of the respondents had no access to enough water for crops and livestock. More than 85% of respondents were categorized as “food insecure”. Findings from cross-tabulation analyses showed that food security status was significantly associated with water quality (0.135; P=0.000), water for crops (0.106; P=0.000), and water for livestock (0.112; P=0.000).
In regression analyses, the probability of being food insecure increased among people who expressed no satisfaction with water quality (OR=1.884, 95% CI 1.768-2.008), not enough water for crops (OR=1.721, 95% CI 1.616-1.834), and not enough water for livestock (OR=1.706 (1.819)). In conclusion, it should be noted that water access affects food security status in SSA.
Keywords: water access, agriculture, livestock, FIES
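For readers unfamiliar with how such odds ratios are derived, the short sketch below computes an OR and its 95% Wald confidence interval from a hypothetical 2x2 table (the counts are invented for illustration, not taken from the GWP data):

```python
import numpy as np

# 2x2 table: rows = dissatisfied/satisfied with water quality,
# columns = food insecure / food secure (invented illustrative counts)
a, b = 800, 200     # dissatisfied: insecure, secure
c, d = 600, 400     # satisfied:    insecure, secure

odds_ratio = (a * d) / (b * c)                       # cross-product ratio
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)           # SE of log(OR), Wald method
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

An OR above 1 with a CI excluding 1, as in the study's results, indicates significantly higher odds of food insecurity in the exposed group.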
Procedia PDF Downloads 152
263 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as selecting game members and game strategies based on the analysis of accumulated sports data, has been widely attempted. In fact, in the NBA, the basketball league where the world's highest-level players gather, teams analyze the data using various statistical techniques to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, a task which is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions for player replacement are whether the lineup should be changed and whether a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be considered time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected all accumulated NBA data from the 2019-2020 season.
We then apply the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: recurrent neural network, players lineup, basketball data, decision making model
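The RNN-plus-NN combination described above can be sketched as follows. This toy forward pass uses random, untrained weights purely to show the data flow (a play-by-play scoring sequence feeds a recurrent state, and situation features are merged afterward); the actual model in the paper is trained on NBA play data:

```python
import numpy as np

rng = np.random.default_rng(1)
SEQ_LEN, N_FEAT, HIDDEN = 12, 4, 8          # 12 past plays, 4 situation features

# random placeholder weights (a trained model would learn these)
Wx = rng.normal(0, 0.1, HIDDEN)             # scalar play score -> hidden
Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))   # hidden -> hidden (recurrence)
Ws = rng.normal(0, 0.1, (HIDDEN, N_FEAT))   # situation features -> hidden
w_out = rng.normal(0, 0.1, HIDDEN)          # hidden -> predicted score

def predict_score(play_scores, situation):
    h = np.zeros(HIDDEN)
    for s in play_scores:                   # simple Elman-style recurrence (the RNN part)
        h = np.tanh(Wx * s + Wh @ h)
    h = np.tanh(h + Ws @ situation)         # merge game situation (the NN part)
    return float(w_out @ h)

situation = np.array([1.0, 0.0, 0.5, 0.2])  # hypothetical features: quarter, score diff, ...
score_a = predict_score(rng.integers(0, 4, SEQ_LEN), situation)
score_b = predict_score(rng.integers(0, 4, SEQ_LEN), situation)
best = "A" if score_a >= score_b else "B"   # keep the lineup or switch to Small Ball
print(f"predicted contributions: A={score_a:.3f}, B={score_b:.3f}; pick lineup {best}")
```

Comparing the predicted score contribution of the current lineup against a candidate Small Ball lineup, as in the last lines, is the decision the paper's model supports.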
Procedia PDF Downloads 134
262 Modeling the Present Economic and Social Alienation of Working Class in South Africa in the Musical Production ‘from Marikana to Mahagonny’ at Durban University of Technology (DUT)
Authors: Pamela Tancsik
Abstract:
The stage production in 2018, titled ‘From Marikana to Mahagonny’, began with a prologue in the form of the award-winning documentary ‘Miners Shot Down’ by Rehad Desai, followed by Brecht/Weill’s song play or scenic cantata ‘Mahagonny’, premièred in Baden-Baden in 1927. The central directorial concept of the DUT musical production ‘From Marikana to Mahagonny’ was to show a connection between the socio-political alienation of mineworkers in present-day South Africa and Brecht’s alienation effect in his scenic cantata ‘Mahagonny’. Marikana is a mining town about 50 km west of South Africa’s capital Pretoria. Mahagonny is a fantasy name for a utopian mining town in the United States. The characters, setting, and lyrics refer to America with songs like ‘Benares’ and ‘Moon of Alabama’ and the use of typical American inventions such as dollars, saloons, and the telephone. The six singing characters in ‘Mahagonny’ all have typical American names: Charlie, Billy, Bobby, and Jimmy, and the two girls they meet later are called Jessie and Bessie. The four men set off to seek Mahagonny. For them, it is the ultimate dream destination promising the fulfilment of all their desires, such as girls, alcohol, and dollars – in short, materialistic goals. Instead of finding a paradise, they experience how money, the practice of exploitative capitalism, and the lack of any morals and humanity destroy their lives. In the end, Mahagonny is demolished by a hurricane, an event which happened in 1926 in the United States. ‘God’ in person arrives disillusioned and bitter, complaining about violent and immoral mankind, and in the end he sends them all to hell. Charlie, Billy, Bobby, and Jimmy reply that this punishment does not mean anything to them because they have already been in hell for a long time – hell on earth is a reality, so the threat of hell after life is meaningless.
Human life was also taken during the stand-off between striking mineworkers and the South African police on 16 August 2012. Miners from the Lonmin Platinum Mine went on an illegal strike, equipped with bush knives and spears. They were striking because their living conditions had never improved; they still lived in muddy shacks with no running water and electricity. Wages were as low as R4,000 (South African Rands), equivalent to just over 200 Euro per month. By August 2012, the negotiations between Lonmin management and the mineworkers’ unions, asking for a minimum wage of R12,500 per month, had failed. Police were sent in by the Government, and when the miners did not withdraw, the police shot at them. 34 were killed, some by bullets in their backs while running away and trying to hide behind rocks. In the musical play ‘From Marikana to Mahagonny’, audiences in South Africa are confronted with a documentary about Marikana, followed by Brecht/Weill’s scenic cantata, highlighting the tragic parallels between the Mahagonny story and characters from 1927 America and the Lonmin workers today in South Africa, showing that in 95 years, capitalism has not changed.
Keywords: alienation, Brecht/Weill, Mahagonny, Marikana/South Africa, musical theatre
Procedia PDF Downloads 98
261 A Serum- and Feeder-Free Culture System for the Robust Generation of Human Stem Cell-Derived CD19+ B Cells and Antibody-Secreting Cells
Authors: Kirsten Wilson, Patrick M. Brauer, Sandra Babic, Diana Golubeva, Jessica Van Eyk, Tinya Wang, Avanti Karkhanis, Tim A. Le Fevre, Andy I. Kokaji, Allen C. Eaves, Sharon A. Louis, Nooshin Tabatabaei-Zavareh
Abstract:
Long-lived plasma cells are rare, non-proliferative B cells generated from antibody-secreting cells (ASCs) following an immune response to protect the host against pathogen re-exposure. Despite their therapeutic potential, the lack of in vitro protocols in the field makes it challenging to use B cells as a cellular therapeutic tool. As a result, there is a need to establish robust and reproducible methods for the generation of B cells. To address this, we have developed a culture system for generating B cells from hematopoietic stem and/or progenitor cells (HSPCs) derived from human umbilical cord blood (CB) or pluripotent stem cells (PSCs). HSPCs isolated from CB were cultured using the StemSpan™ B Cell Generation Kit and produced CD19+ B cells at a frequency of 23.2 ± 1.5% and 59.6 ± 2.3%, with a yield of 91 ± 11 and 196 ± 37 CD19+ cells per input CD34+ cell on culture days 28 and 35, respectively (n = 50 - 59). CD19+IgM+ cells were detected at a frequency of 31.2 ± 2.6% and were produced at a yield of 113 ± 26 cells per input CD34+ cell on culture day 35 (n = 50 - 59). The B cell receptor loci of CB-derived B cells were sequenced to confirm V(D)J gene rearrangement. ELISpot analysis revealed that ASCs were generated at a frequency of 570 ± 57 per 10,000 day 35 cells, with an average IgM+ ASC yield of 16 ± 2 cells per input CD34+ cell (n = 33 - 42). PSC-derived HSPCs were generated using the STEMdiff™ Hematopoietic - EB reagents and differentiated to CD10+CD19+ B cells with a frequency of 4 ± 0.8% after 28 days of culture (n = 37, 1 embryonic and 3 induced pluripotent stem cell lines tested). Subsequent culture of PSC-derived HSPCs increased CD19+ frequency and generated ASCs from 1 - 2 iPSC lines. 
This method is the first report of a serum- and feeder-free system for the generation of B cells from CB and PSCs, enabling further B lineage-specific research for potential future clinical applications.
Keywords: stem cells, B cells, immunology, hematopoiesis, PSC, differentiation
Procedia PDF Downloads 59
260 Magnetic Resonance Imaging in Cochlear Implant Patients without Magnet Removal: A Safe and Effective Workflow Management Program
Authors: Yunhe Chen, Xinyun Liu, Qian Wang, Jianan Li
Abstract:
Background: Cochlear implants (CIs) are currently the primary effective treatment for severe or profound sensorineural hearing loss. As China's population ages and the number of young children rises, the demand for MRI among CI patients is expected to increase. Methods: We reviewed the MRI cases of 25 CI patients between 2015 and 2024 and assessed imaging and auditory outcomes and adverse reactions. An adverse event record sheet and an accompanying medication sheet were used to record follow-up measures. Results: CI patients undergoing MRI may face risks such as artifacts, pain, redness, swelling, tissue damage, bleeding, and magnet displacement or demagnetization. Twenty-five CI patients in our hospital were reviewed. Seven patients underwent 3.0 T MRI; the others underwent 1.5 T MRI. By manufacturer, 18 implants were from Austria, 5 from Australia, and 2 from Nurotron. Among them, one patient with bilateral CIs underwent a 1.5 T MRI examination after head pressure bandaging, and the left magnet was displaced (CI24RE Series, Australia). This patient underwent surgical replacement of the magnet under general anesthesia. Six days after the operation, following the reactivation of the external device, the patient's feedback indicated that the performance of the cochlear implant was consistent with previous results. Based on the experience of our hospital, we propose a feasible management scheme for the MRI examination procedure for CI patients. This plan should include a module for confirming MRI imaging parameters, informed consent, educational materials for patients, and other safety measures to ensure that patients receive imaging results safely and effectively and to simplify clinical workflow. Conclusion: As indications for both MRI and cochlear implantation expand, the number of MRI studies recommended for patients with cochlear implants will also increase.
The process and management scheme proposed in this study can help obtain imaging results safely and effectively and reduce clinical stress.
Keywords: cochlear implantation, MRI, magnet, displacement
Procedia PDF Downloads 15
259 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance
Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang
Abstract:
The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research on normal and disuse (biglycan knockout) mouse bone in vitro by using both low-field and high-field NMR simultaneously. It is known that the total amplitude of T₂ relaxation envelopes, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, is a representation of the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration against the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity can be determined by using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions can be determined by a computational inversion relaxation method. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component of calcium hydroxyapatite (Ca₁₀(PO₄)₆(OH)₂), can be determined by using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in liquid-solid mixtures, we can conduct further research into the ¹H and ³¹P elements and environments of bone materials to identify the locations of bound water, such as OH⁻ groups, within the minerals and bone architecture.
We hypothesize that low-field NMR combined with high-field magic angle spinning NMR can provide a more complete interpretation of water distribution, particularly of bound water, and these data are important to assess bone quality and predict the mechanical behavior of bone.
Keywords: bone, mice bone, NMR, water in bone
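The computational inversion mentioned above amounts to fitting the CPMG decay as a sum of exponentials over a grid of candidate T₂ values. A common approach is Tikhonov-regularized non-negative least squares; the sketch below applies it to a synthetic two-pool decay (the T₂ values, pool amplitudes, and regularization strength are illustrative choices, not measured bone parameters):

```python
import numpy as np
from scipy.optimize import nnls

# Fit a CPMG echo-train decay as a sum of exponentials on a log-spaced T2 grid.
t = np.linspace(0.1, 200.0, 400)                 # echo times, ms
T2_grid = np.logspace(-1, 3, 60)                 # candidate T2 values, ms
K = np.exp(-t[:, None] / T2_grid[None, :])       # kernel matrix (decay basis)

# synthetic two-pool signal: 'bound' water (T2 ~ 1 ms) + 'mobile' water (T2 ~ 50 ms)
signal = 0.4 * np.exp(-t / 1.0) + 0.6 * np.exp(-t / 50.0)

alpha = 0.1                                      # Tikhonov regularization strength
A = np.vstack([K, alpha * np.eye(len(T2_grid))]) # augmented least-squares system
b = np.concatenate([signal, np.zeros(len(T2_grid))])
amplitudes, _ = nnls(A, b)                       # non-negative T2 spectrum

peak_T2 = T2_grid[np.argmax(amplitudes)]
print(f"dominant T2 component near {peak_T2:.1f} ms")
```

The recovered amplitude spectrum over the T₂ grid plays the role of the pore-size distribution discussed in the abstract, with short-T₂ amplitude attributed to bound water and long-T₂ amplitude to mobile pore water.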
Procedia PDF Downloads 177
258 Sedimentological and Geochemical Characteristics of Aeolian Sediments and Their Implication for Sand Origin in the Yarlung Zangbo River Valley, Southern Qinghai-Tibetan Plateau
Authors: Na Zhou, Chun-Lai Zhang, Qing Li, Bingqi Zhu, Xun-Ming Wang
Abstract:
The understanding of the dynamics of aeolian sand in the Yarlung Zangbo River Valley (YLZBV), southern Qinghai-Tibetan Plateau, including its origins, transportation, and deposition, remains preliminary. In this study, we investigated the origin of aeolian sediments in the YLZBV by analyzing the grain-size distribution and geochemical composition of dune sediments collected from the wide river terraces. The major purpose is to characterize the sedimentological and geochemical compositions of these aeolian sediments, trace their sources, and understand their influencing factors. The grain-size and geochemistry variations, which showed a significant correlation between grain-size distribution and element abundances, give strong evidence that an important part of the aeolian sediments in the downstream areas was first derived from the upper reaches by intense fluvial processes. However, the sediments experienced a significant mixing process with local inputs and were reworked by regional wind transport. The diverse compositions and tight associations in the major and trace element geochemistry between the upstream and downstream aeolian sediments and the local detrital rocks, which were collected from the surrounding mountains, suggest that the upstream aeolian sediments originated from various close-range rock types and experienced intensive mixing processes via aeolian-fluvial dynamics. The sand mass transported by water and wind was roughly estimated to quantify the interplay between the aeolian and fluvial processes controlling the sediment transport, yield, and, ultimately, the shaping of the aeolian landforms in the mainstream of the YLZBV.
Keywords: grain size distribution, geochemistry, wind and water load, sand source, Yarlung Zangbo River Valley
Procedia PDF Downloads 97
257 The Role of Situational Factors in User Experience during Human-Robot Interaction
Authors: Da Tao, Tieyan Wang, Mingfu Qin
Abstract:
While social robots have been increasingly developed and rapidly applied in our daily life, how robots should interact with humans is still an urgent problem to be explored. Appropriate use of interactive behavior is likely to create a good user experience in human-robot interaction situations, which in turn can improve people’s acceptance of robots. This paper aimed to systematically and quantitatively examine the effects of several important situational factors (i.e., interaction distance, interaction posture, and feedback style) on user experience during human-robot interaction. A three-factor mixed-design experiment was adopted in this study, where subjects were asked to interact with a social robot in different interaction situations formed by combinations of varied interaction distance, interaction posture, and feedback style. A set of data on users’ behavioral performance, subjective perceptions, and eye movement measures was tracked, collected, and analyzed by repeated measures analysis of variance. The results showed that the three situational factors had no effects on behavioral performance in tasks during human-robot interaction. Interaction distance and feedback style yielded significant main effects and interaction effects on the proportion of fixation times. The proportion of fixation times on the robot was higher for the negative feedback style than for the positive feedback style. While the proportion of fixation times on the robot generally decreased with the increase of the interaction distance, it decreased more under the positive feedback style than under the negative feedback style. In addition, there were significant interaction effects on pupil diameter between interaction distance and posture. As interaction distance increased, mean pupil diameter became smaller in side interaction, while it became larger in frontal interaction. Moreover, the three situational factors had significant interaction effects on user acceptance of the interaction mode.
The findings are helpful in understanding the underlying mechanism of user experience in human-robot interaction situations and provide important implications for the design of robot behavioral expression and for optimal strategies to improve user experience during human-robot interaction.
Keywords: social robots, human-robot interaction, interaction posture, interaction distance, feedback style, user experience
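As a deliberately simplified illustration of the statistical approach (the study itself used a full three-factor mixed-design repeated measures ANOVA, which is more involved), the sketch below runs a one-way ANOVA on simulated fixation-time proportions across three interaction distances; all means and spreads are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 20                                           # simulated subjects per condition
near = rng.normal(0.60, 0.05, n)                 # fixation-time proportion at near distance
mid = rng.normal(0.55, 0.05, n)
far = rng.normal(0.45, 0.05, n)                  # proportion decreases with distance

f_stat, p_value = stats.f_oneway(near, mid, far) # one-way ANOVA across distances
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

A significant F here mirrors the reported main effect of interaction distance on fixation measures; the real analysis additionally models within-subject correlation and the posture and feedback-style factors.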
Procedia PDF Downloads 133
256 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location that it operates in. Each product has its sell-in and sell-out time series data, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution of using machine learning models for forecasting. Similar products are combined such that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is developed to be flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how we can use the machine learning development environment on Amazon Web Services (AWS) to explore a set of forecasting models and create business intelligence dashboards that can be used with the existing demand planning tools in Nestlé. We explored recent deep neural networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%.
The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
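The ensemble idea can be illustrated with simple stand-ins: a seasonal-naive forecaster playing the role of DeepAR, a linear regression on lagged sell-out playing the role of XGBoost (and carrying the sell-out-to-sell-in link), and inverse-error weights combining the two. All series and parameters below are synthetic placeholders, not Nestlé data or the production pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 120
t = np.arange(weeks)
# synthetic weekly series: sell-out with yearly seasonality; sell-in tracks it with a lag
sell_out = 100 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, weeks)
sell_in = np.roll(sell_out, 2) + rng.normal(0, 3, weeks)

train, valid = slice(0, 100), slice(100, 120)

# forecaster 1: seasonal-naive (stand-in for the DeepAR global model);
# wraparound indexing in the first year is harmless for this sketch
f1 = sell_in[t - 52]

# forecaster 2: linear regression on lagged sell-out (stand-in for XGBoost);
# this is where the sell-out -> sell-in interlinking enters
X = np.column_stack([np.ones(weeks), np.roll(sell_out, 2)])
coef, *_ = np.linalg.lstsq(X[train], sell_in[train], rcond=None)
f2 = X @ coef

# inverse-MAE ensemble weights (normally fit on a window separate from the
# final evaluation; reused here to keep the sketch short)
e1 = np.mean(np.abs(f1[valid] - sell_in[valid]))
e2 = np.mean(np.abs(f2[valid] - sell_in[valid]))
w1 = (1 / e1) / (1 / e1 + 1 / e2)
w2 = 1.0 - w1
ensemble = w1 * f1 + w2 * f2
mae_ens = np.mean(np.abs(ensemble[valid] - sell_in[valid]))
print(f"MAE: naive={e1:.2f}, regression={e2:.2f}, ensemble={mae_ens:.2f}")
```

Because the ensemble is a convex combination of the two forecasts, its MAE is never worse than the weighted average of the members' MAEs, which is the basic motivation for combining a global sequence model with a feature-based regressor.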
Procedia PDF Downloads 276
255 Bodily Liberation and Spiritual Redemption of Black Women in Beloved: From the Perspective of Ecofeminism
Authors: Wang Huiwen
Abstract:
Since its release, Toni Morrison's novel Beloved has garnered significant international recognition, and its adaptation of a historical account has profoundly affected readers and scholars, evoking a visceral understanding of the suffering endured by black slaves. The ecofeminist approach has garnered more attention in recent times. The emergence of ecofeminism may be attributed to the feminist movement, and it has subsequently evolved into several branches, including cultural ecofeminism, social ecofeminism, and socialist ecofeminism, each of which is developing its own specific characteristics. These branches hold differing perspectives, yet they all converge on a key principle: the interconnectedness between the subjugation of women and the exploitation of nature can be traced back to a common underlying cognitive framework. Scholarly investigations into the novel Beloved have primarily centered on cultural interpretations of the emancipation of African American women, with a predominant lens rooted in cultural ecofeminism. This thesis aims to analyze Morrison's feminist beliefs in the novel Beloved by integrating socialist and cultural ecofeminist perspectives, seeking to challenge the limitations of essentialism within ecofeminism while also proposing a strategy to address exploitation and dismantle the oppressive structures depicted in Beloved. This thesis examines the white patriarchal oppression system underlying the relationships between men and women, blacks and whites, and man and nature as shown in the novel. What black women have been deprived of, compared with black men, white women, and white men, is a main thread of this research, while nature is a key complement in each chapter to their loss. The attainment of spiritual redemption and ultimate freedom is contingent upon the social revolution that enables bodily emancipation, both of which are indispensable for black women.
The weighty historical pains, traumatic recollections, and compromised sense of self prompted African slaves to embark on a quest for personal redemption. The restoration of the bond between black men and women, as well as the relationship between black individuals and nature, is a clear and undeniable pathway towards the final freedom of black women in the novel Beloved.
Keywords: Beloved, ecofeminism, black women, nature, essentialism
Procedia PDF Downloads 66
254 Differential Impacts of Whole-Growth-Duration Warming on the Grain Yield and Quality between Early and Late Rice
Authors: Shan Huang, Guanjun Huang, Yongjun Zeng, Haiyuan Wang
Abstract:
The impacts of whole-growth-duration warming on grain yield and quality in double rice cropping systems remain largely unknown. In this study, a two-year field whole-growth-duration warming experiment was conducted with two inbred indica rice cultivars (Zhongjiazao 17 and Xiangzaoxian 45) for the early season and two hybrid indica rice cultivars (Wanxiangyouhuazhan and Tianyouhuazhan) for the late season. The results showed that whole-growth warming did not affect early rice yield but significantly decreased late rice yield, which was caused by the decreased grain weight that may be related to the increased plant respiration and reduced translocation of dry matter accumulated during the pre-heading phase under warming. Whole-growth warming improved the milling quality of late rice but decreased that of early rice; however, the chalky rice rate and chalkiness degree were increased by 20.7% and 33.9% for early rice and by 37.6% and 51.6% for late rice under warming, respectively. We found that the crude protein content of milled rice was significantly increased by warming in both early and late rice, which would result in a deterioration of eating quality. Besides, compared with the control treatment, the setback of late rice was significantly reduced by 17.8% under warming, while that of early rice was not significantly affected. These results suggest that the negative impacts of whole-growth warming on grain quality may be more severe in early rice than in late rice. Therefore, adaptation in both rice breeding and agronomic practices is needed to alleviate the effects of climate warming on the production of double rice cropping systems. Climate-smart agricultural practices ought to be implemented to mitigate the detrimental effects of warming on rice grain quality.
For instance, fine-tuning the application rate and timing of inorganic nitrogen fertilizers, along with the introduction of organic amendments and the cultivation of heat-tolerant rice varieties, can help reduce the negative impact of rising temperatures on rice quality. Furthermore, to comprehensively understand the influence of climate warming on rice grain quality, future research should encompass a wider range of rice cultivars and experimental sites.
Keywords: climate warming, double rice cropping, dry matter, grain quality, grain yield
Procedia PDF Downloads 44
253 Insight into Localized Fertilizer Placement in Major Cereal Crops
Authors: Solomon Yokamo, Dianjun Lu, Xiaoqin Chen, Huoyan Wang
Abstract:
The current ‘high input-high output’ nutrient management model, based on homogeneous spreading over the entire soil surface, remains a key challenge in China’s farming systems, leading to low fertilizer use efficiency and environmental pollution. Localized placement of fertilizer (LPF) in crop root zones has been proposed as a viable approach to boost crop production while reducing environmental pollution. To assess the potential benefits of LPF on three major crops—wheat, rice, and maize—a comprehensive meta-analysis was conducted, encompassing 85 field studies published from 2002 to 2023. We further validated the practicability and feasibility of one-time root-zone N management based on LPF for the three field crops. The meta-analysis revealed that LPF significantly increased the yields of the selected crops (13.62%) and the nitrogen recovery efficiency (REN) (33.09%) while reducing cumulative nitrous oxide (N₂O) emissions (17.37%) and ammonia (NH₃) volatilization (60.14%) compared to conventional surface application (CSA). Higher grain yield and REN were achieved with an optimal fertilization depth (FD) of 5-15 cm, moderate N rates, combined NPK application, one-time deep fertilization, and coarse-textured and slightly acidic soils. Field validation experiments showed that localized one-time root-zone N management without topdressing increased maize (6.2%), rice (34.6%), and wheat (2.9%) yields while saving N fertilizer (3%), and it also increased the net economic benefits (23.71%) compared to CSA. A soil incubation study further proved the potential of LPF to enhance the retention and availability of mineral N in the root zone over an extended period. Thus, LPF could be an important fertilizer management strategy and should be extended to other less-developed and developing regions to achieve the triple benefit of food security, environmental quality, and economic gains.
Keywords: grain yield, LPF, NH₃ volatilization, N₂O emission, N recovery efficiency
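The pooled yield responses above are the kind of effect a meta-analysis produces. A minimal sketch of the standard calculation, using the log response ratio with inverse-variance weights on invented study values (not the 85 compiled studies):

```python
import numpy as np

# per-study treatment (LPF) and control (CSA) yields, t/ha -- invented placeholders
yield_lpf = np.array([7.2, 6.8, 8.1, 5.9])
yield_csa = np.array([6.4, 6.1, 7.0, 5.3])
var = np.array([0.02, 0.03, 0.01, 0.04])      # assumed variance of each study's lnRR

lnRR = np.log(yield_lpf / yield_csa)          # log response ratio per study
w = 1.0 / var                                 # inverse-variance weights
pooled = np.sum(w * lnRR) / np.sum(w)         # fixed-effect pooled lnRR
pct_change = (np.exp(pooled) - 1) * 100       # back-transform to percent yield change
print(f"pooled yield response: +{pct_change:.1f}%")
```

A full analysis would typically use a random-effects model with a between-study variance component, but the lnRR-plus-weights machinery is the same.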
Procedia PDF Downloads 20
252 Synthesis of Magnetic Plastic Waste-Reduced Graphene Oxide Composite and Its Application in Dye Adsorption from Aqueous Solution
Authors: Pamphile Ndagijimana, Xuejiao Liu, Zhiwei Li, Yin Wang
Abstract:
The valorization of plastic wastes as a mitigation strategy is attracting researchers’ attention, since these wastes have raised serious environmental concerns. Plastic wastes have been reported to adsorb organic pollutants in the water environment and to be the main vector of those pollutants, especially dyes, in aquatic environments, a serious water pollution concern. Management technologies for plastic wastes such as landfilling, incineration, and energy recovery have been adopted to handle those wastes before they are released into the environment. However, these are far from widely accepted due to their associated environmental pollution, lack of landfill space, and high cost. Therefore, modification is necessary to turn plastic waste into green adsorbents for water applications. Current routes for modifying plastic into adsorbents are based on combustion, but they suffer from air pollution as well as high cost. Thus, a green strategy for modifying plastic into adsorbents is highly desirable. Furthermore, recent studies suggest that combining plastic wastes with other solid carbon materials could promote their application in water treatment. Herein, we present new insight into using plastic waste-based materials as future green adsorbents. A magnetic plastic-reduced graphene oxide (MPrGO) composite was synthesized by a cross-linking method and applied to remove methylene blue (MB) from aqueous solution. The following advantages were achieved: (i) the density of the plastic and the reduced graphene oxide was enhanced; (ii) no secondary pollution from black color in solution; (iii) only a small amount of graphene oxide (1%) was linked onto 10 g of plastic waste, yet the composite exhibited high removal efficiency; (iv) easy recovery of the adsorbent from water. Low concentrations of MB (10-30 mg/L) were completely removed by 0.3 g of MPrGO.
Different characterization techniques, including XRD, SEM, FTIR, BET, XPS, and Raman spectroscopy, were performed, and the results confirmed conjugation between the plastic waste and graphene oxide. This MPrGO composite presents good prospects for the valorization of plastic waste and is a promising composite material for water treatment.
Keywords: plastic waste, graphene oxide, dye, adsorption
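The removal performance above is conventionally expressed through two standard batch-adsorption quantities: removal efficiency and equilibrium adsorption capacity. A minimal sketch using hypothetical batch conditions consistent with the reported dose (0.3 g of MPrGO for 10-30 mg/L MB; the solution volume is assumed, not reported):

```python
def removal_efficiency(c0_mg_l: float, ce_mg_l: float) -> float:
    """Percent of dye removed from solution (C0 = initial, Ce = equilibrium)."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l * 100.0

def adsorption_capacity(c0_mg_l: float, ce_mg_l: float, volume_l: float, mass_g: float) -> float:
    """q_e in mg/g: dye mass removed per gram of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# Hypothetical batch test: 30 mg/L MB, 0.1 L solution, 0.3 g MPrGO, complete removal
print(removal_efficiency(30.0, 0.0))                  # 100.0 %
print(adsorption_capacity(30.0, 0.0, 0.1, 0.3))       # ≈ 10.0 mg/g
```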
251 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry
Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu
Abstract:
The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated in terms of particle number, while the mass concentrations of MPs and especially nanoplastics (NPs) remain unclear. In this study, microfiltration, ultrafiltration, and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes of two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types: polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3% and 93.7% in plants A and B, respectively. Among them, PP, PET, and PE were the dominant polymer types in wastewater, while PMMA, PS, and PA accounted for only a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9% and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiency than high-density PET in both plants. Since smaller particles pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of the larger MPs. Based on annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs, respectively, were released into the river each year.
Overall, this study investigated the mass concentrations of MPs and NPs over a wide size range of 0.01−1000 μm in wastewater, providing valuable information on the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce contamination. Additionally, the proposed method causes loss of MPs, and especially NPs, which can lead to underestimation of MPs/NPs. Further studies are recommended to address these challenges in quantifying MPs/NPs in wastewater.
Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS
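The plant-level removal rates quoted above follow directly from the influent and effluent mass concentrations. A quick check using the abstract's own numbers:

```python
def removal_rate(influent_ug_per_l: float, effluent_ug_per_l: float) -> float:
    """Mass-based removal rate across the treatment train, in percent."""
    return (influent_ug_per_l - effluent_ug_per_l) / influent_ug_per_l * 100.0

# Total MP+NP mass concentrations reported in the abstract (μg/L)
print(round(removal_rate(26.23, 1.75), 1))   # plant A: 93.3
print(round(removal_rate(11.28, 0.71), 1))   # plant B: 93.7
```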
250 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen City, is selected for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and number of trips all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, both aimed at reducing travel times for public transportation and non-motorized travel, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed. After parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, which had the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after the implementation of the policy, the share of public transportation in plans 1 and 2 increased by 14.04% and 9.86%, respectively, while the share of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased.
The measures can therefore be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and they can provide a reference for relevant departments when formulating transportation policies.
Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation
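For readers unfamiliar with the MNL model used above: mode-choice probabilities are a softmax over mode utilities. A minimal sketch with hypothetical utility values (not the paper's estimated coefficients):

```python
import math

def mnl_probabilities(utilities: dict[str, float]) -> dict[str, float]:
    """Multinomial logit: P(mode) = exp(V_mode) / sum over all modes of exp(V)."""
    exp_v = {mode: math.exp(v) for mode, v in utilities.items()}
    total = sum(exp_v.values())
    return {mode: ev / total for mode, ev in exp_v.items()}

# Hypothetical systematic utilities for one resident
v = {"public_transit": 0.8, "private_car": 1.2, "non_motorized": 0.2}
probs = mnl_probabilities(v)
print({m: round(p, 3) for m, p in probs.items()})
```

In the paper's setting, each utility V would be a linear function of the significant factors listed above (income, travel distance, and so on); the dictionary here simply stands in for those fitted utilities.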
249 Practice and Understanding of Fracturing Renovation for Risk Exploration Wells in Xujiahe Formation Tight Sandstone Gas Reservoir
Authors: Fengxia Li, Lufeng Zhang, Haibo Wang
Abstract:
The tight sandstone gas reservoir in the Xujiahe Formation of the Sichuan Basin has huge reserves, but its utilization rate is low. Fracturing and stimulation are indispensable technologies for unlocking its potential and achieving commercial exploitation. Slickwater is the most widely used fracturing fluid system in the fracturing and stimulation of tight reservoirs. However, its viscosity is low, its sand-carrying performance is poor, and the risk of sand blockage is high. Increasing the sand-carrying capacity by increasing the displacement rate increases the frictional resistance of the pipe string, degrading the drag reduction performance. Variable-viscosity slickwater can switch flexibly between different viscosities in real time online, effectively overcoming problems with sand carrying and drag reduction. Based on a self-developed indoor loop friction testing system, a visualization device for proppant transport, and a HAAKE MARS III rheometer, a comprehensive evaluation was conducted of the drag reduction, rheology, and sand-carrying performance of the variable-viscosity slickwater. The indoor experimental results show that: (1) by changing the concentration of drag-reducing agents, the viscosity of the slickwater can be varied between 2 and 30 mPa·s; (2) the drag reduction rate of the variable-viscosity slickwater is above 80%, and increasing the shear rate does not reduce the drag reduction rate of the fluid; (3) under indoor experimental conditions, variable-viscosity slickwater at 15 mPa·s can achieve effective carrying and uniform placement of proppant. The staged fracturing results for the JiangX well in the tight sandstone of the Xujiahe Formation show that the drag reduction rate of the variable-viscosity slickwater is 80.42%, and the daily production of the single layer after fracturing exceeds 50,000 cubic meters.
This study provides theoretical support and field experience for promoting the application of variable-viscosity slickwater in tight sandstone gas reservoirs.
Keywords: slickwater, hydraulic fracturing, dynamic sand laying, drag reduction rate, rheological properties
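The 80.42% figure above is a drag reduction rate, conventionally computed against plain water at the same flow rate. A minimal sketch with hypothetical pressure drops chosen only to match that scale:

```python
def drag_reduction_rate(dp_water_kpa: float, dp_slickwater_kpa: float) -> float:
    """DR% relative to plain water over the same loop section at the same flow rate."""
    return (dp_water_kpa - dp_slickwater_kpa) / dp_water_kpa * 100.0

# Hypothetical pressure drops (kPa) measured over the same test section
print(round(drag_reduction_rate(100.0, 19.58), 2))  # 80.42
```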
248 Application of Multilayer Perceptron and Markov Chain Analysis Based Hybrid-Approach for Predicting and Monitoring the Pattern of LULC Using Random Forest Classification in Jhelum District, Punjab, Pakistan
Authors: Basit Aftab, Zhichao Wang, Feng Zhongke
Abstract:
Land Use and Land Cover Change (LULCC) is a critical environmental issue with significant effects on biodiversity, ecosystem services, and climate change. This study examines the spatiotemporal dynamics of land use and land cover (LULC) across a three-decade period (1992–2022) in Jhelum District. The goal is to support sustainable land management and urban planning by combining remote sensing, GIS data, and observations from the Landsat 5 and 8 satellites to provide precise predictions of the trajectory of urban sprawl. To forecast LULCC patterns, this study proposes a hybrid strategy that combines Random Forest classification with Multilayer Perceptron (MLP) and Markov Chain analysis: to predict the dynamics of LULC change for the year 2035, a hybrid technique based on Multilayer Perceptron and Markov Chain Model analysis (MLP-MCA) was employed. The area of developed land has increased significantly, while the amounts of bare land, vegetation, and forest cover have all decreased, as the principal land types have changed under population growth and economic expansion. The study also found that between 1998 and 2023, the built-up area increased by 468 km² through the replacement of natural land cover, and it is estimated that urban expansion will reach 25.04% of the study area by 2035. The performance of the model was confirmed with an overall accuracy of 90% and a kappa coefficient of around 0.89. The findings underscore the importance of advanced predictive models in guiding sustainable urban development strategies and provide valuable insights for policymakers, land managers, and researchers in support of sustainable land use planning, conservation efforts, and climate change mitigation strategies.
Keywords: land use land cover, Markov chain model, multi-layer perceptron, random forest, sustainable land, remote sensing
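The Markov-chain half of the MLP-MCA hybrid projects future land-cover shares by repeatedly applying a class-transition matrix. A minimal sketch with a hypothetical transition matrix and class shares (not the study's calibrated values):

```python
def project_shares(shares: list[float], transition: list[list[float]], steps: int) -> list[float]:
    """Propagate land-cover class shares through a row-stochastic transition matrix."""
    for _ in range(steps):
        shares = [sum(shares[i] * transition[i][j] for i in range(len(shares)))
                  for j in range(len(shares))]
    return shares

# Hypothetical classes: [built-up, vegetation, bare land]; each row sums to 1
T = [[0.95, 0.03, 0.02],
     [0.10, 0.85, 0.05],
     [0.15, 0.10, 0.75]]
shares_2022 = [0.20, 0.55, 0.25]
shares_2035 = project_shares(shares_2022, T, steps=13)  # one step per year in this sketch
print([round(s, 3) for s in shares_2035])
```

Because the matrix rows are stochastic, the shares always sum to one; the MLP component in the study then spatializes these aggregate projections.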
247 A Comparative Study of Mechanisms across Different Online Social Learning Types
Authors: Xinyu Wang
Abstract:
In the context of the rapid development of Internet technology and the increasing prevalence of online social media, this study investigates the impact of digital communication on social learning. Through three behavioral experiments, we explore both affective and cognitive social learning in online environments. Experiment 1 manipulates the content of the experimental materials and two forms of feedback (emotional valence, sociability, and repetition) to verify whether individuals can achieve online affective social learning through reinforcement using two social learning strategies. Results reveal that both social learning strategies can support affective social learning through reinforcement, with feedback-based learning strategies outperforming frequency-dependent strategies. Experiment 2 similarly manipulates the content of the experimental materials and two forms of feedback to verify whether individuals can achieve online cognitive (knowledge) social learning through reinforcement using the two social learning strategies. Results show that, as with online affective social learning, individuals adopt both social learning strategies to achieve cognitive social learning through reinforcement, with feedback-based learning strategies again outperforming frequency-dependent strategies. Experiment 3 simultaneously observes online affective and cognitive social learning by manipulating the content of the experimental materials and feedback at different levels of social pressure. Results indicate that online affective social learning exhibits different learning effects under different levels of social pressure, whereas online cognitive social learning remains unaffected by social pressure, demonstrating more stable learning effects. Additionally, to explore the sustained effects of online social learning and differences in duration among its types, all three experiments incorporate two test time points.
Results reveal significant differences between pre- and post-test scores for online social learning in Experiments 2 and 3, whereas differences are less apparent in Experiment 1. To measure the sustained effects of online social learning accurately, the researchers conducted a mini-meta-analysis of all effect sizes for online social learning duration. Results indicate that although the overall effect size is small, the effect of online social learning weakens over time.
Keywords: online social learning, affective social learning, cognitive social learning, social learning strategies, social reinforcement, social pressure, duration
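A mini-meta-analysis of the kind described pools per-experiment effect sizes, typically by inverse-variance weighting. A minimal sketch with hypothetical effect sizes and variances (the study's actual values are not reported in the abstract):

```python
def fixed_effect_meta(effects: list[float], variances: list[float]) -> float:
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical per-experiment effect sizes (Cohen's d) and their sampling variances
d = [0.35, 0.20, 0.10]
var = [0.04, 0.02, 0.03]
pooled = fixed_effect_meta(d, var)
print(round(pooled, 3))
```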
246 Automatic Content Curation of Visual Heritage
Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz
Abstract:
Digitization and preservation of large heritage collections incur high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justifying the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52’000 items were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show based on the ones already displayed. The work presented here describes the steps taken to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song playlist algorithms, where artificial intelligence techniques, and more specifically deep-learning algorithms, have recently been used to facilitate playlist generation; promising results were obtained with Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We applied the same principles to a more challenging medium: posters. First, a convolutional autoencoder was trained to extract poster features, using the 52’000 digital posters as the training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster from the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected, where proximity is computed as the mean square distance between poster features.
To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. The manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answers with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time. The randomly produced sequences were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to these results, the proposed model generates better poster sequencing than random sampling, and our algorithm is sometimes able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and to provide a tool for curating large data sets, with a permanent renewal of the presented content.
Keywords: artificial intelligence, digital humanities, serendipity, design research
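The final selection step described above (the closest poster within the RNN-predicted cluster, by mean square distance between features) can be sketched as follows, with hypothetical low-dimensional features standing in for the autoencoder output:

```python
def mean_square_distance(a: list[float], b: list[float]) -> float:
    """Mean of squared per-dimension differences between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pick_next_poster(previous: list[float], cluster: dict[str, list[float]]) -> str:
    """Within the RNN-predicted cluster, pick the poster closest to the previous one."""
    return min(cluster, key=lambda name: mean_square_distance(previous, cluster[name]))

# Hypothetical 3-D autoencoder features for posters in the predicted cluster
cluster = {"poster_a": [0.9, 0.1, 0.3], "poster_b": [0.2, 0.8, 0.5]}
print(pick_next_poster([0.85, 0.15, 0.25], cluster))  # poster_a
```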
245 Big Data Analysis on the Development of Jinan’s Consumption Centers under the Influence of E-Commerce
Authors: Hang Wang, Xiaoming Gao
Abstract:
The rapid development of e-commerce has significantly transformed consumer behavior and urban consumption patterns worldwide. This study explores the impact of e-commerce on the development and spatial distribution of consumption centers, with a particular focus on Jinan City, China. Traditionally, urban consumption centers are defined by physical commercial spaces, such as shopping malls and markets; the rise of e-commerce, however, has introduced a shift toward virtual consumption hubs, with a corresponding impact on physical retail locations. Utilizing Gaode POI (Point of Interest) data, this research provides a comprehensive analysis of the spatial distribution of consumption centers in Jinan, comparing e-commerce-driven virtual consumption hubs with traditional physical consumption centers. The methodology involves gathering and analyzing POI data, focusing on logistics distribution points for e-commerce activity and mobile charging point locations as a proxy for offline consumption behavior. A spatial clustering technique is applied to examine the concentration of commercial activities and to identify emerging trends in consumption patterns. The findings reveal a clear differentiation between e-commerce and physical consumption centers in Jinan. E-commerce activities are dispersed across a wider geographic area, correlating closely with residential zones and logistics centers, while traditional consumption hubs remain concentrated around historical and commercial areas such as Honglou and the old city center. Additionally, the research identifies an ongoing transition within Jinan’s consumption landscape, with online and offline retail coexisting, though at different spatial and functional levels. This study contributes to urban planning by providing insights into how e-commerce is reshaping consumption behaviors and spatial structures in cities like Jinan.
By leveraging big data analytics, the research offers a valuable tool for urban designers and planners to adapt to the evolving demands of digital commerce and to optimize the spatial layout of city infrastructure to better serve the needs of modern consumers.
Keywords: big data, consumption centers, e-commerce, urban planning, Jinan
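One simple form of the spatial clustering step described above is grid-based density binning of POI coordinates; the abstract does not specify the exact technique used, so this is only an illustrative stand-in with hypothetical projected coordinates:

```python
from collections import Counter

def hotspot_cells(points: list[tuple[float, float]], cell_size: float, min_count: int):
    """Bin POI coordinates into a square grid and keep cells dense enough to be hubs."""
    counts = Counter((int(x // cell_size), int(y // cell_size)) for x, y in points)
    return {cell for cell, n in counts.items() if n >= min_count}

# Hypothetical projected POI coordinates (km): one dense cluster plus scattered points
pois = [(1.1, 1.2), (1.3, 1.4), (1.2, 1.1), (1.4, 1.3), (8.0, 8.0), (3.0, 9.0)]
print(hotspot_cells(pois, cell_size=1.0, min_count=3))  # {(1, 1)}
```

Density-based methods such as DBSCAN would serve the same purpose without a fixed grid; the binning version is shown here because it is the shortest self-contained illustration.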
244 In situ Investigation of PbI₂ Precursor Film Formation and Its Subsequent Conversion to Mixed Cation Perovskite
Authors: Dounya Barrit, Ming-Chun Tang, Hoang Dang, Kai Wang, Detlef-M. Smilgies, Aram Amassian
Abstract:
Several deposition methods have been developed for perovskite film preparation. The one-step spin-coating process has emerged as the more popular option thanks to its ability to produce films of different compositions, including mixed cation and mixed halide perovskites, which can stabilize the perovskite phase and produce phases with the desired band gap. The two-step method, however, is not understood in great detail. There is a significant need and opportunity to adapt the two-step process to mixed cation and mixed halide perovskites, but this requires a deeper understanding of the two-step conversion process, for instance when using different cations and mixtures thereof, to produce high-quality perovskite films with uniform composition. In this work, we demonstrate through in situ investigations that the conversion of PbI₂ to perovskite is largely dictated by the solvated state of the PbI₂ precursor film. Using time-resolved grazing incidence wide-angle X-ray scattering (GIWAXS) measurements during spin coating of PbI₂ from a DMF (dimethylformamide) solution, we show the film formation to be a sol-gel process involving three PbI₂-DMF solvate complexes: a disordered precursor (P₀) and ordered precursors (P₁, P₂), prior to PbI₂ formation at room temperature after 5 minutes. The ordered solvates are highly metastable and eventually disappear, but we show that performing the conversion from P₀, P₁, P₂, or PbI₂ leads to very different conversion behaviors and outcomes. We compare conversion behaviors using MAI (methylammonium iodide), FAI (formamidinium iodide), and mixtures of these cations, and show that conversion can occur spontaneously and quite rapidly at room temperature without requiring further thermal annealing.
We confirm this by demonstrating improvements in the morphology and microstructure of the resulting perovskite films, using techniques such as in situ quartz crystal microbalance with dissipation monitoring, SEM, and XRD.
Keywords: in situ GIWAXS, lead iodide, mixed cation, perovskite solar cell, sol-gel process, solvate phase
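When identifying solvate phases such as P₁ and P₂ in GIWAXS data, peak positions in reciprocal space are routinely converted to real-space lattice spacings via d = 2π/q. A one-line sketch with a hypothetical peak position (not a value reported in this work):

```python
import math

def d_spacing_angstrom(q_inv_angstrom: float) -> float:
    """Bragg d-spacing from a scattering-vector magnitude: d = 2*pi / q."""
    return 2 * math.pi / q_inv_angstrom

# Hypothetical solvate reflection at q = 0.9 Å⁻¹
print(round(d_spacing_angstrom(0.9), 2))  # 6.98 Å
```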
243 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China
Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han
Abstract:
Educational think tanks are widely regarded as a significant part of a nation’s soft power, promoting the scientific and democratic quality of educational policy making, and they play a critical role in bridging educational research in higher institutions and educational policy making. This study explores the concept, functions, and significance of educational think tanks in China and conceptualizes a three-dimensional framework for analyzing approaches to transforming research-based higher institutions into effective educational think tanks that serve educational policy making nationwide. Since 2014, the Ministry of Education of the P.R. China has been promoting a strategy of developing new types of educational think tanks in higher institutions, and this strategy was put on the agenda of the 13th Five-Year Plan for National Education Development released in 2017. In this context, a growing number of scholars have conducted studies putting forth strategies for promoting the development and transformation of new educational think tanks to serve the educational policy making process. Based on literature synthesis, policy text analysis, and analysis of theories about the policy making process and the relationship between educational research and policy making, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks; (b) what are the functional objectives of the new educational think tanks; (c) what are the organizational patterns and mechanisms of the new educational think tanks; and (d) through what approaches can traditional research-based higher institutions be developed or transformed into think tanks that effectively serve the educational policy making process?
The authors adopted a case study approach on five influential education policy study centers affiliated with top higher institutions in China and applied the three-dimensional conceptual framework to analyze their functional objectives, organizational patterns, and the academic pathways researchers use to contribute to the development of think tanks serving the education policy making process. Data was mainly collected through interviews with center administrators, leading researchers, and academic leaders in the institutions. Findings show that: (a) higher-institution-based think tanks serve multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem solving or policy making at the national, provincial, and city/county levels; (b) higher-institution-based think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) to transform research-based higher institutions into educational think tanks, the institutions must promote a paradigm shift toward issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinctive approaches to constructing think tanks, yet they also exposed obstacles and challenges, such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policy makers, weaknesses in empirical research methods, and lack of experience in trans-disciplinary collaboration. The authors finally put forth implications for think tank construction in China and abroad.
Keywords: education policy-making, educational research, educational think tank, higher institution
242 Experimental Study on the Heating Characteristics of Transcritical CO₂ Heat Pumps
Authors: Lingxiao Yang, Xin Wang, Bo Xu, Zhenqian Chen
Abstract:
Due to their outstanding environmental performance, higher heating temperatures, and excellent low-temperature performance, transcritical carbon dioxide (CO₂) heat pumps are receiving more and more attention. However, improperly set operating parameters have a serious negative impact on the performance of a transcritical CO₂ heat pump because of the properties of CO₂. In this study, the heat transfer characteristics of the gas cooler are studied based on a modified “three-stage” gas cooler, and the effects of three operating parameters (compressor speed, gas cooler water-inlet flow rate, and gas cooler water-inlet temperature) on the heating process of the system are investigated from the perspective of thermal quality and heat capacity. The results show that: in the heat transfer process of the gas cooler, the temperature distributions of CO₂ and water show a typical “two region” and “three zone” pattern; a rise in the cooling pressure of CO₂ increases the thermal quality on the CO₂ side of the gas cooler, which in turn raises the heating temperature of the system; nevertheless, the elevated thermal quality on the CO₂ side can exacerbate the heat capacity mismatch between the two sides of the gas cooler, thereby adversely affecting the system coefficient of performance (COP); furthermore, increasing the compressor speed mitigates the heat capacity mismatch caused by elevated thermal quality, which is exacerbated by decreasing the gas cooler water-inlet flow rate and raising the gas cooler water-inlet temperature. As a representative case, varying the compressor speed results in a 7.1°C increase in heating temperature within the experimental range, accompanied by a 10.01% decrease in COP and an 11.36% increase in heating capacity. This study not only provides an important reference for the theoretical analysis and control strategy of the transcritical CO₂ heat pump but can also guide related simulations and the design of the gas cooler.
However, the range of experimental parameters in the current study is small, and the conclusions drawn are not further analysed quantitatively. Expanding the range of parameters studied and proposing corresponding quantitative conclusions and universally applicable indicators would greatly increase the practical applicability of this work; this is the goal of our next study.
Keywords: transcritical CO₂ heat pump, gas cooler, heat capacity, thermal quality
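The COP figures discussed above are the ratio of heating capacity to electrical input, and the reported trade-off (higher heating temperature and capacity, lower COP) can be expressed as a simple percent change. A minimal sketch with hypothetical operating points, not the experimental data:

```python
def cop(heating_capacity_kw: float, compressor_power_kw: float) -> float:
    """Coefficient of performance: heat delivered per unit of electrical input."""
    return heating_capacity_kw / compressor_power_kw

def percent_change(before: float, after: float) -> float:
    """Signed relative change, in percent."""
    return (after - before) / before * 100.0

# Hypothetical operating points before and after raising the compressor speed:
# capacity rises, but compressor power rises faster, so COP drops
print(round(percent_change(cop(10.0, 2.5), cop(11.14, 3.09)), 2))
```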
241 The Fabrication and Characterization of a Honeycomb Ceramic Electric Heater with a Conductive Coating
Authors: Siming Wang, Qing Ni, Yu Wu, Ruihai Xu, Hong Ye
Abstract:
Porous electric heaters, compared to conventional electric heaters, exhibit excellent heating performance due to their large specific surface area. Porous electric heaters employ porous metallic materials or conductive porous ceramics as the heating element. The former attain only low heating power at a fixed current due to the low electrical resistivity of metal. Although the latter can bypass the inherent challenges of porous metallic materials, the fabrication process for conductive porous ceramics is complicated and costly. This work proposes a porous ceramic electric heater with a dielectric honeycomb ceramic as the substrate and a conductive surface coating as the heating element. The conductive coating was prepared by the sol-gel method using silica sol and methyl trimethoxysilane as raw materials and graphite powder as the conductive filler. The conductive mechanism and degradation behavior of the coating were studied by electrical resistivity and thermal stability analysis. The heating performance of the proposed heater was experimentally investigated by heating air and deionized water. The results indicate that electron transfer is achieved by forming a conductive network through contact between the graphite flakes. With 30 wt% graphite, the electrical resistivity of the conductive coating can be as low as 0.88 Ω∙cm. The conductive coating exhibits good electrical stability up to 500°C but degrades beyond 600°C due to the formation of many cracks in the coating caused by weight loss and thermal expansion. The results also show that the working medium has a great influence on the volume power density of the heater. With air under natural convection as the working medium, the volume power density attains 640.85 kW/m³, which can be increased fivefold by using deionized water as the working medium.
The proposed honeycomb ceramic electric heater has the advantages of a simple fabrication method, low cost, and high volume power density, demonstrating great potential in the fluid heating field.
Keywords: conductive coating, honeycomb ceramic electric heater, high specific surface area, high volume power density
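The volume power density reported above is Joule heating power normalized by heater volume, and the heating power follows from the coating resistance, R = ρL/A. A minimal sketch combining the measured resistivity (0.88 Ω∙cm) with hypothetical coating geometry, drive voltage, and heater volume (none of which are reported in the abstract):

```python
def coating_resistance_ohm(resistivity_ohm_cm: float, length_cm: float, cross_section_cm2: float) -> float:
    """R = rho * L / A for a coating treated as a uniform conductor."""
    return resistivity_ohm_cm * length_cm / cross_section_cm2

def volume_power_density_kw_m3(voltage_v: float, resistance_ohm: float, heater_volume_m3: float) -> float:
    """Joule heating P = U^2 / R, normalized by heater volume, in kW/m^3."""
    return (voltage_v ** 2 / resistance_ohm) / heater_volume_m3 / 1000.0

# Hypothetical geometry: 10 cm current path, 0.05 cm^2 cross-section, 0.5 L heater, 220 V
R = coating_resistance_ohm(0.88, 10.0, 0.05)
print(round(R, 1))                                       # 176.0 ohm
print(round(volume_power_density_kw_m3(220.0, R, 5e-4), 1))  # ≈ 550.0 kW/m³
```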
Procedia PDF Downloads 154
240 Unequal Contributions of Parental Isolates in Somatic Recombination of the Stripe Rust Fungus
Authors: Xianming Chen, Yu Lei, Meinan Wang
Abstract:
The dikaryotic basidiomycete fungus Puccinia striiformis causes stripe rust, one of the most important diseases of wheat and barley worldwide. The pathogen reproduces largely asexually, and asexual recombination has been hypothesized to be one of the mechanisms underlying its variation. To test this hypothesis and understand the genetic process of asexual recombination, somatic recombinant isolates were obtained under controlled conditions by inoculating susceptible host plants with a mixture of equal quantities of urediniospores from isolates with different virulence patterns, then selecting through a series of inoculations on host plants carrying genes for resistance to one of the parental isolates. The potential recombinant isolates were phenotypically characterized by virulence testing on the set of 18 wheat lines used to differentiate races of the wheat stripe rust pathogen, P. striiformis f. sp. tritici (Pst), for combinations of Pst isolates; or on both the wheat differentials and 12 barley differentials, used to identify races of the barley stripe rust pathogen, P. striiformis f. sp. hordei (Psh), for combinations of a Pst isolate and a Psh isolate. The progeny and parental isolates were also genotypically characterized with 51 simple sequence repeat and 90 single-nucleotide polymorphism markers. From nine combinations of parental isolates, 68 potential recombinant isolates were obtained, of which 33 (48.5%) had virulence patterns similar to one of the parental isolates and 35 (51.5%) had virulence patterns distinct from either parent. Of the 35 isolates with distinct virulence patterns, 11 were identified as races previously detected in natural collections and 24 as new races. The molecular marker data confirmed 66 of the 68 isolates as recombinants.
The percentages of parental marker alleles in the recombinant isolates ranged from 0.9% to 98.9% and differed significantly from equal proportions. Except for a couple of combinations, the greater or lesser contribution was not specific to any particular parental isolate: within the same combination, the same parent contributed more to some progeny isolates and less to others. Unequal contribution by parental isolates thus appears to be a general rule in somatic recombination of the stripe rust fungus, which may be used to distinguish asexual from sexual recombination when studying the evolutionary mechanisms of this highly variable fungal pathogen.
Keywords: molecular markers, Puccinia striiformis, somatic recombination, stripe rust
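Testing whether a recombinant's parental marker alleles depart significantly from the 1:1 ratio expected under equal parental contribution can be done with an exact binomial test. The sketch below is not the authors' analysis; the marker counts are hypothetical (the abstract reports only percentages), and the test statistic is a standard two-sided exact binomial test implemented from scratch.

```python
# Illustrative sketch (not the study's code): exact two-sided binomial
# test of a recombinant isolate's parental allele counts against the
# 1:1 ratio expected under equal parental contribution.
# The marker counts below are hypothetical.
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_binom_test(k, n, p=0.5):
    """Sum probabilities of all outcomes at least as extreme as k."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk + 1e-12)

# e.g. 120 of 141 informative markers inherited from parent A
p_value = two_sided_binom_test(120, 141)
print(p_value < 0.05)  # True: significant departure from 1:1
```

A skew as strong as 120/141 rejects equal contribution decisively, mirroring the abstract's finding that parental contributions ranging from 0.9% to 98.9% differ significantly from equal proportions.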
Procedia PDF Downloads 244
239 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives
Authors: Chao Wang
Abstract:
Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and each generation faces a completely different media landscape. The involvement of narrative images, moreover, opens up more possibilities for narrative text. "Images" are manufactured through the processes of aperture, camera shutter, and photosensitive development, recorded and stamped on paper or displayed on a computer screen, concretely saved. They exist in different forms as files, data, or evidence, as the ultimate record of events. Through the interface of media and network platforms and the particular visual field of the viewer, a bodily space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet full of real tension. The physical space of the sebum layer confuses the fact that physical objects exist and must be established under a perceived consensus; as at the scene itself, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes an immersive "topical similarity," leading contemporary communities of social practice, groups, and network users into a kind of illusion without presence, i.e., a non-real illusion. From a survey and discussion of the literature, the variable characteristics of time in digital film editing and production (for example, slicing, rupture, setting, and resetting) are analyzed. The interactive eBook has a unique interactivity of "waiting-greeting" and "expectation-response" that functionally opens the operation of the image narrative structure to more interpretations. Works combining digital editing and interactive technology are further analyzed for concept and results. After the digitization of interventional imaging and interactive technology, real events remain linked, and the media's handling of that relationship cannot be severed; this is examined through films, interactive art, and the discussion and analysis of practical cases.
Audiences need more rational thinking about the authenticity of the text carried by images.
Keywords: sebum layer, topical and similarity, interactive technology, image narrative
Procedia PDF Downloads 389
238 Potential Opportunity and Challenge of Developing Organic Rankine Cycle Geothermal Power Plant in China Based on an Energy-Economic Model
Authors: Jiachen Wang, Dongxu Ji
Abstract:
Geothermal power generation is a mature technology with zero carbon emissions and stable power output, and it could serve as an optimal base-load substitute in China's future decarbonized energy system. However, the development of geothermal power plants in China has stagnated for a decade owing to the underestimation of geothermal energy and insufficient policy support. A lack of understanding of the potential value of base-load technology and of its environmental benefits is the critical reason for this weak policy support. This paper proposes an energy-economic model to uncover the potential benefits of developing a geothermal power plant in Puer, including the value of base-load power generation and the associated environmental and economic benefits. Optimization of the Organic Rankine Cycle (ORC) for maximum power output and minimum levelized cost of electricity (LCOE) was first conducted, with the aim of finding the optimal working fluid, turbine inlet pressure, pinch point temperature difference, and superheat degree. The optimized ORC model was then fed into the energy-economic model to simulate the potential economic and environmental benefits. The impact of the geothermal power plant was tested under three scenarios: a carbon trading market, a direct subsidy per unit of electricity generated, and no policy support. In addition, the reservoir requirements, including geothermal temperature and mass flow rate, for geothermal power to be competitive with other renewables were determined. The results indicate that the ORC power plant has significant economic and environmental advantages over other renewable power generation technologies when a carbon trading market or subsidy support is implemented.
At the same time, developers must locate geothermal reservoirs with a minimum temperature of 130°C and a minimum mass flow rate of 50 kg/s to guarantee a profitable project under the no-policy scenario.
Keywords: geothermal power generation, optimization, energy model, thermodynamics
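The levelized cost of electricity that the ORC optimization minimizes can be sketched with the standard discounted-cash-flow definition. This is a generic LCOE formula, not the paper's model, and all inputs (capital cost, O&M cost, discount rate, capacity factor, lifetime) are hypothetical values chosen for illustration.

```python
# Hedged sketch (not the paper's energy-economic model): a minimal
# levelized cost of electricity (LCOE) calculation of the kind used to
# compare a geothermal ORC plant against other generation options.
# All numeric inputs below are hypothetical assumptions.

def lcoe(capital_cost, annual_om, annual_kwh, discount_rate, lifetime_yr):
    """LCOE = discounted lifetime cost / discounted lifetime generation."""
    disc_cost = capital_cost + sum(
        annual_om / (1 + discount_rate) ** t
        for t in range(1, lifetime_yr + 1))
    disc_energy = sum(
        annual_kwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_yr + 1))
    return disc_cost / disc_energy

# Assumed 1 MW plant at 90% capacity factor (base-load operation),
# $4M capital, $100k/yr O&M, 8% discount rate, 25-year lifetime
annual_kwh = 1000 * 8760 * 0.9
print(lcoe(4_000_000, 100_000, annual_kwh, 0.08, 25))  # $/kWh
```

The high capacity factor in the numerator of generation is what gives base-load geothermal its cost advantage over intermittent renewables at comparable capital cost, which is the comparison the abstract's scenarios explore.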
Procedia PDF Downloads 68
237 Bayesian Networks Scoping the Climate Change Impact on Winter Wheat Freezing Injury Disasters in Hebei Province, China
Authors: Xiping Wang, Shuran Yao, Liqin Dai
Abstract:
Many studies report that winters are getting warmer and minimum air temperatures are clearly rising, important evidence of climate warming. Exacerbated air temperature fluctuation, which tends to bring more severe weather variation, is another important consequence of recent climate change and has induced more crop disasters in certain regions. Hebei Province is an important winter wheat growing province in northern China that has recently endured more winter freezing injury, affecting local winter wheat crop management. A Bayesian network framework for winter wheat freezing injury assessment was established with the objectives of estimating, assessing, and predicting winter wheat freezing disasters in Hebei Province. In this framework, freezing disasters were classified into three severity degrees (SI) across the three types of freezing: freezing caused by severe cold at any time in winter, prolonged extreme cold during winter, and freeze-after-thaw early in the season after winter. The factors influencing winter wheat freezing SI include the time of freezing occurrence, growth status of seedlings, soil moisture, winter wheat variety, the longitude of the target region, and, most variable of all, the climate factors. The climate factors included in the framework are the daily mean and range of air temperature, the extreme minimum temperature and number of days during a severe cold weather process, the number of days with temperatures below critical values, and the accumulated negative temperature in a potential freezing event. The Bayesian network model was evaluated using actual weather data and crop records from selected sites in Hebei Province.
With the multi-stage influences of the various factors, the forecasting and assessment of the event-based target variables, freezing injury occurrence and its damage to winter wheat production, were shown to be well scoped by the Bayesian network model.
Keywords: Bayesian networks, climatic change, freezing injury, winter wheat
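The inference step in such a framework, computing a posterior over severity degrees given partial evidence, can be sketched with a toy discrete Bayesian network evaluated by enumeration. The network below is not the authors' model: its structure (two parents, minimum temperature severity and seedling status) and every conditional probability table are hypothetical, chosen only to show the mechanics.

```python
# Illustrative sketch, not the study's network: a toy discrete Bayesian
# network inferring freezing-injury severity (SI1 < SI2 < SI3) from two
# hypothetical parents. All probability tables are invented.

P_temp = {"severe": 0.2, "mild": 0.8}          # P(temp)
P_seedling = {"weak": 0.3, "strong": 0.7}      # P(seedling)
P_si = {                                       # P(SI | temp, seedling)
    ("severe", "weak"):   {"SI1": 0.10, "SI2": 0.30, "SI3": 0.60},
    ("severe", "strong"): {"SI1": 0.30, "SI2": 0.40, "SI3": 0.30},
    ("mild",   "weak"):   {"SI1": 0.60, "SI2": 0.30, "SI3": 0.10},
    ("mild",   "strong"): {"SI1": 0.85, "SI2": 0.10, "SI3": 0.05},
}

def si_posterior(temp=None, seedling=None):
    """P(SI | evidence) by enumerating the joint distribution."""
    post = {"SI1": 0.0, "SI2": 0.0, "SI3": 0.0}
    for t, pt in P_temp.items():
        if temp is not None and t != temp:
            continue
        for s, ps in P_seedling.items():
            if seedling is not None and s != seedling:
                continue
            for si, p in P_si[(t, s)].items():
                post[si] += pt * ps * p
    z = sum(post.values())
    return {si: p / z for si, p in post.items()}

print(si_posterior(temp="severe", seedling="weak"))  # posterior over SI
```

The same enumeration pattern scales (conceptually) to the full factor set in the abstract; real implementations use dedicated Bayesian network libraries rather than brute-force loops.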
Procedia PDF Downloads 410
236 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing CKD complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD; models that do not depend on it would enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults, gathered between 2001 and 2015 and provided by a private research institute and screening laboratory in Taiwan. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them by diagnostic data category. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models demonstrate the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data, and they can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
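The core idea, that non-laboratory features alone can carry useful CKD risk signal, can be illustrated with the simplest possible tree-based learner: a single decision stump that searches for the best feature/threshold split. This is a deliberately minimal stand-in for the study's Random Forest and XGBoost models, and the six screening records below are fabricated toy data, not from the cohort.

```python
# Minimal, self-contained sketch (not the study's model): a decision
# stump over non-laboratory features (age, BMI, waist circumference)
# as a stand-in for the tree ensembles used in the study.
# All records below are fabricated toy data.

def best_stump(rows, labels, features):
    """Return (feature, threshold, accuracy) of the best 1-split rule
    predicting positive when feature value >= threshold."""
    best = (None, None, 0.0)
    for f in features:
        for row in rows:
            thr = row[f]
            preds = [1 if r[f] >= thr else 0 for r in rows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if acc > best[2]:
                best = (f, thr, acc)
    return best

# Hypothetical screening records; label 1 = developed CKD at follow-up
rows = [
    {"age": 72, "bmi": 31, "waist": 104}, {"age": 68, "bmi": 29, "waist": 99},
    {"age": 35, "bmi": 22, "waist": 78},  {"age": 41, "bmi": 24, "waist": 83},
    {"age": 77, "bmi": 27, "waist": 96},  {"age": 29, "bmi": 21, "waist": 74},
]
labels = [1, 1, 0, 0, 1, 0]
print(best_stump(rows, labels, ["age", "bmi", "waist"]))  # → ('age', 68, 1.0)
```

Random Forest and XGBoost generalize this by combining many such splits across bootstrapped samples or boosted rounds; the point of the sketch is only that readily available anthropometric features can separate risk groups without laboratory values.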
Procedia PDF Downloads 106