Search results for: Peng Zhou
62 Exo-III Assisted Amplification Strategy through Target Recycling for Hg²⁺ Detection in Water: A GNP-Based Label-Free Colorimetry Employing a T-Rich Hairpin-Loop Metallobase
Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He
Abstract:
Due to the deleterious environmental and health effects of Hg²⁺ ions, various online detection methods, apart from the traditional analytical tools, have been developed by researchers. Biosensors, especially labeled, label-free, colorimetric and optical sensors, have advanced towards sensitive detection. However, there remains a gap in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity using unmodified gold nanoparticles (uGNPs) as an indicator. Two T-rich metallobase hairpin-loop structures, 5’-CTT TCA TAC ATA GAA AAT GTA TGT TTG-3’ (HgS1) and 5’-GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G-3’ (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-scale synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases at both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments then adsorb on the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in color from blue (aggregated state in the absence of Hg²⁺) to pink (dispersed state in the presence of Hg²⁺ and adsorption of ssDNA fragments) can be observed and analyzed through UV spectrometry. An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry with nanomolar detection using uGNPs has been achieved and is being further optimized to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute towards uGNP-based ultrasensitive, rapid, onsite, label-free colorimetric detection.
Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification
Procedia PDF Downloads 309
61 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology in complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to more efficiently extract features within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed in this paper to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
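The abstract above does not spell out how the Guided Image Reconstruction Module performs its resampling; the snippet below is a minimal, hypothetical sketch of one common way to turn a high-resolution image into a set of low-resolution images, using a space-to-depth (pixel-unshuffle) rearrangement in PyTorch. The function name, downscale factor and tensor sizes are assumptions for illustration, not details taken from the paper.

# Illustrative sketch only: space-to-depth resampling of one high-resolution image
# into a stack of low-resolution sub-images (assumed interpretation of the module).
import torch
import torch.nn.functional as F

def resample_to_low_res(x: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Rearrange an (N, C, H, W) tensor into (N, C*factor^2, H/factor, W/factor).

    Each output channel group is a spatially subsampled copy of the input, so the
    network processes several low-resolution views instead of one full-resolution map.
    """
    return F.pixel_unshuffle(x, downscale_factor=factor)

if __name__ == "__main__":
    img = torch.randn(1, 3, 512, 1024)           # synthetic road-scene tensor
    low_res_set = resample_to_low_res(img, 2)    # -> (1, 12, 256, 512)
    print(low_res_set.shape)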
Procedia PDF Downloads 20
60 Being an English Language Teaching Assistant in China: Understanding the Identity Evolution of Early-Career English Teachers in Private Tutoring Schools
Authors: Zhou Congling
Abstract:
The integration of private tutoring has emerged as an indispensable facet in the acquisition of language proficiency beyond formal educational settings. Notably, there has been a discernible surge in the demand for private English tutoring, specifically geared towards the preparation for internationally recognized gatekeeping examinations, such as IELTS, TOEFL, GMAT, and GRE. This trajectory has engendered an escalating need for English Language Teaching Assistants (ELTAs) operating within the realm of Private Tutoring Schools (PTSs). The objective of this study is to unravel the intricate process by which these ELTAs formulate their professional identities in the nascent stages of their careers as English educators, as well as to delineate their perceptions regarding their professional trajectories. The construct of language teacher identity is inherently multifaceted, shaped by an amalgamation of individual, societal, and cultural determinants, exerting a profound influence on how language educators navigate their professional responsibilities. This investigation seeks to scrutinize the experiential and influential factors that mold the identities of ELTAs in PTSs, particularly post the culmination of their language-oriented academic programs. Employing a qualitative narrative inquiry approach, this study aims to delve into the nuanced understanding of how ELTAs conceptualize their professional identities and envision their future roles. The research methodology involves purposeful sampling and the conduct of in-depth, semi-structured interviews with ten participants. Data analysis will be conducted utilizing Barkhuizen’s Short Story Analysis, a method designed to explore a three-dimensional narrative space, elucidating the intricate interplay of personal experiences and societal contexts in shaping the identities of ELTAs. The anticipated outcomes of this study are poised to contribute substantively to a holistic comprehension of ELTA identity formation, holding practical implications for diverse stakeholders within the private tutoring sector. This research endeavors to furnish insights into strategies for the retention of ELTAs and the enhancement of overall service quality within PTSs.Keywords: China, English language teacher, narrative inquiry, private tutoring school, teacher identity
Procedia PDF Downloads 54
59 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
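As a rough illustration of the "implicit recurrent Fourier layer" idea mentioned above (the same spectral layer applied repeatedly to deepen the operator without adding parameters), the following is a simplified one-dimensional sketch. The actual IU-FNO operates on 3D turbulence fields and adds a U-Net branch; the widths, mode count and iteration count below are arbitrary assumptions.

# Simplified 1D sketch of a weight-shared (implicit) recurrent Fourier layer.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, width: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (width * width)
        self.weights = nn.Parameter(scale * torch.randn(width, width, modes, dtype=torch.cfloat))

    def forward(self, x):                          # x: (batch, width, n_grid)
        x_ft = torch.fft.rfft(x, dim=-1)           # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(  # keep only the lowest modes
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)

class ImplicitFourierBlock(nn.Module):
    """Applies the same Fourier layer several times (implicit recurrence, shared weights)."""
    def __init__(self, width: int = 32, modes: int = 16, steps: int = 4):
        super().__init__()
        self.spectral = SpectralConv1d(width, modes)
        self.linear = nn.Conv1d(width, width, kernel_size=1)
        self.steps = steps

    def forward(self, x):
        for _ in range(self.steps):                # shared weights across iterations
            x = x + torch.nn.functional.gelu(self.spectral(x) + self.linear(x))
        return x

if __name__ == "__main__":
    field = torch.randn(4, 32, 64)                 # (batch, channels, grid points)
    print(ImplicitFourierBlock()(field).shape)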
Procedia PDF Downloads 72
58 Encouraging Collaboration and Innovation: The New Engineering Oriented Educational Reform in Urban Planning, Tianjin University, China
Authors: Tianjie Zhang, Bingqian Cheng, Peng Zeng
Abstract:
Engineering science and technology progress and innovation have become an important engine to promote social development. The reform exploration of "new engineering" in China has drawn extensive attention around the world, with its connotation as "to cultivate future diversified, innovative and outstanding engineering talents by taking ‘fostering character and civic virtue’ as the guide, responding to changes and shaping the future as the construction concept, and inheritance and innovation, crossover and fusion, coordination and sharing as the principal approach". In this context, Tianjin University, as a traditional Chinese university with advantages in engineering, further launched the CCII (Coherent-Collaborative-Interdisciplinary-Innovation) program, raising the cultivation idea of integrating new liberal arts education, multidisciplinary engineering education and personalized professional education. As urban planning practice in China has undergone the evolution of "physical planning -- comprehensive strategic planning -- resource management-oriented planning", planning education has also experienced the transmutation process of "building foundation -- urban scientific foundation -- multi-disciplinary integration". As a characteristic and advantageous discipline of Tianjin University, the major of Urban and Rural Planning, in accordance with the "CCII Program of Tianjin University", aims to build China's top and world-class major, and implements the following educational reform measures: 1. Adding corresponding English courses, such as advanced course on GIS Analysis, courses on comparative studies in international planning involving ecological resources and the sociology of the humanities, etc. 2. Holding "Academician Forum", inviting international academicians to give lectures or seminars to track international frontier scientific research issues. 3. Organizing "International Joint Workshop" to provide students with international exchange and design practice platform. 4. Setting up a business practice base, so that students can find problems from practice and solve them in an innovative way. Through these measures, the Urban and Rural Planning major of Tianjin University has formed a talent training system with multi-disciplinary cross integration and orienting to the future science and technology.Keywords: China, higher education reform, innovation, new engineering education, rural and urban planning, Tianjin University
Procedia PDF Downloads 120
57 The Relationship Between Sleep Characteristics and Cognitive Impairment in Patients with Alzheimer’s Disease
Authors: Peng Guo
Abstract:
Objective: This study investigates the clinical characteristics of sleep disorders (SD) in patients with Alzheimer's disease (AD) and their relationship with cognitive impairment. Methods: According to the inclusion and exclusion criteria for AD, 460 AD patients were consecutively enrolled at Beijing Tiantan Hospital from January 2016 to April 2022. Demographic data, including gender, age, age of onset, course of disease, years of education and body mass index, were collected. The Pittsburgh Sleep Quality Index (PSQI) scale was used to evaluate overall sleep status. AD patients with PSQI ≥ 7 were assigned to the AD with SD (AD-SD) group, and those with PSQI < 7 to the AD with no SD (AD-nSD) group. The overall cognitive function of AD patients was evaluated by the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) scales; memory was evaluated by the AVLT-immediate recall, AVLT-delayed recall and CFT-delayed memory scales; language was evaluated by the BNT scale; visuospatial ability was evaluated by CFT-imitation; executive function was evaluated by the Stroop-A, Stroop-B and Stroop-C scales; and attention was evaluated by the TMT-A, TMT-B, and SDMT scales. The correlation between cognitive function and PSQI score in the AD-SD group was analyzed. Results: Among the 460 AD patients, 173 cases (37.61%) had SD. There was no significant difference in gender, age, age of onset, course of disease, years of education or body mass index between the AD-SD and AD-nSD groups (P>0.05). The factors with significant differences in the PSQI scale between the AD-SD and AD-nSD groups included sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of sleeping medication and daytime dysfunction (P<0.05). Compared with the AD-nSD group, the total scores of the MMSE, MoCA, AVLT-immediate recall and CFT-imitation scales in the AD-SD group were significantly lower (P<0.01, P<0.01, P<0.01, P<0.05). In the AD-SD group, subjective sleep quality was significantly and negatively correlated with the scores of the MMSE, MoCA, AVLT-immediate recall and CFT-imitation scales (r=-0.277, P=0.000; r=-0.216, P=0.004; r=-0.253, P=0.001; r=-0.239, P=0.004), and daytime dysfunction was significantly and negatively correlated with the score of the AVLT-immediate recall scale (r=-0.160, P=0.043). Conclusion: The incidence of AD-SD is 37.61%. AD-SD patients have worse subjective sleep quality, longer time to fall asleep, shorter sleep time, lower sleep efficiency, more severe nighttime SD, more use of sleep medicine, and more severe daytime dysfunction. The overall cognitive function, immediate recall and visuospatial ability of AD-SD patients are significantly impaired and are closely correlated with the decline in subjective sleep quality. The impairment of immediate recall is highly correlated with daytime dysfunction in AD-SD patients.
Keywords: Alzheimer's disease, sleep disorders, cognitive impairment, correlation
Procedia PDF Downloads 30
56 Large-Scale Experimental and Numerical Studies on the Temperature Response of Main Cables and Suspenders in Bridge Fires
Authors: Shaokun Ge, Bart Merci, Fubao Zhou, Gao Liu, Ya Ni
Abstract:
This study investigates the thermal response of main cables and suspenders in suspension bridges subjected to vehicle fires, integrating large-scale gasoline pool fire experiments with numerical simulations. Focusing on a suspension bridge in China, the research examines the impact of wind speed, pool size, and lane position on flame dynamics and temperature distribution along the cables. The results indicate that higher wind speeds and larger pool sizes markedly increase the mass burning rate, causing flame deflection and non-uniform temperature distribution along the cables. Under a wind speed of 1.56 m/s, maximum temperatures reached approximately 960 ℃ near the base in emergency lane fires and 909 ℃ at 1.6 m height for slow lane fires, underscoring the heightened thermal risk from emergency lane fires. The study recommends a zoning strategy for cable fire protection, suggesting a 0-12.8 m protection zone with a target temperature of 1000 ℃ and a 12.8-20.8 m zone with a target temperature of 700 ℃, both with a 90-minute fire resistance. This approach, based on precise temperature distribution data from experimental and simulation results, provides a vital reference for the fire protection design of suspension bridge cables. Understanding cable temperature response during vehicle fires is crucial for developing fire protection systems, as it dictates necessary structural protection, fire resistance duration, and maximum temperatures for mitigation. Challenges of controlling environmental wind in large-scale fire tests are also addressed, along with a call for further research on fire behavior mechanisms and structural temperature response in cable-supported bridges under varying wind conditions. Conclusively, the proposed zoning strategy enhances the theoretical understanding of near-field temperature response in bridge fires, contributing significantly to the field by supporting the design of passive fire protection systems for bridge cables, safeguarding their integrity under extreme fire conditions.Keywords: bridge fire, temperature response, large-scale experiment, numerical simulations, fire protection
Procedia PDF Downloads 8
55 Hydrodynamics and Hydro-acoustics of Fish Schools: Insights from Computational Models
Authors: Ji Zhou, Jung Hee Seo, Rajat Mittal
Abstract:
Fish move in groups for foraging, reproduction, predator protection, and hydrodynamic efficiency. Schooling's predator protection involves the "many eyes" theory, which increases predator detection probability in a group. Reduced visual signature in a group scales with school size, offering per-capita protection. The ‘confusion effect’ makes it hard for predators to target prey in a group. These benefits, however, all focus on vision-based sensing, overlooking sound-based detection. Fish, including predators, possess sophisticated sensory systems for pressure waves and underwater sound. The lateral line system detects acoustic waves, while otolith organs sense infrasound, and sharks use an auditory system for low-frequency sounds. Among sound generation mechanisms of fish, the mechanism of dipole sound relates to hydrodynamic pressure forces on the body surface of the fish and this pressure would be affected by group swimming. Thus, swimming within a group could affect this hydrodynamic noise signature of fish and possibly serve as an additional protection afforded by schooling, but none of the studies to date have explored this effect. BAUVs with fin-like propulsors could reduce acoustic noise without compromising performance, addressing issues of anthropogenic noise pollution in marine environments. Therefore, in this study, we used our in-house immersed-boundary method flow and acoustic solver, ViCar3D, to simulate fish schools consisting of four swimmers in the classic ‘diamond’ configuration and discussed the feasibility of yielding higher swimming efficiency and controlling far-field sound signature of the school. We examine the effects of the relative phase of fin flapping of the swimmers and the simulation results indicate that the phase of the fin flapping is a dominant factor in both thrust enhancement and the total sound radiated into the far-field by a group of swimmers. For fish in the “diamond” configuration, a suitable combination of the relative phase difference between pairs of leading fish and trailing fish can result in better swimming performance with significantly lower hydroacoustic noise.Keywords: fish schooling, biopropulsion, hydrodynamics, hydroacoustics
Procedia PDF Downloads 59
54 In-Flight Radiometric Performance Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight one. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmosphere parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (L_H) and the low point (L_L) of the dynamic range can then be described as L_H = G × DN_H + B and L_L = B, respectively, where DN_H is equal to 2ⁿ − 1 (n is the quantization bit number of the payload). Meanwhile, the sensor’s response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr⁻¹m⁻²µm⁻¹ and −3.5 W·sr⁻¹m⁻²µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹m⁻²µm⁻¹ and the high point is 30.5 W·sr⁻¹m⁻²µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR, and the normalized SNR is about 59.6 when the mean value of radiance is equal to 11.0 W·sr⁻¹m⁻²µm⁻¹; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹m⁻²µm⁻¹. Moreover, in order to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. It is noted that the relative error for the calibration is within 6.6%.
Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
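A small numerical sketch of the calibration workflow described above: fit L = G × DN + B by least squares, then derive the dynamic-range endpoints and the response linearity. The DN/radiance pairs below are invented placeholders, and the 12-bit quantization depth is an assumption that is merely consistent with the reported dynamic range, not a stated specification.

# Illustrative calibration fit with made-up data (not the flight data in the abstract).
import numpy as np

dn = np.array([520.0, 1400.0, 3100.0])              # hypothetical target digital numbers
radiance = np.array([0.8, 8.1, 22.3])               # MODTRAN-style simulated radiances, W sr^-1 m^-2 um^-1

G, B = np.polyfit(dn, radiance, deg=1)              # least-squares line L = G*DN + B
n_bits = 12                                          # assumed quantization depth
dn_high = 2 ** n_bits - 1
L_low, L_high = B, G * dn_high + B                   # dynamic-range endpoints
linearity = np.corrcoef(dn, radiance)[0, 1]          # response linearity (correlation)

print(f"G = {G:.4g}, B = {B:.4g}")
print(f"dynamic range: {L_low:.2f} to {L_high:.2f} W sr^-1 m^-2 um^-1")
print(f"linearity = {linearity:.4f}")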
Procedia PDF Downloads 269
53 Insect Cell-Based Models: Australian Sheep Blowfly Lucilia Cuprina Embryo Primary Cell Line Establishment and Transfection
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is, therefore, critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to and uptake by target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA targeting candidate genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes resulting in knockdowns. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, aiming to provide an effective insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, providing promise for its future use in enhancing animal protection.
Keywords: Lucilia cuprina, primary cell line establishment, RNA interference, insect cell transfection
Procedia PDF Downloads 71
52 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)
Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami
Abstract:
Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of progress made in adhesive dentistry, composite restorations still fail due to secondary caries, which occur due to environmental factors in the oral cavity. Therefore, a precise assessment of the effective marginal sealing of restorations is highly required. The aim of this study was to evaluate gap formation at the composite/cavity wall interface with or without phosphoric acid etching using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared in three locations (mid-coronal, cervical, and root) of bovine incisor teeth in two groups (SE and PA groups). While the self-etching adhesive (Clearfil SE Bond) was applied to both groups, Group PA had already been pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick Flowable Composite Resin. Following 5000 thermal cycles, three cross-sectional scans were obtained from each cavity using OCT at a 1310-nm wavelength at 0°, 60°, and 120°. Scanning was repeated after two months to monitor the gap progress. The average percentage of gap length was then calculated using image analysis software, and the difference in means between the two groups was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: The results showed that pretreatment with phosphoric acid etching (Group PA) led to significantly bigger gaps in the mid-coronal and cervical regions compared to the SE group, while in the root cavity no significant difference was observed between the groups. On the other hand, the gaps formed in root cavities were significantly bigger than those in the mid-coronal and cervical regions within the same group. This study investigated the effect of phosphoric acid on gap length progress in composite restorations. In conclusion, phosphoric acid etching treatment did not reduce gap formation even in different regions of the tooth. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when a pre-etching treatment was added.
Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives
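A hedged sketch of the quantification step described above: per specimen, the gap percentage is taken as the summed gap length divided by the total interface length, and the group means are compared with a t-test. The helper function and all numbers are illustrative placeholders, not the measured data.

# Toy gap-percentage calculation and group comparison (placeholder values only).
import numpy as np
from scipy import stats

def gap_percentage(gap_lengths_um, interface_length_um):
    return 100.0 * np.sum(gap_lengths_um) / interface_length_um

# hypothetical per-specimen averages over the three cross-sections (0, 60, 120 degrees)
group_se = np.array([12.4, 15.1, 9.8, 13.6])   # self-etch only
group_pa = np.array([21.7, 18.9, 24.3, 20.2])  # phosphoric acid pre-etching

t_stat, p_value = stats.ttest_ind(group_pa, group_se)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")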
Procedia PDF Downloads 219
51 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the largest number of end-use technologies are the small-scale but numerous appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study seeks to explore a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We will present tests from a burner designed for space heating and optimized for natural gas as blends with increasing hydrogen content (from 25% upwards) are burned, and will explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability and general operational acceptability. Results will show emissions, temperature, and flame length as a function of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition to this, we present data on two factors that may be absolute limits on the allowable hydrogen percentage. The first of these is flame blowback. Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid this or mitigate it if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation
Procedia PDF Downloads 19
50 Tribological Behaviour of the Degradation Process of Additive Manufactured Stainless Steel 316L
Authors: Yunhan Zhang, Xiaopeng Li, Zhongxiao Peng
Abstract:
Additive manufacturing (AM) possesses several key characteristics, including high design freedom, energy-efficient manufacturing process, reduced material waste, high resolution of finished products, and excellent performance of finished products. These advantages have garnered widespread attention and fueled rapid development in recent decades. AM has significantly broadened the spectrum of available materials in the manufacturing industry and is gradually replacing some traditionally manufactured parts. Similar to components produced via traditional methods, products manufactured through AM are susceptible to degradation caused by wear during their service life. Given the prevalence of 316L stainless steel (SS) parts and the limited research on the tribological behavior of 316L SS samples or products fabricated using AM technology, this study aims to investigate the degradation process and wear mechanisms of 316L SS disks fabricated using AM technology. The wear mechanisms and tribological performance of these AM-manufactured samples are compared with commercial 316L SS samples made using conventional methods. Additionally, methods to enhance the tribological performance of additive-manufactured SS samples are explored. Four disk samples with a diameter of 75 mm and a thickness of 10 mm are prepared. Two of them (Group A) are prepared from a purchased SS bar using a milling method. The other two disks (Group B), with the same dimensions, are made of Gas Atomized 316L Stainless Steel (size range: 15-45 µm) purchased from Carpenter Additive and produced using Laser Powder Bed Fusion (LPBF). Pin-on-disk tests are conducted on these disks, which have similar surface roughness and hardness levels. Multiple tests are carried out under various operating conditions, including varying loads and/or speeds, and the friction coefficients are measured during these tests. In addition, the evolution of the surface degradation processes is monitored by creating moulds of the wear tracks and quantitatively analyzing the surface morphologies of the mould images. This analysis involves quantifying the depth and width of the wear tracks and analyzing the wear debris generated during the wear processes. The wear mechanisms and wear performance of these two groups of SS samples are compared. The effects of load and speed on the friction coefficient and wear rate are investigated. The ultimate goal is to gain a better understanding of the surface degradation of additive-manufactured SS samples. This knowledge is crucial for enhancing their anti-wear performance and extending their service life.Keywords: degradation process, additive manufacturing, stainless steel, surface features
Procedia PDF Downloads 76
49 Process Flows and Risk Analysis for the Global E-SCM
Authors: Taeho Park, Ming Zhou, Sangryul Shim
Abstract:
With the emergence of the global economy, today’s business environment is getting more competitive than ever in the past. And many supply chain (SC) strategies and operations have significantly been altered over the past decade to overcome more complexities and risks imposed onto the global business. First, offshoring and outsourcing are more adopted as operational strategies. Manufacturing continues to move to better locations for enhancing competitiveness. Second, international operations are a challenge to a company’s SC system. Third, the products traded in the SC system are not just physical goods, but also digital goods (e.g., software, e-books, music, video materials). There are three main flows involved in fulfilling the activities in the SC system: physical flow, information flow, and financial flow. An advance of the Internet and electronic communication technologies has enabled companies to perform the flows of SC activities in electronic formats, resulting in the advent of an electronic supply chain management (e-SCM) system. A SC system for digital goods is somewhat different from the supply chain system for physical goods. However, it involves many similar or identical SC activities and flows. For example, like the production of physical goods, many third parties are also involved in producing digital goods for the production of components and even final products. This research aims at identifying process flows of both physical and digital goods in a SC system, and then investigating all risk elements involved in the physical, information, and financial flows during the fulfilment of SC activities. There are many risks inherent in the e-SCM system. Some risks may have severe impact on a company’s business, and some occur frequently but are not detrimental enough to jeopardize a company. Thus, companies should assess the impact and frequency of those risks, and then prioritize them in terms of their severity, frequency, budget, and time in order to be carefully maintained. We found risks involved in the global trading of physical and digital goods in four different categories: environmental risk, strategic risk, technological risk, and operational risk. And then the significance of those risks was investigated through a survey. The survey asked companies about the frequency and severity of the identified risks. They were also asked whether they had faced those risks in the past. Since the characteristics and supply chain flows of digital goods are varying industry by industry and country by country, it is more meaningful and useful to analyze risks by industry and country. To this end, more data in each industry sector and country should be collected, which could be accomplished in the future research.Keywords: digital goods, e-SCM, risk analysis, supply chain flows
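Purely for illustration, the following is a toy severity-by-frequency scoring of the four risk categories named in the abstract above, of the kind the survey responses could feed into when prioritizing risks; the scores are invented placeholders rather than survey results.

# Toy risk-priority scoring (placeholder scores, not survey data).
risks = {
    "environmental risk": {"severity": 4, "frequency": 2},
    "strategic risk":     {"severity": 5, "frequency": 2},
    "technological risk": {"severity": 3, "frequency": 4},
    "operational risk":   {"severity": 3, "frequency": 5},
}

ranked = sorted(risks.items(),
                key=lambda kv: kv[1]["severity"] * kv[1]["frequency"],
                reverse=True)
for name, score in ranked:
    print(name, score["severity"] * score["frequency"])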
Procedia PDF Downloads 419
48 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss
Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin
Abstract:
Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often lead to a loss of URT links’ capacities in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of links’ capacities loss on the reliability and transport service quality of the URT network, passengers are divided into three categories in case of links’ capacities loss. Passengers in category 1 are less affected by the loss of links’ capacities; their travel is reliable since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected by the loss of links’ capacities; their travel is not reliable since their travel quality is seriously reduced. However, passengers in category 2 can still travel on the URT. Passengers in category 3 cannot travel on the URT because their travel paths’ passenger flow exceeds the capacities, so their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers’ travel time, passengers’ transfer times and whether seats are available to passengers. The generalized travel cost is a comprehensive reflection of travel time, transfer times and travel comfort. Therefore, passengers’ average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links’ capacities loss on the transport service quality of the URT network is measured with passengers’ relative average generalized travel cost with and without links’ capacities loss. The proportion of passengers affected by links and the betweenness of links are used to determine the important links in the URT network. The stochastic user equilibrium distribution model based on the improved logit model is used to determine passengers’ categories and calculate passengers’ generalized travel cost in case of links’ capacities loss, and it is solved with the method of successive weighted averages algorithm. The reliability and transport service quality indicators of the URT network are calculated with the solution result. Taking Wuhan Metro as a case, the reliability and transport service quality of the Wuhan Metro network are measured with the indicators and method proposed in this paper. The results show that using the proportion of passengers affected by links can effectively identify important links which have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations and the passenger flow of important links is high; with the increase in the number of failed links and the proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing and the proportion of passengers in category 2 increases at first and then decreases; when the number of failed links and the proportion of capacity loss increase to a certain level, the decline of transport service quality is weakened.
Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links
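A toy sketch of the assignment step described above: a logit split between two alternative paths with flow-dependent generalized costs, solved by the method of successive (weighted) averages. The cost functions, dispersion parameter and demand value are invented for illustration; the study applies the model to the full Wuhan Metro network.

# Toy logit stochastic assignment on one OD pair, solved by successive averages.
import numpy as np

demand = 1000.0          # passengers between one OD pair (assumed)
theta = 0.1              # logit dispersion parameter (assumed)

def generalized_cost(flows):
    # travel time plus a crowding penalty growing with flow (placeholder functions)
    c1 = 30.0 + 0.02 * flows[0]
    c2 = 36.0 + 0.01 * flows[1]
    return np.array([c1, c2])

flows = np.array([demand / 2, demand / 2])
for k in range(1, 200):
    cost = generalized_cost(flows)
    probs = np.exp(-theta * cost) / np.sum(np.exp(-theta * cost))   # logit path choice
    aux = demand * probs                                             # auxiliary flows
    step = 1.0 / k                                                   # successive-averages step size
    flows = flows + step * (aux - flows)

print("equilibrium flows:", flows.round(1))
print("average generalized cost:",
      float(np.dot(generalized_cost(flows), flows) / demand))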
Procedia PDF Downloads 126
47 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model
Authors: Kang Lin Peng
Abstract:
Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with biased luxury-product pricing; it is a seller’s market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and Space X have charged space tourism prices from 200 thousand to 55 million depending on various heights in space. There should be a reasonable price set on a fair basis. This study aims to derive a spacetime pricing model, which is different from the general pricing model on the earth’s surface. We apply general relativity theory to derive the mathematical formula for the space tourism pricing model, which covers the traditional time-independent model. In the future, the price of space travel will be different from current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with time as the dependent variable is acceptable when commodities are on the earth’s surface, called flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services flying at heights of 6, 12, and 19 kilometers charge with a pricing model that measures the time coordinate independently. However, the emerging space tourism flights at heights of 100 to 550 kilometers enlarge the spacetime curvature, which means tourists will escape from the zero curvature on the earth’s surface to the larger curvature of space. Different spacetime spans should be considered in the pricing model of space travel to echo general relativity theory. Intuitively, this spacetime commodity needs to account for the change of spacetime curvature from the earth to space. We can assume the value of each spacetime curvature unit corresponds to the gradient change of the Ricci or energy-momentum tensor. Then we know how much to charge by integrating the spacetime from the earth to space. The concept is to add a price component p corresponding to general relativity theory. The space travel pricing model degenerates into a time-independent model, which becomes the model of traditional commodity pricing. The contribution is that the derivation of the space tourism pricing model will be a breakthrough in philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent flat-spacetime model. A pricing model embedding spacetime as in general relativity theory can better reflect the rationality and accuracy of space travel pricing on the universal scale. The universal scale, from the independent-time scale to the spacetime scale, will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature
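One purely schematic way to write down the idea sketched above (the notation and functional form are our illustrative assumptions, not the author's derivation): the fare is a conventional earth-surface component plus an integral of a curvature-dependent price density from the surface to the flight altitude h,

P(h, t) \;=\; P_0(t) \;+\; \int_{0}^{h} p\!\left(\Delta R(x)\right)\,\mathrm{d}x,
\qquad \Delta R(x) \;=\; R(x) - R_{\text{surface}},

where P_0(t) is the conventional earth-surface price, \Delta R(x) is the change in spacetime curvature relative to the surface (e.g., expressed through the Ricci or energy-momentum tensor), and p(\cdot) assigns a monetary value to each unit of curvature change. When \Delta R \to 0, only P_0(t) survives and the model degenerates to the traditional pricing model.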
Procedia PDF Downloads 125
46 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades
Authors: Chi Zhang, Hua-Peng Chen
Abstract:
With the increasing demand for resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained more attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, will be subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and uncertainties in the offshore environment such as wave loads, wind loads, and humid conditions. There are three main delamination failure modes for composite blades, and the most common failure type in practice is subjected to mixed-mode loading, typically a range of opening (mode 1) and shear (mode 2). However, the fatigue crack development for mixed mode cannot be predicted with deterministic values because of various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed-mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, since it simulates a stochastic deterioration process that proceeds in one direction, as in the realistic situation of fatigue damage failure of wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate the propagation of fatigue crack growth. This paper develops a Paris model with stochastic deterioration modelling according to the gamma process for predicting fatigue crack performance within the design service life. A numerical example of wind turbine composite materials is investigated to predict the mixed-mode crack depth by the Paris law and the probability of fatigue failure by the gamma process. The probability of failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with the results from experiments, the gamma process can take the uncertain values into consideration for mixed-mode crack propagation, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, according to the predicted results from the gamma stochastic model, assessment strategies for composite blades are developed to reduce total life-cycle costs and increase resistance to fatigue crack growth.
Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process
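A hedged numerical sketch of the modelling approach described above: crack growth per load block follows a stationary gamma process whose mean increment is set by a Paris-type rate, and the probability of failure is estimated by Monte Carlo as the probability that the simulated crack depth exceeds a critical value. All parameter values below are invented placeholders, not the blade data used in the paper.

# Gamma-process crack growth with Monte Carlo probability of failure (placeholder values).
import numpy as np

rng = np.random.default_rng(0)

n_samples = 10_000        # Monte Carlo realisations
n_blocks = 400            # load blocks over the service life
mean_rate = 0.05          # mean crack growth per block (mm), Paris-law-informed
cov = 0.5                 # coefficient of variation of the gamma increments
a_critical = 10.0         # critical delamination depth (mm)

# gamma increments with the chosen mean and coefficient of variation
shape = 1.0 / cov**2
scale = mean_rate / shape

a = np.zeros(n_samples)
prob_failure = []
for _ in range(n_blocks):
    a += rng.gamma(shape, scale, size=n_samples)   # independent, non-negative increments
    prob_failure.append(np.mean(a > a_critical))

print("probability of failure at end of life:", prob_failure[-1])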
Procedia PDF Downloads 412
45 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance
Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder
Abstract:
Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that describe the link between adiposity and subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension and other cardiovascular diseases. This study makes use of longitudinal quantitative and high-throughput multi-omics (genomics, epigenomics, transcriptomics, glycoproteomics etc.) methodologies on blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants of this study underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three different time points per patient: baseline, peak-weight and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before having their samples processed via longitudinal multi-omic technologies. This comparative study revealed a wealth of biomolecular changes associated with weight gain after using methods in machine learning, clustering, network analysis etc. Pathways of interest included those involved in lipid remodeling, acute inflammatory response and glucose metabolism. Some of these biomolecules returned to baseline levels as the patient returned to normal weight whilst some remained elevated. IR patients exhibited key differences in inflammatory response regulation in comparison to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified on various levels: in gene expression, epigenetic change, transcriptional regulation and glycosylation. This study was not only able to contribute to new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies to produce and process data on many comprehensive omics levels.Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes
Procedia PDF Downloads 427
44 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified in order to develop transition probabilities through non-linear regression based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
Keywords: condition grading, flood defence, performance assessment, stochastic deterioration modelling
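A compact sketch of the condition-grade model described above: a five-grade Markov chain (grade 1 = as new, grade 5 = failed) with an assumed yearly transition matrix, and a Monte Carlo estimate of the probability of reaching the failure grade. The matrix entries and horizon are illustrative, not the values fitted to the UK condition assessment data.

# Markov-chain condition-grade deterioration with Monte Carlo failure probability.
import numpy as np

rng = np.random.default_rng(1)

P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # failed grade is absorbing
])

n_sims, horizon = 20_000, 50          # structures simulated, years
state = np.zeros(n_sims, dtype=int)   # all start in grade 1 (index 0)
for year in range(horizon):
    for grade in range(5):
        idx = np.where(state == grade)[0]
        if idx.size:
            state[idx] = rng.choice(5, size=idx.size, p=P[grade])

print("probability of failure within 50 years:", np.mean(state == 4))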
Procedia PDF Downloads 232
43 Industry Symbiosis and Waste Glass Upgrading: A Feasibility Study in Liverpool Towards Circular Economy
Authors: Han-Mei Chen, Rongxin Zhou, Taige Wang
Abstract:
Glass is widely used in everyday life, from glass bottles for beverages to architectural glass for various forms of glazing. Although most used glass is recycled in the UK, the single-use-then-recycle procedure results in a lot of waste, as it takes intact glass through smashing, re-melting, and remanufacturing. These processes involve massive energy consumption and a huge loss of embodied energy and economic value compared to re-use, which works towards a ‘zero carbon’ target. As a tourism city, Liverpool has higher glass bottle consumption than most less leisure-focused cities. It is therefore vital for Liverpool to find an upgrading approach for single-use glass bottles with low carbon output. This project aims to assess the feasibility of an industrial symbiosis and upgrading framework for glass and to investigate ways of achieving it. It is significant to Liverpool’s future industrial strategy, since it provides an opportunity to target post-COVID economic recovery through industrial symbiosis and upgraded waste management in Liverpool in response to the climate emergency. In addition, it will influence local government policy on glass bottle reuse and recycling in North West England and serve as good practice to be further recommended to other areas of the UK. First, a critical literature review of glass waste strategies in the UK and of worldwide industrial symbiosis practices has been conducted. Second, mapping, data collection, and analysis have shown the current life cycle chain and the strong links of glass reuse and upgrading potentials via site visits to 16 local waste recycling centres. The results of this research have demonstrated the understanding of the influence of key factors on the development of a circular industrial symbiosis business model for beverage glass bottles. The current waste management procedures of the glass bottle industry, its business model, supply chain, and material flow have been reviewed. The various potential opportunities for glass bottle up-valuing have been investigated towards an industrial symbiosis in Liverpool. Finally, an up-valuing business model has been developed for an industrial symbiosis framework of glass in Liverpool. For glass bottles, there are two possibilities: 1) focus on upgrading processes towards re-use rather than single-use and recycling, and 2) focus on ‘smart’ re-use and recycling, leading to optimised values in other sectors to create a wider industrial symbiosis for a multi-level and circular economy.
Keywords: glass bottles, industry symbiosis, smart re-use, waste upgrading
Procedia PDF Downloads 104
42 Acute Neurophysiological Responses to Resistance Training: Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations
Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo
Abstract:
Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding neural responses to an acute resistance training session is pivotal in the prescription of frequency, intensity and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 min, and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed immediately post and at 10, 20 and 30 min and 1 and 2 hrs for both training groups compared to the control group for force (p < .05), maximal compound wave (p < .005) and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups approached significance, with a large effect size (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the 2 hr period post training, returning to homeostasis by 6 hrs. The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than previous super compensation models suggest. Strength and hypertrophy protocols showed similar response profiles, with current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be compensatory mechanisms for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to the adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, the frequency of prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.
Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation
Procedia PDF Downloads 28341 Modification of Unsaturated Fatty Acids Derived from Tall Oil Using Micro/Mesoporous Materials Based on H-ZSM-22 Zeolite
Authors: Xinyu Wei, Mingming Peng, Kenji Kamiya, Eika Qian
Abstract:
Iso-stearic acid, a saturated fatty acid with a branched chain, shows a low pour point, high oxidative stability and good biodegradability. Its industrial production involves first isomerizing unsaturated fatty acids into branched-chain unsaturated fatty acids (BUFAs) and then hydrogenating the BUFAs to obtain iso-stearic acid. However, the production yield of iso-stearic acid is reportedly less than 30%. In recent decades, extensive research has been conducted on branched fatty acids, and most of it has replaced acidic clays with zeolites due to their high selectivity, good thermal stability, and regenerability. It has been reported that the isomerization of unsaturated fatty acids occurs mainly inside the zeolite channels, whereas by-products such as dimer acids form mainly at acid sites on the external surface of the zeolite; furthermore, catalyst deactivation is attributed to pore blockage of the zeolite. In the present study, micro/mesoporous ZSM-22 zeolites were developed, since a micro/mesoporous ZSM-22 structure is regarded as an ideal strategy for minimizing coke formation. Micro/mesoporous H-ZSM-22 zeolites with different mesoporosities were prepared through recrystallization of ZSM-22 in sodium hydroxide solution (0.2-1 M) with a cetyltrimethylammonium bromide (CTAB) template. The structure, morphology, porosity, acidity, and isomerization performance of the prepared catalysts were characterized and evaluated. The dissolution and recrystallization of the H-ZSM-22 microporous zeolite formed mesoporous channels of approximately 4 nm on the outer surface of the microporous zeolite, resulting in a micro/mesoporous material. This process increased the weak Brønsted acid sites at the pore mouths while reducing the total number of acid sites in ZSM-22. Finally, an activity test was conducted using oleic acid as a model compound in a fixed-bed reactor. The results revealed that the micro/mesoporous H-ZSM-22 zeolites exhibited high isomerization activity, reaching >70% selectivity and >50% yield of BUFAs, while the yield of oligomers was limited to less than 20%. This demonstrates that the presence of mesopores in ZSM-22 enhances contact between the feedstock and the active sites within the catalyst, thereby increasing catalyst activity. Additionally, a portion of the dissolved and recrystallized silica adhered to the catalyst's surface and covered the surface acid sites, which reduced the formation of oligomers. This study offers distinct insights into the production of iso-stearic acid using a fixed-bed reactor, paving the way for future research in this area.Keywords: Iso-stearic acid, oleic acid, skeletal isomerization, micro/mesoporous, ZSM-22
Procedia PDF Downloads 2140 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under large data volumes, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in the Internet of Things, unmanned aerial vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for transmitting remote sensing images when the data volume is huge and spectrum resources are limited. At the transmitting end, the semantic information of the remote sensing image must be extracted, but there are some problems: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first pre-process the remote sensing images to improve their resolution and thus obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We then adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of the remote sensing images. The Vision-Transformer structure trains better on huge data volumes and extracts better image semantic features, and its multi-layer self-attention mechanism better captures the correlations between semantic features and reduces redundant features. Secondly, to improve coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.Keywords: semantic communication, transformer, wavelet transform, data processing
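The wavelet-based pre-processing step can be illustrated with a short sketch: a single-level 2-D DWT splits the image into approximation (low-frequency) and detail (high-frequency) sub-bands, the sub-bands are enlarged with bicubic and bilinear interpolation respectively, and an inverse DWT reassembles a higher-resolution image. This is a minimal sketch of the general idea, not the authors' code; the wavelet family ('haar') and the 2x upscaling factor are assumptions.

```python
import numpy as np
import pywt
import cv2

def wavelet_upscale(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Roughly double image resolution by interpolating DWT sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), wavelet)
    h, w = img.shape[:2]
    size = (w, h)  # cv2.resize expects (width, height); this is half of the 2x output
    cA_up = cv2.resize(cA, size, interpolation=cv2.INTER_CUBIC)   # low-frequency: bicubic
    cH_up = cv2.resize(cH, size, interpolation=cv2.INTER_LINEAR)  # high-frequency: bilinear
    cV_up = cv2.resize(cV, size, interpolation=cv2.INTER_LINEAR)
    cD_up = cv2.resize(cD, size, interpolation=cv2.INTER_LINEAR)
    return pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), wavelet)

if __name__ == "__main__":
    tile = (np.random.rand(256, 256) * 255).astype(np.float32)  # stand-in for a remote sensing tile
    print(tile.shape, "->", wavelet_upscale(tile).shape)  # (256, 256) -> (512, 512)
```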
Procedia PDF Downloads 7739 A Description Analysis of Mortality Rate of Human Infection with Avian Influenza A(H7N9) Virus in China
Authors: Lei Zhou, Chao Li, Ruiqi Ren, Dan Li, Yali Wang, Daxin Ni, Zijian Feng, Qun Li
Abstract:
Background: Since the first human case of infection with avian influenza A(H7N9) virus was reported in China on 31 March 2013, five epidemics have been observed in China between February 2013 and September 2017. Although the overall mortality rate of H7N9 has remained as high as around 40% throughout the five epidemics, the specific mortality rate in mainland China varied by province. We conducted a descriptive analysis of the mortality rates of H7N9 cases to explore the severity features of the disease and to provide clues for further analyses of potential factors associated with its severity. Methods: The data for analysis originated from the National Notifiable Infectious Disease Report and Surveillance System (NNIDRSS). The surveillance system and the identification procedure for H7N9 infection have not changed in China since 2013, and the definition of a confirmed H7N9 case is the same as in previous reports. Mortality rates of H7N9 cases are described and compared by time and location of reporting, age and sex, and genetic features of the H7N9 virus strains. Results: The overall mortality rate of H7N9 and the male- and female-specific rates were 39.6% (608/1533), 40.3% (432/1072) and 38.2% (176/461), respectively; there was no significant difference between the male and female mortality rates. Age-specific mortality rates varied significantly across age groups (χ² = 38.16, p < 0.001): mortality in the 20-60 age group (33.17%) and the over-60 age group (51.16%) was much higher than in the under-20 age group (5.00%). Considering the time of reporting, the mortality rates of cases reported in the first (40.57%) and fourth (42.51%) quarters of each year were significantly higher than those of cases reported in the second (36.02%) and third (27.27%) quarters (χ² = 75.18, p < 0.001). Geographic mortality rates also varied: cases reported from Northeast China (66.67%) and Northwest China (56.52%) had significantly higher mortality than cases reported from the remaining areas of mainland China, and the mortality rate of cases reported from Central China was the lowest (34.38%). The mortality rates of H7N9 cases reported from rural (37.76%) and urban (38.96%) areas were similar. The mortality rate of cases infected with the highly pathogenic avian influenza A(H7N9) virus (48.15%) was higher than that of cases infected with the low pathogenic avian influenza A(H7N9) virus (37.57%), but the difference was not statistically significant. Preliminary analyses showed that age and some clinical complications, such as respiratory failure, heart failure, and septic shock, could be potential risk factors associated with the death of H7N9 cases. Conclusions: The mortality rates of H7N9 cases varied by age, sex, time of reporting and geographical location in mainland China. Further in-depth analyses and field investigations of the factors associated with the severity of H7N9 cases need to be considered.Keywords: H7N9 virus, Avian Influenza, mortality, China
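A minimal sketch of the age-group comparison follows: the reported χ² statistic comes from a test of independence on a contingency table of deaths versus survivors per age group. The counts below are illustrative placeholders, since the abstract reports only the group-specific rates, not the underlying table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age groups (<20, 20-60, >60); columns: [deaths, survivors].
# Hypothetical counts for illustration only.
table = np.array([
    [  5,  95],   # <20
    [300, 600],   # 20-60
    [303, 230],   # >60
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")

# Group-specific mortality rates, analogous to those reported in the abstract.
for label, row in zip(["<20", "20-60", ">60"], table):
    print(f"mortality {label}: {row[0] / row.sum():.2%}")
```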
Procedia PDF Downloads 24238 Barrier Analysis of Sustainable Development of Small Towns: A Perspective of Southwest China
Authors: Yitian Ren, Liyin Shen, Tao Zhou, Xiao Li
Abstract:
China's past urbanization process has brought a series of problems, and the Chinese government has therefore positioned small towns in essential roles for implementing the strategy 'The National New-type Urbanization Plan (2014-2020)'. As connectors and transfer stations between cities and the countryside, small towns are an important force for narrowing the urban-rural gap and achieving the mission of new-type urbanization in China. The sustainable development of small towns plays a crucial role because cities alone cannot absorb the surplus rural population. Nevertheless, various barriers hinder the sustainable development of small towns, which has limited their development and created a bottleneck in the Chinese urbanization process. This paper therefore seeks a deep understanding of these barriers so that effective actions can be taken to address them. The paper takes the perspective of Southwest China (Sichuan, Yunnan and Guizhou provinces, Chongqing Municipality and the Tibet Autonomous Region), because the urbanization rate there lags far behind the national average, its small towns account for a large proportion of those in mainland China, and their characteristics are distinct. The barriers to the sustainable development of small towns in Southwest China are investigated using content analysis, combined with fieldwork and interviews in sample small towns, and 18 barriers are identified and grouped into four dimensions: institutional, economic, social and ecological. On this basis, a questionnaire survey and data analysis were carried out, and the key barriers hindering the sustainable development of small towns in Southwest China were identified using fuzzy set theory: lack of independent financial power, lack of construction land index, limited financing channels, a single industrial structure, and topographic variety and complexity, which mainly belong to the institutional and economic dimensions. In conclusion, policy suggestions are put forward to improve the political and institutional environment for small-town development, and market mechanisms should be introduced into the development process of small towns, which can effectively overcome the economic barriers, promote the sustainable development of small towns, accelerate in-situ urbanization by absorbing farmers from nearby villages, and achieve the mission of people-oriented new-type urbanization in China.Keywords: barrier analysis, sustainable development, small town, Southwest China
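Fuzzy set theory is used above to screen the key barriers from questionnaire ratings. A minimal sketch of one common fuzzy-set screening procedure is given below, assuming a linear mapping of 5-point Likert ratings to membership degrees and a 0.5 cut-off; this illustrates the general technique rather than the exact procedure used in the study, and the ratings shown are placeholders.

```python
from statistics import mean

# Assumed linear mapping of Likert ratings to membership in "critical barrier".
LIKERT_TO_MEMBERSHIP = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def key_barriers(ratings, cutoff=0.5):
    """Return (barrier, mean membership) pairs exceeding the cut-off, highest first."""
    scores = {
        barrier: mean(LIKERT_TO_MEMBERSHIP[r] for r in votes)
        for barrier, votes in ratings.items()
    }
    return sorted(
        ((b, s) for b, s in scores.items() if s > cutoff),
        key=lambda item: item[1],
        reverse=True,
    )

# Placeholder ratings: two barriers named in the abstract plus one hypothetical one.
survey = {
    "lack of independent financial power": [5, 4, 5, 4, 5],
    "single industrial structure": [4, 4, 3, 5, 4],
    "hypothetical social barrier": [2, 3, 2, 3, 2],
}
for barrier, score in key_barriers(survey):
    print(f"{barrier}: membership {score:.2f}")
```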
Procedia PDF Downloads 34237 Promoting 'One Health' Surveillance and Response Approach Implementation Capabilities against Emerging Threats and Epidemics Crisis Impact in African Countries
Authors: Ernest Tambo, Ghislaine Madjou, Jeanne Y. Ngogang, Shenglan Tang, Zhou XiaoNong
Abstract:
Implementing a national-to-community-based 'One Health' surveillance approach to mitigate human, animal and environmental consequences offers great opportunities and added value for sustainable development and wellbeing. Global partnerships, policy commitment and financial investment in the 'One Health' surveillance approach are much needed to address evolving threats and epidemic crises in African countries. The paper provides insights into how China-Africa health development cooperation can promote the 'One Health' surveillance approach in response advocacy and mitigation. China-Africa health development initiatives provide new prospects for guiding and advancing appropriate, evidence-based advocacy and mitigation strategies towards attaining Universal Health Coverage (UHC) and the Sustainable Development Goals (SDGs). Early, continuous, high-quality and timely surveillance data collection and coordinated information-sharing practices for malaria and other diseases are demonstrated in Comoros, Zanzibar, Ghana and Cameroon. Improved access to a variety of contextual sources and networks of data-sharing platforms is needed to guide evidence-based, tailored detection of and response to unusual hazardous events. Moreover, understanding threat and disease trends and delivering frontline or point-of-care responses are crucial to promoting the integrated, sustainable and targeted implementation of local and national 'One Health' surveillance and response approaches. Importantly, operational guidelines are vital for increasing coherent financing and national workforce capacity development mechanisms, as is strengthening participatory partnerships, collaboration and monitoring strategies to achieve an effective global health agenda in Africa. At the same time, enhancing the reporting and dissemination of surveillance data streams improves their usefulness in informing policy decisions, health systems programming, and financial mobilization and prioritized allocation before, during and after threat and epidemic crises, and in revealing programme strengths and weaknesses. Thus, capitalizing on 'One Health' surveillance and response advocacy and mitigation implementation is timely for consolidating the African Union Agenda 2063 and Africa's renaissance capabilities and expectations.Keywords: Africa, one health approach, surveillance, response
Procedia PDF Downloads 42036 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing price and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag settings: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are reported monthly, some quarterly, and some yearly, so two methods of filling the missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations, and both the ANN and LR methods generated predictive models with high accuracy (>95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also found that the technique used to fill missing values in the dataset and the implementation of a time lag can significantly influence model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies >95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.Keywords: linear regression, random forest, artificial neural network, real estate price prediction
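The modelling pipeline described above (mixed-frequency series filled by backfill or interpolation, features shifted by a time lag, and a linear regression fitted and scored by MAE and RMSE) can be outlined as below. This is a schematic sketch under assumed column names (e.g. 'hpi' for the home price index), not the study's actual code or data.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

def build_dataset(df: pd.DataFrame, lag_months: int, method: str) -> pd.DataFrame:
    """Fill gaps in mixed-frequency features and apply a lag to them.

    df: monthly-indexed frame with the target column 'hpi' and feature columns
        (GDP, population, personal income, ...) that contain gaps because some
        series are only quarterly or yearly.
    method: 'backfill' or 'interpolate', the two imputation options compared.
    """
    filled = df.bfill() if method == "backfill" else df.interpolate()
    features = filled.drop(columns=["hpi"]).shift(lag_months)  # features lead the target
    return pd.concat([features, filled["hpi"]], axis=1).dropna()

def fit_and_score(data: pd.DataFrame, train_frac: float = 0.8):
    """Fit LR on the earlier part of the series and score on the later part."""
    split = int(len(data) * train_frac)
    X_train, y_train = data.iloc[:split, :-1], data.iloc[:split, -1]
    X_test, y_test = data.iloc[split:, :-1], data.iloc[split:, -1]
    model = LinearRegression().fit(X_train, y_train)
    pred = model.predict(X_test)
    return model, mean_absolute_error(y_test, pred), mean_squared_error(y_test, pred) ** 0.5
```

Looping such a routine over the five metropolitan frames, the two imputation methods, and the three lag settings (0, 6 and 12 months) reproduces the kind of comparison reported above.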
Procedia PDF Downloads 10035 Effect on the Integrity of the DN300 Pipe and Valves in the Cooling Water System Imposed by the Pipes and Ventilation Pipes above in an Earthquake Situation
Authors: Liang Zhang, Gang Xu, Yue Wang, Chen Li, Shao Chong Zhou
Abstract:
Presently, more and more nuclear power plants are facing the issue of life extension. When a nuclear power plant applies for an extension of life, its condition needs to meet current design standards, which is not the case for all older reactors, particularly regarding seismic design. Seismic-grade equipment in nuclear power plants is now generally placed separately from non-seismic-grade equipment, but this was not strictly required in the past. Therefore, it is very important to study whether non-seismic-grade equipment will affect seismic-grade equipment when it drops in an earthquake, which relates to the safety of nuclear power plants and to future life extension applications. This research took a cooling water system with seismic- and non-seismic-grade equipment installed together as an example, to study whether non-seismic-grade equipment such as DN50 fire pipes and ventilation pipes arranged above will damage the DN300 pipes and valves arranged below when an earthquake occurs. The simulation was carried out with ANSYS/LS-DYNA, and the Johnson-Cook model was used as both the material model and the failure model. For the experiments, the relative positions of the objects in the room were reproduced at a 1:1 scale. The pipes and valves were filled with water at a pressure of 0.785 MPa. Pressure-holding performance was used as the damage criterion for the pipes; for the valves, the opening torque was considered in addition. The results show that when the 10-meter-long DN50 pipe is dropped from a height of 8 meters and the 8-meter-long air pipe from a height of 3.6 meters, they do not affect the integrity of the DN300 pipe below; no failure occurs in the simulation either. After the experiment, the pressure drop of the pipe over two hours is less than 0.1%, and the main body of the valve does not fail; the change in opening torque is less than 0.5%, but the handwheel of the valve may break, which affects opening. In summary, the impact of dropped upper pipes and ventilation pipes on the integrity of the DN300 pipes and valves below in the cooling water system of a typical second-generation nuclear power plant during an earthquake was studied. The functionality of the DN300 pipeline and of the valves themselves is not significantly affected, but the valve handwheel or similar parts may break and therefore require attention.Keywords: cooling water system, earthquake, integrity, pipe and valve
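For reference, the Johnson-Cook material and failure models mentioned above take the standard forms below; the parameter values for the pipe and valve steels are not given in the abstract and would have to come from the original study or material handbooks.

```latex
% Johnson-Cook flow stress
\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
         \left(1 + C \ln \dot{\varepsilon}^{*}\right)
         \left(1 - T^{*m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}

% Johnson-Cook fracture strain (failure model)
\varepsilon_f = \left[D_1 + D_2 \exp\!\left(D_3 \sigma^{*}\right)\right]
                \left[1 + D_4 \ln \dot{\varepsilon}^{*}\right]
                \left[1 + D_5 T^{*}\right]
```

Here \(\varepsilon_p\) is the equivalent plastic strain, \(\dot{\varepsilon}^{*}\) the normalized strain rate, \(\sigma^{*}\) the stress triaxiality, and A, B, C, n, m and D1-D5 are material constants.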
Procedia PDF Downloads 11134 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe
Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou
Abstract:
In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes. It relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses: they require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel, and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and their stable spatial structures, represent a new generation of biosensing elements after antibodies. Rapid advancements in AIE technology, particularly in tetraphenylethene-based luminogens, offer simple synthesis and versatile modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant changes in relative fluorescence intensity and selectivity towards cocaine over other drugs. Using 4-Butoxytriethylammonium Bromide Tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 min in aqueous solution and in urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid onsite cocaine detection, and promises potential for miniaturized testing setups.Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine
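Quantitative detection within a concentration range, as noted above, usually rests on a linear calibration of the fluorescence response against cocaine concentration, with the detection limit estimated from blank variability. The sketch below shows that generic workflow; the numerical values and the 3σ/slope convention are illustrative assumptions, not the study's reported calibration data.

```python
import numpy as np

def calibrate(concentrations, responses, blank_responses):
    """Fit a linear calibration and estimate LOD as 3 * sigma_blank / slope."""
    slope, intercept = np.polyfit(concentrations, responses, deg=1)
    lod = 3 * np.std(blank_responses, ddof=1) / slope
    return slope, intercept, lod

def quantify(response, slope, intercept):
    """Convert a measured fluorescence response into a concentration."""
    return (response - intercept) / slope

# Hypothetical calibration points (umol/L vs. relative fluorescence change).
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
resp = np.array([0.08, 0.19, 0.41, 0.78, 1.55])
blanks = np.array([0.010, 0.013, 0.008, 0.011, 0.009])

slope, intercept, lod = calibrate(conc, resp, blanks)
print(f"LOD ~ {lod:.2f} umol/L; a 0.60 response reads {quantify(0.60, slope, intercept):.1f} umol/L")
```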
Procedia PDF Downloads 5933 Multi-Scale Spatial Difference Analysis Based on Nighttime Lighting Data
Authors: Qinke Sun, Liang Zhou
Abstract:
The ‘Dragon-Elephant Debate’ between China and India is an important manifestation of global multipolarity in the 21st century. The two rising powers carried out economic reforms one after another, more than ten years apart, becoming the fastest-growing developing country and emerging economy in the world. At the same time, the development differences between China and India have gradually attracted wide attention from scholars. Based on continuous annual night-light data (DMSP-OLS) from 1992 to 2012, this paper systematically compares and analyses the regional development differences between China and India using the Gini coefficient, the coefficient of variation, the comprehensive night light index (CNLI) and hot spot analysis. The results show that: (1) China's overall expansion from 1992 to 2012 is 1.84 times that of India; China's lit extent changed by a factor of 2.6 and India's by a factor of 2. The proportion of unlit area in China dropped from 92% to 82%, while that in India dropped from 71% to 50%. (2) China's new growth cities include Hohhot and Ordos in Inner Mongolia and Urumqi in the west, while its declining cities are concentrated in Liaoning and Jilin Provinces in the northeast; India's new growth cities are concentrated in Chhattisgarh in the north, while its declining areas are distributed in Uttar Pradesh. (3) China's differences at different scales are lower than India's, and its regional inequality of development is gradually narrowing: Gini coefficients at the regional and provincial levels have decreased from 0.29 and 0.44 to 0.24 and 0.38, respectively. Regional inequality in India has improved only slowly and regional differences are gradually widening, with the regional Gini coefficient rising from 0.28 to 0.32 and the provincial Gini coefficient decreasing slightly from 0.64 to 0.63. (4) The spatial pattern of China's regional development is mainly an east-west difference, reflecting the contrast between coastal and inland areas, while the spatial pattern of India's regional development is mainly a north-south difference; however, because the southern states face the sea, it also reflects a coastal-inland contrast to a certain extent. (5) Beijing and Shanghai present a multi-core outward expansion model, with an average annual CNLI higher than 0.01, while New Delhi and Mumbai present a main-core enhancement expansion model, with an average annual CNLI lower than 0.01; the average annual CNLI in Shanghai is about five times that in Mumbai.Keywords: spatial pattern, spatial difference, DMSP-OLS, China, India
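The Gini coefficient used above to quantify regional inequality can be computed directly from provincial or regional totals of night-time light. The function below is a minimal sketch of that calculation using the mean-absolute-difference definition; the sample values are placeholders, not DMSP-OLS data.

```python
import numpy as np

def gini(values) -> float:
    """Gini coefficient of non-negative totals:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))."""
    x = np.asarray(values, dtype=float)
    diff_sum = np.abs(x[:, None] - x[None, :]).sum()
    return diff_sum / (2 * len(x) ** 2 * x.mean())

# Placeholder provincial/state night-light totals (arbitrary units).
china_provinces = [120.0, 95.0, 300.0, 80.0, 60.0, 220.0]
india_states = [40.0, 250.0, 30.0, 180.0, 25.0, 210.0]

print(f"Gini (China sample): {gini(china_provinces):.2f}")
print(f"Gini (India sample): {gini(india_states):.2f}")
```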
Procedia PDF Downloads 154