Search results for: Xiaoyan Zhou
61 Association of the Time in Targeted Blood Glucose Range of 3.9–10 mmol/L with the Mortality of Critically Ill Patients with or without Diabetes
Authors: Guo Yu, Haoming Ma, Peiru Zhou
Abstract:
BACKGROUND: In addition to hyperglycemia, hypoglycemia, and glycemic variability, a decrease in the time in the targeted blood glucose range (TIR) may be associated with an increased risk of death for critically ill patients. However, the relationship between the TIR and mortality may be influenced by the presence of diabetes and glycemic variability. METHODS: A total of 998 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. The TIR is defined as the percentage of time spent in the target blood glucose range of 3.9–10.0 mmol/L within 24 hours. The relationship between the TIR and in-hospital death in diabetic and non-diabetic patients was analyzed, as was the effect of glycemic variability. RESULTS: The binary logistic regression model showed a significant association between the TIR as a continuous variable and the in-hospital death of severely ill non-diabetic patients (OR=0.991, P=0.015). As a classification variable, TIR≥70% was significantly associated with in-hospital death (OR=0.581, P=0.003); specifically, TIR≥70% was a protective factor against the in-hospital death of severely ill non-diabetic patients. The TIR of severely ill diabetic patients was not significantly associated with in-hospital death; however, glycemic variability was significantly and independently associated with in-hospital death (OR=1.042, P=0.027). Binary logistic regression analysis of comprehensive indices showed that for non-diabetic patients, the C3 index (low TIR & high CV) was a risk factor for increased mortality (OR=1.642, P<0.001). In addition, for diabetic patients, the C3 index was an independent risk factor for death (OR=1.994, P=0.008), and the C4 index (low TIR & low CV) was independently associated with increased survival. CONCLUSIONS: The TIR of non-diabetic patients during ICU hospitalization was associated with in-hospital death even after adjusting for disease severity and glycemic variability. There was no significant association between the TIR and the mortality of diabetic patients. However, for both diabetic and non-diabetic critically ill patients, the combined effect of high TIR and low CV was significantly associated with ICU mortality. Diabetic patients seem to have larger blood glucose fluctuations and can tolerate a wider TIR range. Both diabetic and non-diabetic critically ill patients should maintain blood glucose levels within the target range to reduce mortality.
Keywords: severe disease, diabetes, blood glucose control, time in targeted blood glucose range, glycemic variability, mortality
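A minimal sketch of the kind of binary logistic regression analysis described above, estimating an odds ratio for a TIR≥70% indicator, is shown below. The data, variable names, and severity covariate are synthetic placeholders, not the study's patient records.

```python
# Minimal sketch: odds ratio for a binary TIR >= 70% flag via logistic
# regression, in the spirit of the analysis described above. The data
# below are synthetic placeholders, not the study's patient records.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 998
tir_ge_70 = rng.integers(0, 2, n)              # 1 if TIR >= 70%, else 0
severity = rng.normal(0, 1, n)                 # stand-in for a severity score
# Synthetic outcome: higher TIR lowers the odds of in-hospital death
logit = -1.0 - 0.5 * tir_ge_70 + 0.8 * severity
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([tir_ge_70, severity]))
fit = sm.Logit(death, X).fit(disp=False)
print(np.exp(fit.params[1]))                   # odds ratio for TIR >= 70%
print(fit.pvalues[1])                          # its p-value
```

An exponentiated coefficient below 1, as for the reported OR=0.581, indicates a protective association after adjusting for the other covariates.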
Procedia PDF Downloads 222
60 Strong Ground Motion Characteristics Revealed by Accelerograms in Ms8.0 Wenchuan Earthquake
Authors: Jie Su, Zhenghua Zhou, Yushi Wang, Yongyi Li
Abstract:
The ground motion characteristics revealed by the analysis of acceleration records underlie the formulation and revision of seismic design codes for structural engineering. The China Digital Strong Motion Network recorded a large number of main-shock accelerograms at 478 permanent seismic stations during the Ms8.0 Wenchuan earthquake on 12 May 2008. These accelerograms provided a wealth of essential data for the analysis of the ground motion characteristics of the event. The spatial distribution characteristics, rupture directivity effect, and hanging-wall and foot-wall effect were studied based on these acceleration records. The results showed that the contours of horizontal peak ground acceleration and peak velocity were approximately parallel to the seismogenic fault, which demonstrates that the distribution of ground motion intensity was clearly controlled by the spatial extension direction of the seismogenic fault. Compared with the peak ground acceleration (PGA) recorded at sites away from which the rupture front propagated, the PGA recorded at sites toward which the rupture front propagated had larger amplitude and shorter duration, indicating a significant rupture directivity effect. At similar fault distances, the PGA on the hanging-wall is appreciably greater than that on the foot-wall, whereas the peak velocity does not follow this rule. Taking into account the seismic intensity distribution of the Wenchuan Ms8.0 earthquake, the shape of the strong ground motion contours was significantly affected by the directivity effect in regions with Chinese seismic intensity level VI–VIII. However, in regions with Chinese seismic intensity level VIII or greater, the mutual positional relationship between the strong ground motion contours and the surface outcrop trace of the fault was evidently influenced by the hanging-wall and foot-wall effect.
Keywords: hanging-wall and foot-wall effect, peak ground acceleration, rupture directivity effect, strong ground motion
Procedia PDF Downloads 350
59 FEM Simulation of Tool Wear and Edge Radius Effects on Residual Stress in High Speed Machining of Inconel718
Authors: Yang Liu, Mathias Agmell, Aylin Ahadi, Jan-Eric Stahl, Jinming Zhou
Abstract:
Tool wear and tool geometry have significant effects on the residual stresses in components produced by high-speed machining. In this paper, a Coupled Eulerian and Lagrangian (CEL) model is adopted to investigate the residual stress in high-speed machining of Inconel718 with a CBN170 cutting tool. The results show that the mesh with the smallest element size of 5 µm yields cutting forces and chip morphology in close agreement with the experimental data. Analyses of the thermal and mechanical loading are performed to study the effect of segmented chip morphology on the machined surface topography and residual stress distribution. The effects of cutting edge radius and flank wear on residual stress formation and distribution in the workpiece were also investigated. It is found that the temperature within 100 µm of the machined surface increases drastically due to greater frictional heat generation as the tool–workpiece contact area grows with larger edge radius and flank wear. As the depth increases further, the temperature drops rapidly in all cases due to the low conductivity of Inconel718. Consequently, higher and deeper tensile residual stress is generated in the superficial layer. Furthermore, an increased depth of plastic deformation and compressive residual stress is noticed in the subsurface, which is attributed to the reduction of the yield strength under the thermal effect. Besides, the ploughing effect produced by a larger tool edge radius contributes more than flank wear. The magnitude variations of the compressive residual stress caused by varying edge radius and flank wear follow opposite trends, depending on the magnitude of the ploughing and friction pressure acting on the machined surface.
Keywords: Coupled Eulerian Lagrangian, segmented chip, residual stress, tool wear, edge radius, Inconel718
Procedia PDF Downloads 146
58 Study on Safety Management of Deep Foundation Pit Construction Site Based on Building Information Modeling
Authors: Xuewei Li, Jingfeng Yuan, Jianliang Zhou
Abstract:
The 21st century has been called the century of human exploitation of underground space. Due to the large scale, tight schedules, low safety reserves, and high uncertainty of deep foundation pit engineering, accidents occur frequently, causing huge economic losses and casualties. With the successful application of information technology in the construction industry, building information modeling (BIM) has become a research hotspot in the field of architectural engineering. Therefore, the application of BIM and other information and communication technologies (ICTs) in construction safety management is of great significance for improving the level of safety management. This research summarized the mechanism of deep foundation pit accidents through fault tree analysis in order to identify the control factors of deep foundation pit safety management and the deficiencies of traditional construction site safety management. According to the accident cause mechanism and the specific process of deep foundation pit construction, the hazard information of the construction site was identified and a hazard list, including early warning information, was obtained. The system framework was then constructed by analyzing the early warning information demand and the early warning function demand of a deep foundation pit safety management system. Finally, a BIM-based safety management system for deep foundation pit construction sites was developed by combining a database with Web-BIM technology, realizing three functions: real-time positioning of construction site personnel, automatic warning on entering a dangerous area, and real-time monitoring of deep foundation pit structural deformation with automatic warning. This study can initially improve the current state of safety management at deep foundation pit construction sites. Additionally, active control before the occurrence of deep foundation pit accidents and dynamic control throughout the construction process can be realized so as to prevent and control safety accidents in deep foundation pit engineering.
Keywords: Web-BIM, safety management, deep foundation pit, construction
Procedia PDF Downloads 153
57 Sedimentological and Geochemical Characteristics of Aeolian Sediments and Their Implication for Sand Origin in the Yarlung Zangbo River Valley, Southern Qinghai-Tibetan Plateau
Authors: Na Zhou, Chun-Lai Zhang, Qing Li, Bingqi Zhu, Xun-Ming Wang
Abstract:
The understanding of the dynamics of aeolian sand in the Yarlung Zangbo River Valley (YLZBV), southern Qinghai-Tibetan Plateau, including its origins, transportation, and deposition, remains preliminary. In this study, we investigated the origin of aeolian sediments in the YLZBV by analyzing the grain-size distribution and geochemical composition of dune sediments collected from the wide river terraces. The major purpose is to characterize the sedimentological and geochemical compositions of these aeolian sediments, trace their sources, and understand the influencing factors. The grain-size and geochemistry variations, which showed a significant correlation between grain-size distribution and element abundances, give strong evidence that an important part of the aeolian sediments in the downstream areas was first derived from the upper reaches by intense fluvial processes. However, the sediments experienced significant mixing with local inputs and were reworked by regional wind transport. The diverse compositions and tight associations in the major and trace element geochemistry between the upstream and downstream aeolian sediments and the local detrital rocks, which were collected from the surrounding mountains, suggest that the upstream aeolian sediments originated from various close-range rock types and experienced intensive mixing via aeolian–fluvial dynamics. The sand mass transported by water and wind was roughly estimated to quantify the interplay between the aeolian and fluvial processes controlling sediment transport and yield and ultimately shaping the aeolian landforms along the mainstream of the YLZBV.
Keywords: grain size distribution, geochemistry, wind and water load, sand source, Yarlung Zangbo River Valley
Procedia PDF Downloads 97
56 Seismic Behavior and Loss Assessment of High-Rise Buildings with Light Gauge Steel–Concrete Hybrid Structure
Authors: Bing Lu, Shuang Li, Hongyuan Zhou
Abstract:
The steel–concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel–concrete hybrid structure, combining a light gauge steel structure with a concrete hybrid structure, is a new type of steel–concrete hybrid structure that possesses advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated using finite element software. The three concrete hybrid structures are the reinforced concrete column–steel beam (RC-S) hybrid structure, the concrete-filled steel tube column–steel beam (CFST-S) hybrid structure, and the tubed concrete column–steel beam (TC-S) hybrid structure. Nonlinear time-history analysis of the three high-rise buildings under 80 earthquakes was carried out. The simulations indicated that the seismic performance of all three high-rise buildings was superior: under extremely rare earthquakes, their maximum inter-storey drifts are significantly lower than 1/50. The inter-storey drift and floor acceleration of the building with the CFST-S hybrid structure were larger than those of the building with the RC-S hybrid structure and smaller than those of the building with the TC-S hybrid structure. Then, based on the time-history analysis results, the post-earthquake repair cost ratio and repair time of the three buildings were predicted through the economic performance analysis method proposed in the FEMA P-58 report. Under frequent, basic, and rare earthquakes, the repair cost ratio and repair time of the three high-rise buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the building with the TC-S hybrid structure were the highest of the three. Given the advantages of the CFST-S hybrid structure, it could be extensively employed in high-rise buildings subjected to earthquake excitations.
Keywords: seismic behavior, loss assessment, light gauge steel–concrete hybrid structure, high-rise building, time-history analysis
Procedia PDF Downloads 185
55 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the relevant biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the patients' expected health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
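The abstract does not specify the network architecture. As a minimal PyTorch sketch of the general pattern only, the following shows a residual MLP trained with a combined data loss and physics-based penalty; the layer sizes, tensor shapes, and placeholder physics residual are assumptions, not the authors' model.

```python
# Minimal sketch of a physics-informed residual network: a residual MLP
# whose loss combines a data term with a physics-based penalty. The
# architecture, shapes, and the placeholder physics residual are
# illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return x + self.net(x)   # residual connection

class PIResNet(nn.Module):
    def __init__(self, in_dim=3, width=64, out_dim=3, depth=4):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, out_dim)
    def forward(self, x):
        return self.out(self.blocks(torch.tanh(self.inp(x))))

model = PIResNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
disp = torch.randn(256, 3)       # stand-in for 4D MRI displacement vectors
target = torch.randn(256, 3)     # stand-in for pressure/WSS/OSI labels

for _ in range(100):
    pred = model(disp)
    data_loss = ((pred - target) ** 2).mean()
    physics_loss = (pred.sum(dim=1) ** 2).mean()  # placeholder residual
    loss = data_loss + 0.1 * physics_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

In a real physics-informed setup, the placeholder residual would be replaced by the governing-equation residual (e.g., momentum balance evaluated by automatic differentiation).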
Procedia PDF Downloads 89
54 Eco-Design of Construction Industrial Park in China with Selection of Candidate Tenants
Authors: Yang Zhou, Kaijian Li, Guiwen Liu
Abstract:
Offsite construction is an innovative alternative to conventional site-based construction, with wide-ranging benefits. It requires that building components, elements or modules be prefabricated and pre-assembled before being installed in their final locations. To improve efficiency and achieve synergies, construction companies in China have in recent years been clustered into construction industrial parks (CIPs). A CIP is a community of construction manufacturing and service businesses located together on a common property. Companies involved in industrial clusters can obtain environmental and economic benefits by sharing resources and information in a given region. Therefore, the concept of industrial symbiosis (IS) can be applied to the traditional CIP to achieve sustainable industrial development or redevelopment through the implementation of eco-industrial parks (EIPs). However, before designing a symbiosis network between companies in a CIP, candidate support tenants need to be selected to complement the existing construction companies. In this study, an access indicator system and a linear programming model are established to select candidate tenants in a CIP while satisfying the degree of connectivity among the enterprises in the CIP, minimizing the environmental impact, and maximizing the annualized profit of the CIP. The access indicator system, comprising three primary indicators and fifteen secondary indicators, is proposed from a park-level perspective. The fifteen indicators are classified under three primary indicators (industrial symbiosis, environmental performance, and economic benefit) according to the three dimensions of sustainability (environmental, economic, and social) and the three R's of the environment (reduce, reuse, and recycle). The linear programming model is a method to assess the satisfaction of all the indicators and to make an optimal multi-objective selection among candidate tenants. This method provides a practical tool for planners of a CIP in evaluating which of the candidate tenants would best complement existing anchor construction tenants. The reasonability and validity of the indicator system and the method are worth further study in the future.
Keywords: construction industrial park, China, industrial symbiosis, offsite construction, selection of support tenants
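A minimal sketch of this kind of tenant-selection linear program follows, using a relaxed 0/1 selection that maximizes annualized profit subject to an environmental-impact cap and a connectivity floor. All coefficients and bounds are invented for illustration; they are not the paper's indicator data.

```python
# Minimal sketch of the tenant-selection linear program: choose among
# candidate tenants to maximize annualized profit subject to a cap on
# environmental impact and a floor on symbiosis connectivity. All
# coefficients are illustrative placeholders, not the paper's data.
import numpy as np
from scipy.optimize import linprog

profit = np.array([4.0, 2.5, 3.2, 1.8])        # annualized profit per tenant
impact = np.array([1.2, 0.4, 0.9, 0.3])        # environmental impact score
links = np.array([3, 1, 2, 2])                 # symbiosis links to anchors

# linprog minimizes, so negate profit; relax x to [0, 1] (an LP relaxation
# of the underlying 0/1 tenant selection).
res = linprog(c=-profit,
              A_ub=np.vstack([impact, -links]),
              b_ub=np.array([2.0, -4.0]),       # impact <= 2.0, links >= 4
              bounds=[(0, 1)] * 4, method="highs")
print(res.x, -res.fun)                          # selection weights, profit
```

Rounding or branch-and-bound over the relaxed solution would recover a discrete shortlist of support tenants.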
Procedia PDF Downloads 274
53 Ultrasensitive Detection and Discrimination of Cancer-Related Single Nucleotide Polymorphisms Using Poly-Enzyme Polymer Bead Amplification
Authors: Lorico D. S. Lapitan Jr., Yihan Xu, Yuan Guo, Dejian Zhou
Abstract:
The ability to detect specific genes with ultrahigh sensitivity and to discriminate single nucleotide polymorphisms (SNPs) is important for clinical diagnosis and biomedical research. Herein, we report the development of a new ultrasensitive approach for label-free DNA detection using magnetic nanoparticle (MNP) assisted rapid target capture/separation in combination with signal amplification using poly-enzyme tagged polymer nanobeads. The sensor uses an MNP-linked capture DNA and a biotin-modified signal DNA to sandwich-bind the target, followed by ligation, to provide high single-nucleotide polymorphism discrimination. Only the presence of a perfectly matched target DNA yields a covalent linkage between the capture and signal DNAs for subsequent conjugation of a neutravidin-modified horseradish peroxidase (HRP) enzyme through the strong biotin–neutravidin interaction. This converts each captured DNA target into an HRP, which can convert millions of copies of a non-fluorescent substrate (Amplex Red) to a highly fluorescent product (resorufin) for great signal amplification. The use of polymer nanobeads, each tagged with thousands of copies of HRP, as the signal amplifier further strengthens the amplification power, leading to greatly improved sensitivity. We show that our biosensing approach can specifically detect an unlabeled DNA target down to 10 aM with a wide dynamic range of 5 orders of magnitude (from 0.001 fM to 100.0 fM). Furthermore, our approach discriminates strongly between a perfectly matched gene and its cancer-related single-base mismatch targets (SNPs): it can positively detect the perfectly matched DNA target even in the presence of a 100-fold excess of co-existing SNPs. This sensing approach also works robustly in clinically relevant media (e.g., 10% human serum) and gives almost the same SNP discrimination ratio as in clean buffers. Therefore, this ultrasensitive SNP biosensor appears well-suited for potential diagnostic applications in genetic diseases.
Keywords: DNA detection, polymer beads, signal amplification, single nucleotide polymorphisms
Procedia PDF Downloads 249
52 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains
Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou
Abstract:
Whole grains with intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment, but processing makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To assess the feasibility of applying steam-flaking as a processing technique for grains fed to ruminants, cereal grains (maize, wheat, barley, and sorghum) were processed by steam-flaking (steam temperature 105 °C, heating time 45 min), and chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were adopted to evaluate its effects. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization in all grains was also greatly improved, as steam-flaking disintegrates the crystal structure of cereal starch, which may subsequently facilitate moisture absorption and swelling. Theoretical maximum gas production after steam-flaking showed no great difference; however, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, there was no effect of steam-flaking on total volatile fatty acids, but a decrease in the acetate-to-propionate ratio was observed in the current in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains. The collective findings suggest that steam-flaking of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the utilization of steam-flaking would be practical for improving the quality of common cereal grains.
Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing
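The theoretical maximum gas production and the rate of gas production reported above are commonly obtained by fitting a first-order model such as GP(t) = A(1 − e^(−ct)) to cumulative gas readings; the model choice here is an assumption, not stated in the abstract, and the data are synthetic. A sketch of such a fit:

```python
# Minimal sketch: estimating theoretical maximum gas production (A) and
# the fractional rate (c) by fitting a first-order model GP(t) = A*(1 -
# exp(-c*t)) to cumulative gas readings. The model choice and the data
# are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gp(t, A, c):
    return A * (1.0 - np.exp(-c * t))

t = np.array([2, 4, 8, 12, 24, 36, 48.0])       # incubation time, h
gas = np.array([8, 15, 26, 34, 48, 54, 57.0])   # cumulative gas, mL

(A, c), _ = curve_fit(gp, t, gas, p0=(60.0, 0.05))
print(f"max gas production A = {A:.1f} mL, rate c = {c:.3f} /h")
```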
Procedia PDF Downloads 270
51 The Effects of Green Manure Returning on Properties and Fungal Communities in Vanadium/Titanium Magnetite Tailings
Authors: Hai-Hong Gu, Yan-Jun Ai, Zheng Zhou
Abstract:
Vanadium and titanium are rare metals with superior properties and are important resources in aerospace, aviation, and military applications. Vanadium/titanium magnetite deposits are mostly ultra-lean ores, and large quantities of tailings have been produced during their exploitation. The tailings are characterized by loose structure, poor nutrient status, complex composition, and high trace metal contents. Returning green manure has been shown not only to increase plant biomass and soil nutrients but also to change the bioavailability of trace metals and the microbial community structure. Fungi play an important role in decomposing organic matter and increasing soil fertility, and the application of organic matter in turn affects the community structure of fungi. The effects of returning a green manure plant, alfalfa (Medicago sativa L.), to the tailings in situ on the fungal community structure, nutrients, and bioavailability of trace metals in vanadium/titanium magnetite tailings were investigated in a pot experiment. The results showed that fungal community diversity and richness increased after alfalfa green manure was returned in situ. The dominant phyla of the fungal community were Ascomycota, Basidiomycota, and Ciliophora; notably, the phylum Ciliophora is rare in ordinary soil but was found to be dominant in the tailings. Meanwhile, the nutrient properties and various trace metals may shape the microbial communities by affecting the abundance of fungi. Plant growth was stimulated, and the available N and organic C were significantly improved in the vanadium/titanium magnetite tailings with long-term returning of alfalfa green manure. Moreover, the DTPA-TEA extractable Cd and Zn concentrations in the tailings were reduced by 7.72%–23.8% and 8.02%–24.4%, respectively, compared with those in the non-returning treatment. These results suggest that returning alfalfa green manure could be a potential approach to improving fungal community structure and restoring mine tailing ecosystems.
Keywords: fungal community, green manure returning, vanadium/titanium magnetite tailings, trace metals
Procedia PDF Downloads 70
50 Measures of Phylogenetic Support for Phylogenomics and the Whole Genomes of Two Lungfish Restate Lungfish and Origin of Land Vertebrates
Authors: Yunfeng Shan, Xiaoliang Wang, Youjun Zhou
Abstract:
Whole-genome data from two lungfish species, along with other species, present a valuable opportunity to reassess the longstanding debate regarding the evolutionary relationships among tetrapods, lungfishes, and coelacanths. However, bootstrap support has become outdated for large-scale phylogenomic data, and without robust phylogenetic support, phylogenetic trees become meaningless. It is therefore necessary to re-evaluate the phylogenies of tetrapods, lungfishes, and coelacanths using novel measures of phylogenetic support specifically designed for phylogenomic data, as the previous phylogenies were based on 100% bootstrap support. Our findings consistently provide strong evidence favoring lungfish as the closest living relative of tetrapods. This conclusion is based on high gene support confidence, with confidence intervals exceeding 95%, high internode certainty, and a high gene concordance factor. The evidence stems from two datasets containing the recently deciphered whole genomes of two lungfish species, as well as five previous datasets derived from lungfish transcriptomes. These results yield fresh insights into the three hypotheses regarding the phylogenies of tetrapods, lungfishes, and coelacanths. Importantly, these hypotheses are not mere conjectures but are substantiated by a significant number of genes. Analyzing real biological data further demonstrates that the inclusion of additional taxa diminishes the number of orthologues and leads to more diverse tree topologies. Consequently, gene trees and species trees may not be identical even when whole-genome sequencing data are utilized. However, many gene trees can accurately reflect the species tree if an appropriate number of taxa, typically ranging from six to ten, is sampled. It is therefore crucial to carefully select the number of taxa and an appropriate outgroup, while excluding fast-evolving taxa as outgroups, to mitigate the adverse effects of long-branch attraction (LBA) and achieve an accurate reconstruction of the species tree. This is particularly important as more whole-genome sequencing data become available.
Keywords: gene support confidence (GSC), origin of land vertebrates, coelacanth, two whole genomes of lungfishes, confidence intervals
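A gene concordance factor is, in essence, the fraction of gene trees that contain a given clade. The pure-Python sketch below illustrates the idea on toy clade sets; real pipelines operate on full (rooted or unrooted) bipartitions of estimated gene trees, and the data here are invented.

```python
# Minimal sketch of a gene concordance factor: the fraction of gene trees
# that contain a given clade (here, lungfish + tetrapods). Gene trees are
# reduced to sets of clades over toy taxa; real pipelines work on the
# bipartitions of full gene trees.
gene_tree_clades = [
    {frozenset({"lungfish", "tetrapod"}), frozenset({"coelacanth"})},
    {frozenset({"lungfish", "tetrapod"}), frozenset({"coelacanth"})},
    {frozenset({"coelacanth", "tetrapod"}), frozenset({"lungfish"})},
    {frozenset({"lungfish", "tetrapod"}), frozenset({"coelacanth"})},
]

def concordance_factor(clade, trees):
    return sum(clade in t for t in trees) / len(trees)

cf = concordance_factor(frozenset({"lungfish", "tetrapod"}), gene_tree_clades)
print(f"gene concordance factor = {cf:.2f}")   # 0.75 on this toy data
```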
Procedia PDF Downloads 87
49 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., colonies of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and the associated sizing algorithms, and to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. The measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) has scarcely been reported in the literature and is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted Type I anomalies, is markedly less than that for anomalies with clustering error, denoted Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
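The specific data mining technique is not named in the abstract. As one illustrative possibility only, the sketch below flags likely Type II anomalies with a decision tree trained on ILI-reported features; the features, labels, and values are synthetic placeholders, not the Alberta pipeline data.

```python
# Minimal sketch of the proposed classification step: flag ILI anomalies
# whose reported length likely carries clustering error (Type II) using
# a decision tree trained on ILI-reported features. Features, labels,
# and values are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# ILI-reported features: [length (mm), depth (%wt), neighbours within 25 mm]
X = np.array([[30, 15, 0], [220, 20, 4], [45, 35, 1],
              [310, 25, 6], [60, 10, 0], [180, 30, 3]])
y = np.array([0, 1, 0, 1, 0, 1])   # 0 = Type I, 1 = Type II (field-verified)

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[250, 22, 5]]))  # likely Type II on this toy model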
Procedia PDF Downloads 309
48 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission
Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao
Abstract:
Dual clutch transmissions (DCTs) offer high gearshift comfort. Hydraulic multi-disk clutches are the key components of a DCT, and their engagement determines shifting comfort. Clutch prefill establishes an initial engagement at which the clutches just contact each other without yet transmitting substantial torque from the engine; this initial engagement point is called the touch point. Open-loop control is typically implemented for the clutch prefill, and many uncertainties, such as oil temperature and clutch wear, significantly affect the prefill, potentially resulting in an inappropriate touch point. Underfill causes engine flare during a gearshift, while overfill causes clutch tie-up, both of which deteriorate the shifting comfort of a DCT. It is therefore important to give the clutch prefill an adaptive capability with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system, including the variable force solenoid and clutch piston, is presented and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill: the former strongly affects the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gear shifting, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the clutch slip observed during a gearshift, improving the next prefill: the stable fill pressure is increased in proportion to the clutch slip in the case of underfill and decreased by a constant value in the case of overfill. The entire strategy is designed in Simulink/Stateflow and implemented, with optimization, in the transmission control unit. Road vehicle test results show that the strategy realizes its adaptive capability and improves shifting comfort.
Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid
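A minimal sketch of the adaptive update rule described above follows: pressure rises in proportion to measured clutch slip on underfill and falls by a constant step on overfill. The gains, limits, and units are illustrative assumptions, not the paper's calibration values.

```python
# Minimal sketch of the adaptive stable-fill-pressure update described
# above: raise the pressure in proportion to measured clutch slip when
# the clutch underfills, lower it by a constant step when it overfills.
# Gains, limits, and units are illustrative assumptions.
def update_stable_fill_pressure(p_stable, clutch_slip, overfill,
                                k_slip=0.02, step_down=5.0,
                                p_min=50.0, p_max=400.0):
    """Return the stable fill pressure (kPa) for the next prefill."""
    if overfill:                       # clutch tie-up observed
        p_stable -= step_down          # back off by a constant value
    elif clutch_slip > 0.0:            # engine flare: underfill
        p_stable += k_slip * clutch_slip
    return min(max(p_stable, p_min), p_max)

# Example: 200 rpm of slip during the last shift nudges the pressure up
print(update_stable_fill_pressure(180.0, clutch_slip=200.0, overfill=False))
```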
Procedia PDF Downloads 308
47 CFD-DEM Modelling of Liquid Fluidizations of Ellipsoidal Particles
Authors: Esmaeil Abbaszadeh Molaei, Zongyan Zhou, Aibing Yu
Abstract:
The applications of liquid fluidization have increased in many industries, such as particle classification, backwashing of granular filters, crystal growth, leaching and washing, and bioreactors, owing to highly efficient liquid–solid contact, favorable mass and heat transfer, high operational flexibility, and reduced back-mixing of phases. In most of these multiphase operations, the particle properties, i.e., size, density, and shape, may change during the process because of attrition, coalescence, or chemical reactions. Previous studies, whether experimental or numerical, have mainly focused on liquid–solid fluidized beds containing spherical particles; the role of particle shape in the hydrodynamics of liquid fluidized beds is still not well understood. Here, a three-dimensional Discrete Element Model (DEM) is coupled with Computational Fluid Dynamics (CFD) to study the influence of particle shape on particle and liquid flow patterns in liquid–solid fluidized beds. In the simulations, ellipsoidal particles are used to study the shape factor, since they can represent a wide range of particle shapes, from oblate through spherical to prolate. Different particle shapes, from oblate (disk-shaped) to elongated (rod-shaped) particles, are selected to investigate the effect of aspect ratio on flow characteristics such as the general particle and liquid flow patterns, pressure drop, and particle orientation. The model is first verified against experimental observations, and further detailed analyses are then made. It was found that spherical particles showed a uniform particle distribution in the bed, which resulted in a uniform pressure drop along the bed height. For particles with aspect ratios less than one (disk-shaped), however, some particles were carried into the freeboard region, the interface between the bed and the freeboard was not easy to determine, and a few particles tended to leave the bed. Prolate particles, on the other hand, showed different behaviour in the bed: they caused an unstable interface, and some flow channeling was observed at low liquid velocities. Because of the non-uniform particle flow patterns for particles with aspect ratios below one (oblate) and above one (prolate), the pressure drop distribution in the bed was not as uniform as that found for spherical particles.
Keywords: CFD, DEM, ellipsoid, fluidization, multiphase flow, non-spherical, simulation
Procedia PDF Downloads 309
46 The Sapir-Whorf Hypothesis and Multicultural Effects on Translators: A Case Study from Chinese Ethnic Minority Literature
Authors: Yuqiao Zhou
Abstract:
The Sapir-Whorf hypothesis (SWH) emphasizes the effect produced by language on people's minds. According to linguistic relativity, language has evolved over the course of human life on earth, and, in turn, the acquisition of language shapes learners' thoughts. Despite the attention drawn by the SWH, few scholars have attempted to analyse people's thoughts via their literary works, and yet the linguistic choices that create a narrative can enable us to examine its writer's thoughts. Still less work has been done on the impact of language on the minds of bilingual people. Internationalization has resulted in an increasing number of bilingual and multilingual individuals. In China, where more than one hundred languages are used for communication, most people are bilingual in Mandarin Chinese (the official language of China) and their own dialect. Taking as its corpus the ethnic minority myth of Ge Sa-er Wang by Alai and its English translation by Goldblatt and Lin, this paper aims to analyse the effects of culture on bilingual people's minds. It first analyses Alai's thoughts as reflected in the original version of Ge Sa-er Wang; next, it examines the thoughts of the two translators by looking at the translation choices made in the English version; finally, it compares the cultural influences evident in the thoughts of Alai on the one hand and of Goldblatt and Lin on the other. Whereas Alai speaks two Sino-Tibetan languages, Mandarin Chinese and Tibetan, Goldblatt and Lin speak two languages from different families: Mandarin Chinese (a Sino-Tibetan language) and English (an Indo-European language). The results reveal two systems of thought existing in the translators' minds; Alai's text, on the other hand, does not reveal a significant influence from North China, where Mandarin Chinese originated. The findings reveal the inconsistency of a second language's influence on people's minds. Notably, they suggest that the more different the two languages are, the greater the influence produced by the second-language culture on people's thoughts. It is hoped that this research will expand the scope of the SWH as well as shed light on future translation studies of ethnic minority literature.
Keywords: Sapir-Whorf hypothesis, cultural translation, cultural-specific items, Ge Sa-er Wang, ethnic minority literature, Tibet
Procedia PDF Downloads 112
45 A Single-Use Endoscopy System for Identification of Abnormalities in the Distal Oesophagus of Individuals with Chronic Reflux
Authors: Nafiseh Mirabdolhosseini, Jerry Zhou, Vincent Ho
Abstract:
The dramatic global rise in acid reflux has led to oesophageal adenocarcinoma (OAC) becoming the fastest-growing cancer in developed countries. While gastroscopy with biopsy is used to diagnose OAC patients, this labour-intensive and expensive process is not suitable for population screening. This study aims to design, develop, and implement a minimally invasive system to capture optical data of the distal oesophagus for rapid screening of potential abnormalities. To develop the system and understand user requirements, a user-centric approach employing co-design strategies was adopted: target user segments were identified, and 38 patients and 14 health providers were interviewed. Next, the technical requirements were developed in consultation with industry. A minimally invasive optical system, designed with patient comfort in mind, was developed; it consists of a sensing catheter, a controller unit, and an analysis program. The procedure takes only 10 minutes to perform and does not require cleaning afterwards, since the catheter is single-use. A prototype system was evaluated for safety and efficacy in both laboratory and clinical settings. The prototype performed successfully when submerged in simulated gastric fluid, showing no evidence of erosion after 24 hours, and effectively recorded video of the mid-distal oesophagus of a healthy volunteer (34-year-old male). The recorded images were used to develop an automated program to identify abnormalities in the distal oesophagus; further data from a larger clinical study will be used to train this program. The system allows quick visual assessment of the lower oesophagus in primary care settings and can serve as a screening tool for oesophageal adenocarcinoma. In addition, the system can be coupled with 24-hour ambulatory pH monitoring to better correlate oesophageal physiological changes with reflux symptoms, and it can provide additional information on lower oesophageal sphincter functions, such as opening times and bolus retention.
Keywords: endoscopy, MedTech, oesophageal adenocarcinoma, optical system, screening tool
Procedia PDF Downloads 87
44 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-grained multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload part of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. What is needed, therefore, is a systematic approach that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors; it supports task partition, communication, and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to that of the pure FPGA solution, with comparable energy efficiency, while using less than 35% of the resources.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
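The servant-execution-flow model itself is not specified in detail in the abstract. As a purely conceptual sketch, the following illustrates staged task partition with per-stage synchronization across heterogeneous "servants"; the FPGA/GPU/CPU servants here are plain Python placeholder functions, not generated accelerator code.

```python
# Conceptual sketch only: partitioning a pipeline across heterogeneous
# "servants" (stand-ins for FPGA, GPU, and CPU units) with explicit
# per-stage synchronization, loosely in the spirit of a
# servant-execution-flow abstraction.
from concurrent.futures import ThreadPoolExecutor

def fpga_servant(frame):      # e.g., front-end acquisition/filtering
    return [x * 2 for x in frame]

def gpu_servant(frame):       # e.g., contrast stretching
    lo, hi = min(frame), max(frame)
    return [(x - lo) / (hi - lo + 1e-9) for x in frame]

def cpu_servant(frame):       # e.g., reduction/statistics
    return sum(frame) / len(frame)

frames = [[1, 5, 3], [7, 2, 9], [4, 4, 8]]
with ThreadPoolExecutor(max_workers=3) as pool:
    stage1 = list(pool.map(fpga_servant, frames))   # sync point per stage
    stage2 = list(pool.map(gpu_servant, stage1))
    results = list(pool.map(cpu_servant, stage2))
print(results)
```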
Procedia PDF Downloads 118
43 Exo-III Assisted Amplification Strategy through Target Recycling of Hg²⁺ Detection in Water: A GNP Based Label-Free Colorimetry Employing T-Rich Hairpin-Loop Metallobase
Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He
Abstract:
Due to the deleterious environmental and health effects of Hg²⁺ ions, various online detection methods, apart from traditional analytical tools, have been developed by researchers. Biosensors, especially labeled, label-free, colorimetric, and optical sensors, have advanced sensitive detection. However, a gap remains in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity, using unmodified gold nanoparticles (uGNPs) as the indicator. The two T-rich metallobase hairpin-loop structures tested in the study were 5’- CTT TCA TAC ATA GAA AAT GTA TGT TTG -3’ (HgS1) and 5’- GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G -3’ (HgS2). The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases at both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Exo-III incubation then enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments adsorb onto the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in color from blue (the aggregated state in the absence of Hg²⁺) to pink (the dispersed state in the presence of Hg²⁺, with adsorption of ssDNA fragments) can be observed and analyzed by UV spectrometry. An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry with nanomolar detection using uGNPs has been achieved and is under further optimization to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute toward uGNP-based ultrasensitive, rapid, onsite, label-free colorimetric detection.
Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification
Procedia PDF Downloads 311
42 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network under complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that the method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation that is expected to further advance the field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
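One common, lossless way to resample a high-resolution image into a set of low-resolution images is a space-to-depth rearrangement, sketched below with PyTorch's pixel_unshuffle. Whether the paper's Guided Image Reconstruction Module works exactly this way is an assumption; the shapes are illustrative.

```python
# Minimal sketch of resampling a high-resolution image into a set of
# low-resolution images via space-to-depth (pixel unshuffle). Whether
# the paper's Guided Image Reconstruction Module uses exactly this
# operator is an assumption.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 512, 1024)        # one RGB road-scene image
low_res_set = F.pixel_unshuffle(x, downscale_factor=2)
print(low_res_set.shape)                 # torch.Size([1, 12, 256, 512])

# The 12 output channels hold, for each of the 3 input channels, its
# four 2x2 polyphase sub-images; pixel_shuffle inverts the rearrangement.
recon = F.pixel_shuffle(low_res_set, upscale_factor=2)
print(torch.allclose(recon, x))          # True: no information is lost
```

Because the rearrangement is invertible, the network can trade spatial resolution for channel depth without discarding pixels, which is one route to lower per-layer compute at a given receptive field.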
Procedia PDF Downloads 23
41 Being an English Language Teaching Assistant in China: Understanding the Identity Evolution of Early-Career English Teacher in Private Tutoring Schools
Authors: Zhou Congling
Abstract:
Private tutoring has emerged as an indispensable facet of acquiring language proficiency beyond formal educational settings. Notably, there has been a discernible surge in demand for private English tutoring geared specifically towards preparation for internationally recognized gatekeeping examinations such as IELTS, TOEFL, GMAT, and GRE. This trajectory has engendered an escalating need for English Language Teaching Assistants (ELTAs) in Private Tutoring Schools (PTSs). The objective of this study is to unravel the process by which these ELTAs formulate their professional identities in the nascent stages of their careers as English educators, and to delineate their perceptions of their professional trajectories. The construct of language teacher identity is inherently multifaceted, shaped by an amalgamation of individual, societal, and cultural determinants, and it exerts a profound influence on how language educators navigate their professional responsibilities. This investigation scrutinizes the experiences and factors that mold the identities of ELTAs in PTSs, particularly after the completion of their language-oriented academic programs. Employing a qualitative narrative inquiry approach, the study delves into how ELTAs conceptualize their professional identities and envision their future roles. The research methodology involves purposeful sampling and in-depth, semi-structured interviews with ten participants. Data are analyzed using Barkhuizen's Short Story Analysis, a method designed to explore a three-dimensional narrative space and elucidate the interplay of personal experiences and societal contexts in shaping the identities of ELTAs. The anticipated outcomes are poised to contribute substantively to a holistic comprehension of ELTA identity formation, with practical implications for diverse stakeholders in the private tutoring sector. This research endeavors to furnish insights into strategies for retaining ELTAs and enhancing overall service quality in PTSs.
Keywords: China, English language teacher, narrative inquiry, private tutoring school, teacher identity
Procedia PDF Downloads 56
40 Large-Scale Experimental and Numerical Studies on the Temperature Response of Main Cables and Suspenders in Bridge Fires
Authors: Shaokun Ge, Bart Merci, Fubao Zhou, Gao Liu, Ya Ni
Abstract:
This study investigates the thermal response of main cables and suspenders in suspension bridges subjected to vehicle fires, integrating large-scale gasoline pool fire experiments with numerical simulations. Focusing on a suspension bridge in China, the research examines the impact of wind speed, pool size, and lane position on flame dynamics and temperature distribution along the cables. The results indicate that higher wind speeds and larger pool sizes markedly increase the mass burning rate, causing flame deflection and a non-uniform temperature distribution along the cables. At a wind speed of 1.56 m/s, maximum temperatures reached approximately 960 ℃ near the cable base in emergency-lane fires and 909 ℃ at a height of 1.6 m in slow-lane fires, underscoring the heightened thermal risk from emergency-lane fires. The study recommends a zoning strategy for cable fire protection: a 0–12.8 m protection zone with a target temperature of 1000 ℃ and a 12.8–20.8 m zone with a target temperature of 700 ℃, both with 90-minute fire resistance. This approach, based on precise temperature distribution data from the experimental and simulation results, provides a vital reference for the fire protection design of suspension bridge cables. Understanding the cable temperature response during vehicle fires is crucial for developing fire protection systems, as it dictates the necessary structural protection, the fire resistance duration, and the maximum temperatures to be mitigated. The challenges of controlling environmental wind in large-scale fire tests are also addressed, along with a call for further research on fire behavior mechanisms and the structural temperature response of cable-supported bridges under varying wind conditions. In conclusion, the proposed zoning strategy enhances the theoretical understanding of the near-field temperature response in bridge fires and supports the design of passive fire protection systems for bridge cables, safeguarding their integrity under extreme fire conditions.
Keywords: bridge fire, temperature response, large-scale experiment, numerical simulations, fire protection
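The recommended zoning can be expressed as a simple lookup, sketched below with the zone boundaries and target temperatures stated above; the treatment of points beyond 20.8 m as outside the zoned extent is an assumption.

```python
# Minimal sketch encoding the recommended cable fire-protection zoning:
# 0-12.8 m designed to a 1000 C target and 12.8-20.8 m to a 700 C target,
# both for 90-minute fire resistance. Behaviour beyond 20.8 m is an
# assumption (treated as outside the protection zones).
def protection_target(height_m: float):
    """Return (target_temp_C, resistance_min) for a point on the cable."""
    if 0.0 <= height_m <= 12.8:
        return (1000, 90)
    if 12.8 < height_m <= 20.8:
        return (700, 90)
    return None          # beyond the zoned extent of the cable

print(protection_target(5.0))    # (1000, 90)
print(protection_target(15.0))   # (700, 90)
```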
Procedia PDF Downloads 10
39 Hydrodynamics and Hydro-acoustics of Fish Schools: Insights from Computational Models
Authors: Ji Zhou, Jung Hee Seo, Rajat Mittal
Abstract:
Fish move in groups for foraging, reproduction, predator protection, and hydrodynamic efficiency. The predator protection afforded by schooling involves the "many eyes" theory: a group has a higher probability of detecting a predator, and the reduced per-individual visual signature scales with school size, offering per-capita protection. The "confusion effect" also makes it hard for predators to target individual prey in a group. These benefits, however, all concern vision-based sensing and overlook sound-based detection. Fish, including predators, possess sophisticated sensory systems for pressure waves and underwater sound: the lateral line system detects acoustic waves, the otolith organs sense infrasound, and sharks use an auditory system attuned to low-frequency sounds. Among the sound generation mechanisms of fish, dipole sound is related to the hydrodynamic pressure forces on the body surface of the fish, and this pressure is affected by group swimming. Swimming within a group could therefore alter the hydrodynamic noise signature of a fish and possibly serve as an additional protection afforded by schooling, but no study to date has explored this effect. BAUVs (biomimetic autonomous underwater vehicles) with fin-like propulsors could likewise reduce acoustic noise without compromising performance, addressing anthropogenic noise pollution in marine environments. In this study, we therefore used our in-house immersed-boundary-method flow and acoustic solver, ViCar3D, to simulate fish schools consisting of four swimmers in the classic "diamond" configuration, and we discuss the feasibility of achieving higher swimming efficiency while controlling the far-field sound signature of the school. We examine the effects of the relative phase of fin flapping among the swimmers, and the simulation results indicate that the flapping phase is a dominant factor in both thrust enhancement and the total sound radiated into the far field by a group of swimmers. For fish in the diamond configuration, a suitable combination of relative phase differences between the leading and trailing pairs can yield better swimming performance with significantly lower hydroacoustic noise.
Keywords: fish schooling, biopropulsion, hydrodynamics, hydroacoustics
Procedia PDF Downloads 61
38 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of a remote sensing instrument is essential but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and the spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2ⁿ − 1 (n is the quantization bit number of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹m⁻²µm⁻¹ and −3.5 W·sr⁻¹m⁻²µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹m⁻²µm⁻¹ and the high point is 30.5 W·sr⁻¹m⁻²µm⁻¹; and the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR; the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr⁻¹m⁻²µm⁻¹, and the radiometric resolution is subsequently calculated to be about 0.1845 W·sr⁻¹m⁻²µm⁻¹. Moreover, to validate the results, the measured radiance is compared with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively. The relative error of the calibration is within 6.6%.
Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
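A minimal sketch of the calibration fit and dynamic-range computation follows, using the formulas above. The DN/radiance pairs are illustrative, and n = 12 bits is an assumption chosen because it reproduces the reported high point of 30.5 W·sr⁻¹m⁻²µm⁻¹ from G = 0.0083 and B = −3.5.

```python
# Minimal sketch of the in-flight calibration described above: fit
# L = G*DN + B to simulated at-sensor radiances, then derive the dynamic
# range and response linearity. DN/radiance pairs are illustrative; n=12
# is an assumption consistent with the reported values.
import numpy as np

dn = np.array([400.0, 1450.0, 2600.0])            # three-gray-scale target
L = 0.0083 * dn - 3.5 + np.random.default_rng(1).normal(0, 0.05, 3)

G, B = np.polyfit(dn, L, 1)                        # least-squares line
n = 12                                             # assumed quantization bits
L_high = G * (2**n - 1) + B                        # high point of range
L_low = B                                          # low point (DN = 0)
linearity = np.corrcoef(dn, L)[0, 1]               # response linearity

print(f"G={G:.4f}, B={B:.2f}, range=[{L_low:.1f}, {L_high:.1f}], "
      f"linearity={linearity:.4f}")
```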
Procedia PDF Downloads 27037 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)
Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami
Abstract:
Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of the progress made in adhesive dentistry, composite restorations still fail due to secondary caries, which occur due to environmental factors in the oral cavity. A precise assessment of the effectiveness of a restoration's marginal seal is therefore highly required. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2 × 2 mm) were prepared at three locations, mid-coronal, cervical, and root, in bovine incisors in two groups (SE and PA). While the self-etching adhesive (Clearfil SE Bond) was applied to both groups, Group PA was first pretreated with phosphoric acid etching (K-Etchant gel). Both groups were then restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sectional images were obtained from each cavity using OCT at a 1310-nm wavelength, at 0°, 60°, and 120°. Scanning was repeated after two months to monitor gap progression. The average percentage of gap length was then calculated using image analysis software, and the difference in means between the two groups was statistically analyzed by t-test. The results were subsequently confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: Pretreatment with phosphoric acid etching (Group PA) led to significantly larger gaps in the mid-coronal and cervical cavities than in Group SE, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in root cavities were significantly larger than those in mid-coronal and cervical cavities within the same group. This study investigated the effect of phosphoric acid etching on gap progression in composite restorations; in conclusion, phosphoric acid etching did not reduce gap formation in any region of the tooth. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when pre-etching was added. Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives
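The statistical step above is a two-sample comparison of per-specimen mean gap-length percentages. A minimal sketch, assuming made-up gap percentages (gap length as a share of the cavity margin, per specimen) rather than the study's actual measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical per-specimen mean gap-length percentages for one tooth region.
gap_SE = np.array([12.1, 9.8, 14.3, 11.0, 10.5])   # self-etch adhesive only
gap_PA = np.array([18.7, 21.2, 16.9, 19.4, 22.0])  # phosphoric acid + self-etch

# Independent two-sample t-test on the difference in group means.
t, p = stats.ttest_ind(gap_SE, gap_PA)
print(f"mean SE = {gap_SE.mean():.1f}%, mean PA = {gap_PA.mean():.1f}%, "
      f"t = {t:.2f}, p = {p:.4f}")
```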
Procedia PDF Downloads 22136 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local gas networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because there is not enough hydrogen to allow a 100% conversion or because existing infrastructure limits the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the most numerous end-use technologies are small-scale appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We present tests on a burner designed for space heating and optimized for natural gas as increasing percentages of hydrogen (from 25% upward) were burned, and we explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. Results show emissions, temperature, and flame length as functions of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out; the present dataset should be a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may set absolute limits on the allowable hydrogen percentage. The first is flame blowback: our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. The second, which would be important in domestic applications, is the acoustic performance of these burners: we describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive 'screech'. Equipment designers and users will need ways to avoid or mitigate this if performance is to be deemed acceptable. Keywords: blends, operational, domestic appliances, future system operation
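A point worth quantifying is that a mole (volume) fraction of hydrogen overstates its share of the delivered heat, because hydrogen's volumetric heating value is roughly a third of methane's. The sketch below uses nominal literature lower heating values and treats natural gas as pure methane, both simplifying assumptions:

```python
# Nominal volumetric lower heating values at standard conditions (MJ/m^3).
LHV_H2, LHV_CH4 = 10.8, 35.8

def energy_fraction_h2(x_h2: float) -> float:
    """Fraction of the blend's heat input supplied by hydrogen at mole fraction x_h2."""
    return x_h2 * LHV_H2 / (x_h2 * LHV_H2 + (1 - x_h2) * LHV_CH4)

for x in (0.25, 0.60, 0.65):
    print(f"{x:.0%} mol H2 -> {energy_fraction_h2(x):.1%} of heat input")
```

On these assumptions, even the 60–65 mol% blowback threshold corresponds to hydrogen supplying only about a third of the heat input, which is relevant when interpreting how far a blend decarbonizes an appliance.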
Procedia PDF Downloads 2335 Process Flows and Risk Analysis for the Global E-SMC
Authors: Taeho Park, Ming Zhou, Sangryul Shim
Abstract:
With the emergence of the global economy, today's business environment is more competitive than ever, and many supply chain (SC) strategies and operations have been significantly altered over the past decade to cope with the greater complexity and risk imposed on global business. First, offshoring and outsourcing are increasingly adopted as operational strategies, and manufacturing continues to move to more advantageous locations to enhance competitiveness. Second, international operations challenge a company's SC system. Third, the products traded in the SC system are not just physical goods but also digital goods (e.g., software, e-books, music, and video materials). Three main flows are involved in fulfilling the activities in an SC system: the physical flow, the information flow, and the financial flow. Advances in the Internet and electronic communication technologies have enabled companies to perform these flows electronically, resulting in the advent of electronic supply chain management (e-SCM) systems. An SC system for digital goods differs somewhat from one for physical goods, but it involves many similar or identical SC activities and flows; for example, as with physical goods, many third parties are involved in producing digital goods, supplying components and even final products. This research aims at identifying the process flows of both physical and digital goods in an SC system, and then investigating all the risk elements involved in the physical, information, and financial flows during the fulfilment of SC activities. Many risks are inherent in an e-SCM system. Some may have a severe impact on a company's business, while others occur frequently but are not detrimental enough to jeopardize a company. Companies should therefore assess the impact and frequency of those risks and prioritize them in terms of severity, frequency, budget, and time so that they can be carefully managed. We identified risks involved in the global trading of physical and digital goods in four categories: environmental risk, strategic risk, technological risk, and operational risk. The significance of those risks was then investigated through a survey that asked companies about the frequency and severity of the identified risks and whether they had faced those risks in the past. Since the characteristics and supply chain flows of digital goods vary by industry and by country, it is more meaningful and useful to analyze risks by industry and country; to this end, more data from each industry sector and country should be collected, which could be accomplished in future research. Keywords: digital goods, e-SCM, risk analysis, supply chain flows
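The prioritization step described above is essentially a severity-by-frequency scoring of surveyed risks. A minimal sketch follows; the four categories come from the abstract, while the example risks and scores are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str      # environmental / strategic / technological / operational
    severity: int      # 1 (negligible) .. 5 (detrimental)
    frequency: int     # 1 (rare) .. 5 (frequent)

    @property
    def score(self) -> int:
        # Simple impact-times-likelihood ranking score.
        return self.severity * self.frequency

risks = [
    Risk("currency fluctuation", "environmental", 4, 3),
    Risk("digital piracy", "operational", 3, 5),
    Risk("partner lock-in", "strategic", 4, 2),
    Risk("payment system outage", "technological", 5, 2),
]

# Rank risks so the most critical are maintained first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:2d}  {r.category:13s} {r.name}")
```

In practice the scores would be derived from the survey responses on frequency and severity, with budget and time constraints applied as secondary ranking criteria.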
Procedia PDF Downloads 42134 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance
Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder
Abstract:
Three million deaths worldwide are attributed to obesity, yet the biomolecular mechanisms that link adiposity to subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension, and other cardiovascular diseases. This study applies longitudinal, quantitative, high-throughput multi-omics methodologies (genomics, epigenomics, transcriptomics, glycoproteomics, etc.) to blood samples to develop multigenic, multi-analyte signatures associated with weight gain and insulin resistance. Participants underwent a 30-day period of weight gain via excessive caloric intake, followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three time points per patient: baseline, peak weight, and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before their samples were processed with the longitudinal multi-omic technologies. Using methods such as machine learning, clustering, and network analysis, this comparative study revealed a wealth of biomolecular changes associated with weight gain. Pathways of interest included lipid remodeling, the acute inflammatory response, and glucose metabolism. Some of these biomolecules returned to baseline levels as the patients returned to normal weight, whilst others remained elevated. IR patients exhibited key differences from IS patients in the regulation of the inflammatory response at all time points. These signatures suggest differential metabolic and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified at multiple levels: gene expression, epigenetic change, transcriptional regulation, and glycosylation. This study not only contributed new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies for producing and processing data across many comprehensive omics levels. Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes
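One common analysis step for longitudinal data of this shape is to normalize each analyte across the three time points and cluster the resulting trajectories (e.g., "rises then returns to baseline" vs. "rises and stays elevated"). A minimal sketch under that assumption, with random stand-in data rather than the study's measurements:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 200 analytes x 3 time points (baseline, peak weight, post weight loss); placeholder data.
X = rng.normal(size=(200, 3))

# Per-analyte z-score so clustering compares trajectory shape, not abundance.
Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Hierarchical (Ward) clustering into 4 trajectory groups.
labels = fcluster(linkage(Z, method="ward"), t=4, criterion="maxclust")

for c in range(1, 5):
    members = Z[labels == c]
    print(f"cluster {c}: {len(members)} analytes, "
          f"mean trajectory {members.mean(axis=0).round(2)}")
```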
Procedia PDF Downloads 42933 Industry Symbiosis and Waste Glass Upgrading: A Feasibility Study in Liverpool Towards Circular Economy
Authors: Han-Mei Chen, Rongxin Zhou, Taige Wang
Abstract:
Glass is widely used in everyday life, from glass bottles for beverages to architectural glass for various forms of glazing. Although most used glass is recycled in the UK, the single-use-then-recycle procedure still generates a lot of waste, as it takes intact glass through smashing, re-melting, and remanufacturing. These processes carry massive energy consumption and a large loss of embodied energy and economic value compared to re-use, which is better aligned with a 'zero carbon' target. As a tourism city, Liverpool consumes more glass bottles than most less leisure-focused cities; it is therefore vital for Liverpool to find a low-carbon approach to upgrading single-use glass bottles. This project aims to assess the feasibility of an industrial symbiosis and upgrading framework for glass and to investigate ways of achieving it. It is significant to Liverpool's future industrial strategy, since it provides an opportunity to support post-COVID economic recovery through industrial symbiosis and upgraded waste management in Liverpool while responding to the climate emergency. In addition, it will inform local government policy on glass bottle reuse and recycling in North West England and serve as good practice to be recommended to other areas of the UK. First, a critical literature review of glass waste strategies in the UK and of industrial symbiosis practices worldwide was conducted. Second, mapping, data collection, and analysis, supported by site visits to 16 local waste recycling centres, revealed the current life-cycle chain and the strong links between glass re-use and its upgrading potential. The results demonstrate how key factors influence the development of a circular industrial-symbiosis business model for beverage glass bottles. The current waste management procedures of the glass bottle industry, its business model, supply chain, and material flow have been reviewed, and the various opportunities for up-valuing glass bottles have been investigated with a view to industrial symbiosis in Liverpool. Finally, an up-valuing business model has been developed for an industrial symbiosis framework for glass in Liverpool. For glass bottles, there are two possibilities: 1) focus on upgrading processes towards re-use rather than single-use and recycling, and 2) focus on 'smart' re-use and recycling, leading to optimised value in other sectors and creating a wider industrial symbiosis for a multi-level, circular economy. Keywords: glass bottles, industry symbiosis, smart re-use, waste upgrading
Procedia PDF Downloads 10632 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiver. It can effectively solve the problem of data transmission under large data volumes, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for the transmission of remote sensing images, where the data volume is huge and spectrum resources are limited. At the transmitter, the semantic information of the remote sensing images must be extracted, but there are problems: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiver. We therefore adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN, to extract the image semantic features. First, we pre-process the remote sensing images to improve their resolution, in order to obtain images with more semantic information: we use the wavelet transform to decompose each image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency component, and finally apply the inverse wavelet transform to obtain the preprocessed image. The improved Vision-Transformer structure then serves as the semantic encoder to extract and transmit the semantic information of the remote sensing images: it trains better on the huge data volume, extracts better image semantic features, and uses a multi-layer self-attention mechanism to better capture the correlations between semantic features and reduce redundant features. Second, to improve coding efficiency, we reduce the inherently quadratic complexity of the self-attention mechanism to linear, improving the model's image-processing speed. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, verifying that the method effectively alleviates the problem of excessive data volume and improves the performance of image data communication. Keywords: semantic communication, transformer, wavelet transform, data processing
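The wavelet-domain preprocessing step lends itself to a compact sketch: decompose, upsample the subbands with the two interpolation orders described, then inverse-transform to obtain a higher-resolution image. The version below assumes a Haar wavelet, a 2x upscaling factor, and a random stand-in image, none of which are specified by the paper:

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

# Stand-in single-band remote sensing image (values in [0, 1]).
img = np.random.rand(256, 256).astype(np.float64)

# One-level 2D wavelet decomposition into low- (cA) and high-frequency (cH, cV, cD) components.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# Bicubic interpolation (order=3) on the low-frequency component,
# bilinear interpolation (order=1) on the high-frequency components.
cA_up = zoom(cA, 2, order=3)
cH_up, cV_up, cD_up = (zoom(c, 2, order=1) for c in (cH, cV, cD))

# Inverse wavelet transform yields the upscaled, preprocessed image.
upscaled = pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), "haar")
print(img.shape, "->", upscaled.shape)   # (256, 256) -> (512, 512)
```

The preprocessed image then feeds the Vision-Transformer semantic encoder, which operates on the richer feature content the upscaling preserves.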
Procedia PDF Downloads 78