Search results for: Sophia Liang Zhou
70 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and the associated sizing algorithms, as well as the detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have rarely been reported in the literature; this error is investigated in the present study. Limitations in the ILI tool and the clustering process can cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the largest contributors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing ILI data with the corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada.
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
Procedia PDF Downloads 309
69 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission
Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao
Abstract:
Dual clutch transmissions (DCTs) offer high gearshift comfort. Hydraulic multi-disk clutches are the key components of a DCT, and their engagement determines shifting comfort. The prefill of the clutches establishes an initial engagement at which the clutch plates just contact each other without transmitting substantial torque from the engine; this initial engagement point is called the touch point. Open-loop control is typically implemented for the clutch prefill, but many uncertainties, such as oil temperature and clutch wear, significantly affect the prefill, possibly resulting in an inappropriate touch point. Underfill causes engine flare during the gearshift, while overfill causes clutch tie-up, both of which deteriorate the shifting comfort of the DCT. Therefore, it is important to make the clutch prefill adaptive with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system, including the variable force solenoid and clutch piston, is presented and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill. The former has a great effect on the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gear shifting, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the clutch slip measured during the current gearshift, improving the next prefill process. The stable fill pressure is increased in proportion to the clutch slip in the case of underfill and decreased by a constant value in the case of overfill. The entire strategy is designed in Simulink/Stateflow and implemented in the transmission control unit with optimization.
Road vehicle test results have shown that the strategy realizes its adaptive capability and improves shifting comfort.
Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid
Procedia PDF Downloads 308
68 CFD-DEM Modelling of Liquid Fluidizations of Ellipsoidal Particles
Authors: Esmaeil Abbaszadeh Molaei, Zongyan Zhou, Aibing Yu
Abstract:
The applications of liquid fluidization have increased in many industries, such as particle classification, backwashing of granular filters, crystal growth, leaching and washing, and bioreactors, owing to highly efficient liquid–solid contact, favorable mass and heat transfer, high operational flexibility, and reduced back-mixing of phases. In most of these multiphase operations, the particle properties, i.e. size, density, and shape, may change during the process because of attrition, coalescence, or chemical reactions. Previous studies, whether experimental or numerical, have mainly focused on liquid-solid fluidized beds containing spherical particles; the role of particle shape in the hydrodynamics of liquid fluidized beds is still not well understood. A three-dimensional Discrete Element Model (DEM) and Computational Fluid Dynamics (CFD) are coupled to study the influence of particle shape on particle and liquid flow patterns in liquid-solid fluidized beds. In the simulations, ellipsoidal particles are used to study the shape factor, since they can represent a wide range of particle shapes, from oblate through spherical to prolate. Different particle shapes, from oblate (disk-shaped) to elongated (rod-shaped), are selected to investigate the effect of aspect ratio on flow characteristics such as the general particle and liquid flow pattern, pressure drop, and particle orientation. First, the model is verified against experimental observations; then more detailed analyses are made. It was found that spherical particles showed a uniform particle distribution in the bed, which resulted in a uniform pressure drop along the bed height. For particles with aspect ratios less than one (disk-shaped), however, some particles were carried into the freeboard region, and the interface between the bed and the freeboard was not easy to determine. A few particles also tended to leave the bed.
On the other hand, prolate particles showed different behaviour in the bed. They caused an unstable interface, and some flow channeling was observed at low liquid velocities. Because of the non-uniform particle flow pattern for particles with aspect ratios lower (oblate) and higher (prolate) than one, the pressure drop distribution in the bed was not as uniform as that found for spherical particles.
Keywords: CFD, DEM, ellipsoid, fluidization, multiphase flow, non-spherical, simulation
Procedia PDF Downloads 309
67 The Sapir-Whorf Hypothesis and Multicultural Effects on Translators: A Case Study from Chinese Ethnic Minority Literature
Authors: Yuqiao Zhou
Abstract:
The Sapir-Whorf hypothesis (SWH) emphasizes the effect produced by language on people’s minds. According to linguistic relativity, language has evolved over the course of human life on earth, and, in turn, the acquisition of language shapes learners’ thoughts. Despite the considerable attention drawn by SWH, few scholars have attempted to analyse people’s thoughts via their literary works; yet the linguistic choices that create a narrative enable us to examine its writer’s thoughts. Even less work has been done on the impact of language on the minds of bilingual people. Internationalization has resulted in an increasing number of bilingual and multilingual individuals. In China, where more than one hundred languages are used for communication, most people are bilingual in Mandarin Chinese (the official language of China) and their own dialect. Taking as its corpus the ethnic minority myth Ge Sa-er Wang by Alai and its English translation by Goldblatt and Lin, this paper aims to analyse the effects of culture on bilingual people’s minds. It will first analyse Alai’s thoughts as reflected in the original version of Ge Sa-er Wang; next, it will examine the thoughts of the two translators by looking at the translation choices made in the English version; finally, it will compare the cultural influences evident in the thoughts of Alai, and of Goldblatt and Lin. Whereas Alai speaks two Sino-Tibetan languages, Mandarin Chinese and Tibetan, Goldblatt and Lin speak two languages from different families: Mandarin Chinese (a Sino-Tibetan language) and English (an Indo-European language). The results reveal two systems of thought existing in the translators’ minds; Alai’s text, on the other hand, does not reveal a significant influence from North China, where Mandarin Chinese originated. The findings reveal the inconsistency of a second language’s influence on people’s minds.
Notably, they suggest that the more different the two languages are, the greater the influence produced by the second-language culture on people’s thoughts. It is hoped that this research will expand the scope of SWH as well as shed light on future translation studies of ethnic minority literature.
Keywords: Sapir-Whorf hypothesis, cultural translation, culture-specific items, Ge Sa-er Wang, ethnic minority literature, Tibet
Procedia PDF Downloads 112
66 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian
Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu
Abstract:
Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy. The three combined depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees or pedigrees of early and medieval historical times. After the Song Dynasty, civilian society gradually emerged, and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities; thereafter, the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in a traditional village, have played a primary role in the identification of a clan and the maintenance of local social order. Chinese genealogies are rich in documentary materials. Take the Genealogy of Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of Wu Clan in Gaoqian comprises a whole set of materials, from the Foreword to the Family Trees, Family Rules, Family Rituals, Family Graces and Glories, Ode to an Ancestor’s Portrait, Manual for the Ancestor Temple, documents on great men of the clan, works written by learned men of the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity in number and the difficulty of reading them, genealogies seldom fall within the horizons of common people. This paper, focusing on the case of the Genealogy of Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village.
Based on this, the paper goes further to explore general methods of transferring physical genealogies into digital ones, and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies such as family-tree software, multimedia narratives, animation design, GIS applications, and e-book creators.
Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS
Procedia PDF Downloads 221
65 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production.
This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
Procedia PDF Downloads 36
64 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory
Authors: Liqin Zhang, Liang Yan
Abstract:
This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered as a single agent connected through a fixed and undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, a distributed leader-following method is presented. The improved control protocol takes the importance of each motor’s speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by optimizing the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. Second, in practical engineering, simplified models such as the single-integrator and double-integrator are unrealistic, and previous algorithms require the leader’s acceleration information to be available to all followers whenever the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which dispenses with the leader’s acceleration. The presented scheme optimizes synchronization performance and provides satisfactory robustness. Third, the existing algorithms can obtain a stable synchronous system; however, the obtained stable system may encounter disturbances that destroy the synchronization. To address this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speeds and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors.
The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results have shown that all followers asymptotically converge to a consistent state even when one follower fails to follow the virtual leader during a large disturbance, which illustrates the good synchronization control accuracy.
Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization
Procedia PDF Downloads 124
63 A Single-Use Endoscopy System for Identification of Abnormalities in the Distal Oesophagus of Individuals with Chronic Reflux
Authors: Nafiseh Mirabdolhosseini, Jerry Zhou, Vincent Ho
Abstract:
The dramatic global rise in acid reflux has led to oesophageal adenocarcinoma (OAC) becoming the fastest-growing cancer in developed countries. While gastroscopy with biopsy is used to diagnose OAC patients, this labour-intensive and expensive process is not suitable for population screening. This study aims to design, develop, and implement a minimally invasive system to capture optical data of the distal oesophagus for rapid screening of potential abnormalities. To develop the system and understand user requirements, a user-centric approach employing co-design strategies was taken. Target user segments were identified, and 38 patients and 14 health providers were interviewed. Next, the technical requirements were developed in consultation with industry. A minimally invasive optical system was designed and developed with patient comfort in mind. The system consists of a sensing catheter, a controller unit, and an analysis program. The procedure takes only 10 minutes to perform and does not require cleaning afterward, since the catheter is single-use. A prototype system was evaluated for safety and efficacy in both laboratory and clinical performance. The prototype performed successfully when submerged in simulated gastric fluid, showing no evidence of erosion after 24 hours. The system effectively recorded a video of the mid-distal oesophagus of a healthy volunteer (34-year-old male). The recorded images were used to develop an automated program to identify abnormalities in the distal oesophagus. Further data from a larger clinical study will be used to train the automated program. This system allows for quick visual assessment of the lower oesophagus in primary care settings and can serve as a screening tool for oesophageal adenocarcinoma. In addition, the system can be coupled with 24-hour ambulatory pH monitoring to better correlate oesophageal physiological changes with reflux symptoms.
It can also provide additional information on lower oesophageal sphincter function, such as opening times and bolus retention.
Keywords: endoscopy, MedTech, oesophageal adenocarcinoma, optical system, screening tool
Procedia PDF Downloads 87
62 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that provides write-once-run-anywhere portability and high execution performance for modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed as an abstraction of the cooperation of the heterogeneous processors, supporting task partitioning, communication, and synchronization. At first run, the intermediate language, represented by a data flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed for implementations on various combinations of these processors.
The experimental results show that the heterogeneous computing system achieves, with less than 35% of the resources, performance similar to that of the pure-FPGA implementation at approximately the same energy efficiency.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 118
61 A Framework of Virtualized Software Controller for Smart Manufacturing
Authors: Pin Xiu Chen, Shang Liang Chen
Abstract:
A virtualized software controller is developed in this research to replace traditional hardware control units. The virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed to a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework.
After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication. The communication time intervals at each stage are calculated using the C++ chrono library to measure the time difference for each command transmission. The test results are organized and presented in the full text.
Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing
Procedia PDF Downloads 82
60 Exo-III Assisted Amplification Strategy through Target Recycling of Hg²⁺ Detection in Water: A GNP Based Label-Free Colorimetry Employing T-Rich Hairpin-Loop Metallobase
Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He
Abstract:
Due to the deleterious environmental and health effects of Hg²⁺ ions, various online detection methods have been developed by researchers, apart from the traditional analytical tools. Biosensors, especially labeled, label-free, colorimetric, and optical sensors, have advanced toward sensitive detection. However, there remains a gap in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity, using unmodified gold nanoparticles (uGNPs) as the indicator. Two T-rich metallobase hairpin-loop structures, 5’-CTT TCA TAC ATA GAA AAT GTA TGT TTG-3’ (HgS1) and 5’-GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G-3’ (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). uGNPs synthesized at laboratory scale were utilized in the analysis. The DNA sequences have T-rich bases at both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments then adsorb onto the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in colour between blue (the aggregated state in the absence of Hg²⁺) and pink (the dispersed state in the presence of Hg²⁺, with adsorption of ssDNA fragments) can be observed and analyzed through UV spectrometry.
An ultrasensitive quantitative nanosensor employing Exo-III assisted target recycling of mercury ions through label-free colorimetry, with nanomolar detection using uGNPs, has been achieved, and it is under further optimization to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute toward uGNP-based ultrasensitive, rapid, onsite, label-free colorimetric detection.
Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification
Procedia PDF Downloads 311
59 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed in this paper to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios.
By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
Procedia PDF Downloads 23
58 Being an English Language Teaching Assistant in China: Understanding the Identity Evolution of Early-Career English Teacher in Private Tutoring Schools
Authors: Zhou Congling
Abstract:
The integration of private tutoring has emerged as an indispensable facet in the acquisition of language proficiency beyond formal educational settings. Notably, there has been a discernible surge in the demand for private English tutoring, specifically geared towards the preparation for internationally recognized gatekeeping examinations, such as IELTS, TOEFL, GMAT, and GRE. This trajectory has engendered an escalating need for English Language Teaching Assistants (ELTAs) operating within the realm of Private Tutoring Schools (PTSs). The objective of this study is to unravel the intricate process by which these ELTAs formulate their professional identities in the nascent stages of their careers as English educators, as well as to delineate their perceptions regarding their professional trajectories. The construct of language teacher identity is inherently multifaceted, shaped by an amalgamation of individual, societal, and cultural determinants, exerting a profound influence on how language educators navigate their professional responsibilities. This investigation seeks to scrutinize the experiential and influential factors that mold the identities of ELTAs in PTSs, particularly post the culmination of their language-oriented academic programs. Employing a qualitative narrative inquiry approach, this study aims to delve into the nuanced understanding of how ELTAs conceptualize their professional identities and envision their future roles. The research methodology involves purposeful sampling and the conduct of in-depth, semi-structured interviews with ten participants. Data analysis will be conducted utilizing Barkhuizen’s Short Story Analysis, a method designed to explore a three-dimensional narrative space, elucidating the intricate interplay of personal experiences and societal contexts in shaping the identities of ELTAs. 
The anticipated outcomes of this study are poised to contribute substantively to a holistic comprehension of ELTA identity formation, holding practical implications for diverse stakeholders within the private tutoring sector. This research endeavors to furnish insights into strategies for the retention of ELTAs and the enhancement of overall service quality within PTSs.
Keywords: China, English language teacher, narrative inquiry, private tutoring school, teacher identity
Procedia PDF Downloads 56
57 Large-Scale Experimental and Numerical Studies on the Temperature Response of Main Cables and Suspenders in Bridge Fires
Authors: Shaokun Ge, Bart Merci, Fubao Zhou, Gao Liu, Ya Ni
Abstract:
This study investigates the thermal response of main cables and suspenders in suspension bridges subjected to vehicle fires, integrating large-scale gasoline pool fire experiments with numerical simulations. Focusing on a suspension bridge in China, the research examines the impact of wind speed, pool size, and lane position on flame dynamics and temperature distribution along the cables. The results indicate that higher wind speeds and larger pool sizes markedly increase the mass burning rate, causing flame deflection and non-uniform temperature distribution along the cables. Under a wind speed of 1.56 m/s, maximum temperatures reached approximately 960 ℃ near the base in emergency lane fires and 909 ℃ at 1.6 m height for slow lane fires, underscoring the heightened thermal risk from emergency lane fires. The study recommends a zoning strategy for cable fire protection, suggesting a 0-12.8 m protection zone with a target temperature of 1000 ℃ and a 12.8-20.8 m zone with a target temperature of 700 ℃, both with a 90-minute fire resistance. This approach, based on precise temperature distribution data from experimental and simulation results, provides a vital reference for the fire protection design of suspension bridge cables. Understanding cable temperature response during vehicle fires is crucial for developing fire protection systems, as it dictates necessary structural protection, fire resistance duration, and maximum temperatures for mitigation. Challenges of controlling environmental wind in large-scale fire tests are also addressed, along with a call for further research on fire behavior mechanisms and structural temperature response in cable-supported bridges under varying wind conditions. 
In conclusion, the proposed zoning strategy enhances the theoretical understanding of near-field temperature response in bridge fires, contributing significantly to the field by supporting the design of passive fire protection systems for bridge cables and safeguarding their integrity under extreme fire conditions.
Keywords: bridge fire, temperature response, large-scale experiment, numerical simulations, fire protection
Procedia PDF Downloads 105
56 Hydrodynamics and Hydro-acoustics of Fish Schools: Insights from Computational Models
Authors: Ji Zhou, Jung Hee Seo, Rajat Mittal
Abstract:
Fish move in groups for foraging, reproduction, predator protection, and hydrodynamic efficiency. Schooling's predator protection involves the "many eyes" theory, which increases predator detection probability in a group. Reduced visual signature in a group scales with school size, offering per-capita protection. The ‘confusion effect’ makes it hard for predators to target prey in a group. These benefits, however, all focus on vision-based sensing, overlooking sound-based detection. Fish, including predators, possess sophisticated sensory systems for pressure waves and underwater sound. The lateral line system detects acoustic waves, the otolith organs sense infrasound, and sharks use an auditory system for low-frequency sounds. Among fish sound generation mechanisms, dipole sound arises from hydrodynamic pressure forces on the body surface of the fish, and this pressure would be affected by group swimming. Thus, swimming within a group could alter the hydrodynamic noise signature of fish and possibly serve as an additional protection afforded by schooling, but no studies to date have explored this effect. Biomimetic autonomous underwater vehicles (BAUVs) with fin-like propulsors could reduce acoustic noise without compromising performance, addressing issues of anthropogenic noise pollution in marine environments. Therefore, in this study, we used our in-house immersed-boundary-method flow and acoustic solver, ViCar3D, to simulate fish schools consisting of four swimmers in the classic ‘diamond’ configuration and discussed the feasibility of achieving higher swimming efficiency and controlling the far-field sound signature of the school. We examine the effects of the relative phase of fin flapping of the swimmers, and the simulation results indicate that the phase of fin flapping is a dominant factor in both thrust enhancement and the total sound radiated into the far field by a group of swimmers.
For fish in the "diamond" configuration, a suitable combination of the relative phase difference between pairs of leading fish and trailing fish can result in better swimming performance with significantly lower hydroacoustic noise.
Keywords: fish schooling, biopropulsion, hydrodynamics, hydroacoustics
Procedia PDF Downloads 61
55 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor’s response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR: the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1, and the radiometric resolution is calculated to be about 0.1845 W·sr−1m−2µm−1.
Moreover, in order to validate the results, the measured radiance is compared with the radiance predicted by a radiative transfer code over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively. It is noted that the relative error of the calibration is within 6.6%.
Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
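As a quick numeric cross-check of the dynamic-range arithmetic above, the sketch below plugs the reported coefficients into L = G × DN + B. The quantization number n = 12 is an assumption on our part; it is the value consistent with the reported high point of about 30.5 W·sr−1m−2µm−1.

```python
# Cross-check of the in-flight calibration arithmetic reported above.
# Assumption: n = 12 bits of quantization (not stated in the abstract;
# it is the value that reproduces the reported high point of ~30.5).

G = 0.0083   # gain, W·sr^-1·m^-2·µm^-1 per DN (reported)
B = -3.5     # offset, W·sr^-1·m^-2·µm^-1 (reported)
n = 12       # quantization bits (assumed)

def radiance(dn: int) -> float:
    """At-sensor radiance from a digital number via L = G*DN + B."""
    return G * dn + B

dn_high = 2**n - 1          # largest representable DN
L_high = radiance(dn_high)  # high point of the dynamic range, LH
L_low = radiance(0)         # low point LL (DN = 0), which equals B

print(f"DN_H = {dn_high}, L_H = {L_high:.1f}, L_L = {L_low:.1f}")
```

With n = 12, DN_H = 4095 and the high point evaluates to about 30.5, matching the abstract's figure to the reported precision.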
Procedia PDF Downloads 270
54 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System
Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu
Abstract:
The Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a huge, thick gravel layer has developed there, with high gravel content, wide distribution, and large variation in gravel size, resulting in strong heterogeneity. Consequently, the drill string vibrates severely and the drill bit wears seriously while drilling, which greatly reduces the rock-breaking efficiency, and a complex load state of impact combined with three-dimensional in-situ stress acts on the rock at the bottom of the hole. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis of engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few works on conglomerate have been published, especially under dynamic load. On this basis, a 3D SHPB system, in which three-dimensional prestress can be applied to simulate the in-situ stress state, is adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: when the three-dimensional prestress is 0 and the loading strain rate is 81.25~228.42 s-1, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic-to-static strength growth factor is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength, and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, while the prestress is less than the critical value, the dynamic strength and the loading strain rate increase linearly; otherwise, the strength decreases slightly and the strain rate decreases rapidly.
In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress, and the trends reverse after it. The dynamic strength of the conglomerate can be reduced appropriately by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in strata rich in gravel. The research has important reference significance for speed-increasing technology and theoretical research while drilling in gravel layers.
Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress
Procedia PDF Downloads 209
53 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry
Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
In a world full of rapid changes, people often need a set of skills to help them navigate an ever-changing workscape. These skills, often known as “future-oriented skills,” include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable to multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits of making sense of multiple perspectives, encouraging critical thinking, and developing learners’ capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk taking/break rules, 3) create/design, and 4) vision/imagine. To do so, an exploratory literature review was conducted to better understand the dimensions of Imagining. This included deep-diving into the history of the creation of the Map of Dialogical Inquiry and a review of how “Imagining” has been conceptually defined in the fields of social psychology, education, and beyond. Then, we synthesised and validated scales measuring the dimensions of Imagination and related concepts like creativity, divergent thinking, regulatory focus, and instrumental risk. Thereafter, items were adapted from the aforementioned scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale with an iterative procedure of item removal.
Reliability and validity of the scale’s dimensions were sought through measurements of Cronbach’s alpha, convergent validity, and discriminant validity. While CFA found that the distinction of Imagining’s four dimensions could not be validated, the scale established high reliability with a Cronbach’s alpha of .96. In addition, the convergent validity of the Imagining Scale was established. A lack of strong discriminant validity may point to overlaps with other components of the Map of Dialogical Inquiry as a measure of learning. Thus, a holistic approach to forming the tool, encompassing all eight components, may be preferable.
Keywords: learning, education, imagining, pedagogy, dialogical teaching
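For readers unfamiliar with the reliability figure cited above, Cronbach's alpha can be computed directly from an item-score matrix. The sketch below is purely illustrative: the item scores are invented for demonstration, and the study's reported alpha of .96 comes from its own 250-participant dataset.

```python
# Illustrative computation of Cronbach's alpha for a multi-item scale.
# The ratings below are invented; they do not reproduce the study's data.

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item ratings."""
    k = len(scores[0])                       # number of items
    def var(xs):                             # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four respondents rating a three-item scale on a 1-5 Likert format.
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
]
alpha = cronbach_alpha(data)
print(f"alpha = {alpha:.2f}")
```

Alpha approaches 1 when items covary strongly relative to their individual variances, which is why a value like .96 indicates highly internally consistent items.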
Procedia PDF Downloads 92
52 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)
Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami
Abstract:
Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of progress in adhesive dentistry, composite restorations continue to fail due to secondary caries, which occur due to environmental factors in the oral cavity. Therefore, a precise assessment of the marginal sealing of restorations is highly required. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared at three locations, the mid-coronal, cervical, and root of bovine incisors, in two groups (Groups SE and PA). While self-etching adhesive (Clearfil SE Bond) was applied in both groups, Group PA was first pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sectional scans at 0°, 60°, and 120° were obtained from each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor gap progress. The average percentage of gap length was then calculated using image analysis software, and the difference between the group means was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: The results showed that pretreatment with phosphoric acid etching (Group PA) led to significantly larger gaps in the mid-coronal and cervical cavities compared to Group SE, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in root cavities were significantly larger than those in mid-coronal and cervical cavities within the same group.
This study investigated the effect of phosphoric acid on gap length progression in composite restorations. In conclusion, phosphoric acid etching treatment did not reduce gap formation in any region of the tooth. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when pre-etching treatment was added.
Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives
Procedia PDF Downloads 221
51 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the most numerous end-use technologies are the small-scale appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We will present tests of a burner designed for space heating and optimized for natural gas as increasing percentages of hydrogen (from 25% upwards) were burned, and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. Results will show emissions, temperature, and flame length as a function of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may be absolute in determining allowable hydrogen percentages. The first of these is flame blowback.
Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation
Procedia PDF Downloads 23
50 Maternal Exposure to Bisphenol A and Its Association with Birth Outcomes
Authors: Yi-Ting Chen, Yu-Fang Huang, Pei-Wei Wang, Hai-Wei Liang, Chun-Hao Lai, Mei-Lien Chen
Abstract:
Background: Bisphenol A (BPA) is commonly used in consumer products, such as the inner coatings of cans and polycarbonate bottles. BPA is considered to be an endocrine-disrupting substance (EDS) that interferes with normal human hormones and may cause adverse effects on human health. Pregnant women and fetuses are groups particularly susceptible to endocrine-disrupting substances. Prenatal exposure to BPA has been shown to affect the fetus through the placenta. Therefore, it is important to evaluate the potential health risk of fetal exposure to BPA during pregnancy. The aims of this study were (1) to determine the urinary concentration of BPA in pregnant women, and (2) to investigate the association between BPA exposure during pregnancy and birth outcomes. Methods: This study recruited 117 pregnant women and their fetuses from 2012 to 2014 from the Taiwan Maternal-Infant Cohort Study (TMICS). Maternal urine samples were collected in the third trimester, and questionnaires were used to collect the socio-demographic characteristics, eating habits, and medical conditions of the participants. Information about the birth outcomes of the fetus was obtained from medical records. As for chemical analysis, BPA concentrations in urine were determined by off-line solid-phase extraction ultra-performance liquid chromatography coupled with a Q-Tof mass spectrometer. The urinary concentrations were adjusted for creatinine. The association between maternal concentrations of BPA and birth outcomes was estimated using a logistic regression model. Results: The detection rate of BPA is 99%, and concentrations range from 0.16 to 46.90 μg/g. The mean (SD) BPA level is 5.37 (6.42) μg/g creatinine. The mean ± SD of the body weight, body length, head circumference, chest circumference, and gestational age at birth are 3105.18 ± 339.53 g, 49.33 ± 1.90 cm, 34.16 ± 1.06 cm, 32.34 ± 1.37 cm, and 38.58 ± 1.37 weeks, respectively.
After stratifying the exposure levels into two groups by the median, pregnant women in the higher exposure group had an increased risk of delivering newborns with lower body weight (OR=0.57, 95%CI=0.271-1.193), smaller chest circumference (OR=0.70, 95%CI=0.335-1.47), and shorter gestational age at birth (OR=0.46, 95%CI=0.191-1.114). However, none of the associations between BPA concentration and birth outcomes reached statistical significance (p < 0.05). Conclusions: This study presents prenatal BPA exposure profiles of pregnant women and infants in northern Taiwan. Women with higher BPA concentrations tend to give birth to newborns with lower body weight, smaller chest circumference, or shorter gestational age. More data will be included to verify the results. This report will also present the predictors of BPA concentrations for pregnant women.
Keywords: bisphenol A, birth outcomes, biomonitoring, prenatal exposure
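The odds ratios above come from the study's logistic regression model. For intuition about why confidence intervals that span 1.0 indicate non-significance, an odds ratio and its Wald 95% CI can be sketched from a 2×2 exposure-outcome table; the counts below are hypothetical, not the study's data.

```python
import math

# Odds ratio with a Wald 95% CI from a 2x2 table. The counts are
# hypothetical; the ORs in the abstract come from the study's own
# logistic regression on 117 mother-infant pairs.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: outcome+/outcome- in exposed; c,d: same in unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

or_, (lo, hi) = odds_ratio_ci(a=12, b=46, c=18, d=41)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# A CI that spans 1.0 means the association is not significant at
# p < .05, the same pattern reported for all three outcomes above.
```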
Procedia PDF Downloads 143
49 Process Flows and Risk Analysis for the Global E-SMC
Authors: Taeho Park, Ming Zhou, Sangryul Shim
Abstract:
With the emergence of the global economy, today’s business environment is more competitive than ever. Many supply chain (SC) strategies and operations have been significantly altered over the past decade to cope with the greater complexities and risks imposed on global business. First, offshoring and outsourcing are more widely adopted as operational strategies, and manufacturing continues to move to better locations to enhance competitiveness. Second, international operations are a challenge to a company’s SC system. Third, the products traded in the SC system are not just physical goods, but also digital goods (e.g., software, e-books, music, video materials). Three main flows are involved in fulfilling the activities in the SC system: the physical flow, the information flow, and the financial flow. Advances in the Internet and electronic communication technologies have enabled companies to perform the flows of SC activities in electronic formats, resulting in the advent of the electronic supply chain management (e-SCM) system. A SC system for digital goods is somewhat different from one for physical goods, but it involves many similar or identical SC activities and flows. For example, as with physical goods, many third parties are involved in producing digital goods, from components to final products. This research aims at identifying the process flows of both physical and digital goods in a SC system, and then investigating all risk elements involved in the physical, information, and financial flows during the fulfilment of SC activities. There are many risks inherent in the e-SCM system. Some risks may have a severe impact on a company’s business, and some occur frequently but are not detrimental enough to jeopardize a company.
Thus, companies should assess the impact and frequency of those risks, and then prioritize them in terms of severity, frequency, budget, and time so that they can be managed carefully. We identified risks involved in the global trading of physical and digital goods in four categories: environmental risk, strategic risk, technological risk, and operational risk. The significance of those risks was then investigated through a survey, which asked companies about the frequency and severity of the identified risks and whether they had faced those risks in the past. Since the characteristics and supply chain flows of digital goods vary by industry and country, it is more meaningful and useful to analyze risks by industry and country. To this end, more data in each industry sector and country should be collected, which could be accomplished in future research.
Keywords: digital goods, e-SCM, risk analysis, supply chain flows
Procedia PDF Downloads 421
48 Quantifying Processes of Relating Skills in Learning: The Map of Dialogical Inquiry
Authors: Eunice Gan Ghee Wu, Marcus Goh Tian Xi, Alicia Chua Si Wen, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual basis for learning processes. According to the Map, dialogical inquiry motivates complex thinking, dialogue, reflection, and learner agency. For instance, classrooms that incorporated dialogical inquiry enabled learners to construct more meaning in their learning, to engage in self-reflection, and to challenge their ideas with different perspectives. While the Map contributes to the psychology of learning, its qualitative approach makes it hard for both teachers and learners to track and compare learning processes over time: a qualitative approach typically relies on open-ended responses, which can be time-consuming and resource-intensive to analyze. With these concerns, the present research aimed to develop and validate a quantifiable measure for the Map. Specifically, the Map of Dialogical Inquiry reflects the eight different learning processes and perspectives employed during a learner’s experience. With a focus on interpersonal and emotional learning processes, the purpose of the present study is to construct and validate a scale to measure the “Relating” aspect of learning. According to the Map, the Relating aspect of learning contains four conceptual components: using intuition and empathy, seeking personal meaning, building relationships and meaning with others, and liking stories and metaphors. All components have been shown to benefit learning in past research. This research began with a literature review with the goal of identifying relevant scales in the literature. These scales were used as a basis for item development, guided by the four conceptual dimensions of the “Relating” aspect of learning, resulting in a pool of 47 preliminary items. All items were then administered to 200 American participants via an online survey, along with other scales of learning. The dimensionality, reliability, and validity of the “Relating” scale were assessed.
Data were submitted to a confirmatory factor analysis (CFA), revealing four distinct components and their items. Items with lower factor loadings were removed in an iterative manner, resulting in 34 items in the final scale. The CFA also revealed that the “Relating” scale was a four-factor model, following the four distinct components described in the Map of Dialogical Inquiry. In sum, this research developed a quantitative scale for the “Relating” aspect of the Map of Dialogical Inquiry. By representing learning as numbers, users such as educators and learners can track, evaluate, and compare learning processes over time in an efficient manner. More broadly, this scale may also be used as a learning tool in lifelong learning.
Keywords: lifelong learning, scale development, dialogical inquiry, relating, social and emotional learning, socio-affective intuition, empathy, narrative identity, perspective taking, self-disclosure
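The iterative item-removal step described above can be sketched as a loading-threshold filter. The loading matrix and the 0.40 cutoff below are invented for illustration; in practice the loadings would be re-estimated by CFA software after each removal.

```python
# Sketch of iterative item removal by factor loading, in the spirit of
# trimming a 47-item pool to a final scale. Loadings and the 0.40
# threshold are invented; real CFA loadings come from dedicated software.

def prune_items(loadings, threshold=0.40):
    """Drop items whose largest absolute loading is below threshold.

    loadings: dict mapping item name -> list of loadings (one per factor).
    Removes the weakest item first, one at a time, then re-checks.
    """
    kept = dict(loadings)
    while kept:
        worst = min(kept, key=lambda it: max(abs(l) for l in kept[it]))
        if max(abs(l) for l in kept[worst]) >= threshold:
            break                 # every remaining item loads adequately
        del kept[worst]
    return kept

loadings = {
    "item_1": [0.72, 0.10, 0.05, 0.08],
    "item_2": [0.15, 0.66, 0.12, 0.04],
    "item_3": [0.21, 0.18, 0.25, 0.19],   # weak on every factor
    "item_4": [0.05, 0.09, 0.03, 0.81],
}
survivors = prune_items(loadings)
print(sorted(survivors))  # item_3 falls below the 0.40 cutoff
```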
Procedia PDF Downloads 142
47 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance
Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder
Abstract:
Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that link adiposity to subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension, and other cardiovascular diseases. This study applies longitudinal, quantitative, high-throughput multi-omics (genomics, epigenomics, transcriptomics, glycoproteomics, etc.) methodologies to blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three time points per patient: baseline, peak weight, and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before their samples were processed via longitudinal multi-omic technologies. This comparative study revealed a wealth of biomolecular changes associated with weight gain, identified using machine learning, clustering, and network analysis methods. Pathways of interest included those involved in lipid remodeling, acute inflammatory response, and glucose metabolism. Some of these biomolecules returned to baseline levels as the patients returned to normal weight, whilst some remained elevated. IR patients exhibited key differences in inflammatory response regulation compared to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified on various levels: in gene expression, epigenetic change, transcriptional regulation, and glycosylation.
This study not only contributed new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies for producing and processing data across many comprehensive omics levels.
Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes
Procedia PDF Downloads 429
46 Low SPOP Expression and High MDM2 Expression Are Associated with Tumor Progression and Predict Poor Prognosis in Hepatocellular Carcinoma
Authors: Chang Liang, Weizhi Gong, Yan Zhang
Abstract:
Purpose: Hepatocellular carcinoma (HCC) is a malignant tumor with a high mortality rate and poor prognosis worldwide. Murine double minute 2 (MDM2) regulates the tumor suppressor p53, increasing cancer risk and accelerating tumor progression. Speckle-type poxvirus and zinc finger protein (SPOP), a key subunit of the Cullin-RING E3 ligase, inhibits tumorigenesis and progression through the ubiquitination of its downstream substrates. This study aimed to clarify whether SPOP and MDM2 are mutually regulated in HCC and how SPOP and MDM2 correlate with the prognosis of HCC patients. Methods: First, the expression of SPOP and MDM2 in HCC tissues was examined using the TCGA database. Then, 53 paired samples of HCC tumor and adjacent tissues were collected to evaluate the expression of SPOP and MDM2 using immunohistochemistry. The chi-square test or Fisher’s exact test was used to analyze the relationship between clinicopathological features and the expression levels of SPOP and MDM2. In addition, Kaplan–Meier curve analysis and the log-rank test were used to investigate the effects of SPOP and MDM2 on the survival of HCC patients. Finally, a multivariate Cox proportional hazards regression model was used to analyze whether the expression levels of SPOP and MDM2 were independent risk factors for the prognosis of HCC patients. Results: Bioinformatics analysis revealed that low expression of SPOP and high expression of MDM2 were related to worse prognosis in HCC patients. The expression of SPOP and MDM2 showed opposite trends in relation to tumor stem-like features. Immunohistochemistry showed that SPOP protein was significantly downregulated while MDM2 protein was significantly upregulated in HCC tissue compared to para-cancerous tissue. Tumors with low SPOP expression were associated with worse T stage and Barcelona Clinic Liver Cancer (BCLC) stage, while tumors with high MDM2 expression were associated with worse T stage, M stage, and BCLC stage.
Kaplan–Meier curves showed that HCC patients with high SPOP expression and low MDM2 expression had better survival than those with low SPOP expression and high MDM2 expression (P < 0.05). A multivariate Cox proportional hazards regression model confirmed that a high MDM2 expression level was an independent risk factor for poor prognosis in HCC patients (P < 0.05). Conclusion: SPOP protein was significantly downregulated, while MDM2 was significantly upregulated, in HCC. Low expression of SPOP and high expression of MDM2 were associated with malignant progression and poor prognosis in HCC patients, indicating a potential therapeutic target for HCC.
Keywords: hepatocellular carcinoma, murine double minute 2, speckle-type POX virus and zinc finger protein, ubiquitination
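The Kaplan–Meier comparison described in this abstract can be sketched with a minimal product-limit estimator. The follow-up times and event indicators below are hypothetical illustrations, not the study's 53-patient cohort.

```python
# Minimal Kaplan-Meier estimator, of the kind used to compare survival
# between SPOP/MDM2 expression groups. Data are invented for illustration.

def kaplan_meier(times, events):
    """Return (time, S(t)) pairs; events[i] = 1 for death, 0 for censoring."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        # group all subjects sharing this event time
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk   # product-limit update
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

# hypothetical follow-up times (months) and death indicators
times = [5, 8, 12, 12, 20, 25, 30, 36]
events = [1, 1, 1, 0, 1, 0, 0, 1]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

A log-rank test, as used in the study, would then compare two such curves between expression groups.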
Procedia PDF Downloads 143
45 Industry Symbiosis and Waste Glass Upgrading: A Feasibility Study in Liverpool Towards Circular Economy
Authors: Han-Mei Chen, Rongxin Zhou, Taige Wang
Abstract:
Glass is widely used in everyday life, from glass bottles for beverages to architectural glass for various forms of glazing. Although most used glass is recycled in the UK, the single-use-then-recycle procedure generates substantial waste, as it takes intact glass through smashing, re-melting, and remanufacturing. These processes bring massive energy consumption and a large loss of embodied energy and economic value compared with re-use, which is closer to a 'zero carbon' target. As a tourism city, Liverpool consumes more glass bottles than most less leisure-focused cities, so it is vital for Liverpool to find a low-carbon upgrading approach for single-use glass bottles. This project aims to assess the feasibility of industrial symbiosis and an upgrading framework for glass and to investigate ways of achieving them. It is significant for Liverpool's future industrial strategy, since it provides an opportunity to target post-COVID economic recovery through industrial symbiosis and upgraded waste management in Liverpool in response to the climate emergency. In addition, it will inform local government policy on glass bottle reuse and recycling in North West England and serve as good practice to be recommended to other areas of the UK. First, a critical literature review was conducted of glass waste strategies in the UK and industrial symbiosis practices worldwide. Second, mapping, data collection, and analysis revealed the current life cycle chain and the strong links between glass reuse and upgrading potentials, via site visits to 16 local waste recycling centres. The results demonstrate how key factors influence the development of a circular industrial symbiosis business model for beverage glass bottles. The current waste management procedures of the glass bottle industry, its business model, supply chain, and material flow have been reviewed.
The various potential opportunities for up-valuing glass bottles have been investigated towards an industrial symbiosis in Liverpool. Finally, an up-valuing business model has been developed for an industrial symbiosis framework of glass in Liverpool. For glass bottles, there are two possibilities: 1) focus on upgrading processes towards re-use rather than single-use and recycling, and 2) focus on 'smart' re-use and recycling, leading to optimised value in other sectors and creating a wider industrial symbiosis for a multi-level, circular economy.
Keywords: glass bottles, industry symbiosis, smart re-use, waste upgrading
Procedia PDF Downloads 106
44 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation
Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang
Abstract:
The belief that there is a strong relationship between narrative reading and morality has become a basic assumption of scholars, philosophers, and cultural critics. The virtuality constructed by literary novels invites readers to treat the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. The virtual narrative, combined with literary characteristics, is therefore often considered a "moral laboratory." Well-established findings reveal that people show fewer lying and deceptive behaviors in the morning than in the afternoon, known as the morning morality effect. Because morality is a limited self-regulation resource, it is gradually depleted over the rhythm of the day under this effect; it can also be compensated and restored in various ways, such as eating and sleeping. As a common form of entertainment in modern society, literary novel reading gives people virtual experience and emotional catharsis, much like a relaxing afternoon tea that helps people break away from fast-paced work, restore energy, and relieve stress during a short period of leisure. In this paper, inspired by compensatory control theory, we ask whether reading literary novels in a digital environment can replenish a kind of spiritual energy for self-regulation and thereby compensate for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text generated by readers during digital reading to represent readers' reading attention. We then recognized the semantics of the annotations, calculated the moral motivation they express, and investigated the fine-grained dynamics of moral motivation across each time slot within the 24 hours of a day.
Comprehensively comparing divisions into different time intervals, extensive experiments showed that the moral motivation reflected in afternoon annotations is significantly higher than in morning annotations. The results robustly verify the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identified that such moral compensation lasts until 14:00 in the afternoon and 21:00 in the evening. Interestingly, the unit used to divide time intervals affects the identification of moral rhythms: dividing the day into four-hour slots yields more insight into moral rhythms than three-hour or six-hour slots.
Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation
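The time-slot analysis described above can be sketched as follows: annotations are bucketed by hour of day into fixed-width slots, and per-slot mean moral-motivation scores are compared. The hours, scores, and the `slot_means` helper are invented for illustration and are not from the paper's annotation corpus.

```python
# Illustrative sketch: bucket annotation timestamps into four-hour slots
# and compare mean moral-motivation scores across slots. All data are
# hypothetical; the paper computes motivation from real social annotations.
from collections import defaultdict

def slot_means(records, slot_hours=4):
    """records: (hour_of_day, motivation_score) pairs -> mean score per slot."""
    buckets = defaultdict(list)
    for hour, score in records:
        # key each record by the starting hour of its slot, e.g. 13 -> 12
        buckets[hour // slot_hours * slot_hours].append(score)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

records = [(9, 0.42), (10, 0.45), (11, 0.40),   # morning annotations
           (13, 0.58), (14, 0.61), (15, 0.57),  # afternoon annotations
           (20, 0.55), (21, 0.52)]              # evening annotations
means = slot_means(records)
print(means)
```

With these invented numbers, the 12:00-16:00 slot mean exceeds the 08:00-12:00 slot mean, mirroring the afternoon effect the paper reports; changing `slot_hours` to 3 or 6 re-buckets the same data, illustrating how the interval unit shapes the observed rhythm.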
Procedia PDF Downloads 149
43 Cell Adhesion, Morphology and Cytokine Expression of Synoviocytes Can Be Altered on Different Nano-Topographic Oxidized Silicon Nanosponges
Authors: Hung-Chih Hsu, Pey-Jium Chang, Ching-Hsein Chen, Jer-Liang Andrew Yeh
Abstract:
Osteoarthritis (OA) is a common disorder in rehabilitation clinics. Its main characteristics include joint pain, localized tenderness and enlargement, joint effusion, cartilage destruction, loss of adhesion of the perichondrium, and synovium hyperplasia. Synoviocyte inflammation may cause local tenderness and effusion, and inflammatory cytokines may also play an important role in joint pain, cartilage destruction, and decreased adhesion of the perichondrium to the bone. Treatments for osteoarthritis include non-steroidal anti-inflammatory drugs (NSAIDs), glucosamine supplementation, hyaluronic acid, arthroscopic debridement, and total joint replacement. Total joint replacement is commonly used in patients with severe OA who fail to respond to pharmacological treatment. However, some patients who received surgery had serious adverse events, including instability of the implants due to insufficient adhesion to the adjacent bony tissue or synovial inflammation. We developed nano-topographic oxidized silicon nanosponges using various chemical treatments to produce nanometer-scale thickness differences, in order to study cell-environment interactions in vitro, such as the alterations of cell adhesion, morphology, and extracellular matrix secretion in the pathogenesis of osteoarthritis. Cytokine studies, including growth factors, reactive oxygen species, reactive inflammatory mediators (such as nitric oxide and prostaglandin E2), extracellular matrix (ECM) degradation enzymes, and collagen synthesis, will also be observed and discussed. Extracellular and intracellular expression of transforming growth factor beta (TGF-β) will be studied by reverse transcription-polymerase chain reaction (RT-PCR). The degradation of the ECM will be assessed by the activity ratio of matrix metalloproteinases (MMPs) to tissue inhibitors of metalloproteinases (TIMPs) measured by enzyme-linked immunosorbent assay (ELISA).
When rabbit synoviocytes were cultured on these nano-topographic structures, they demonstrated a higher cell adhesion rate, decreased expression of MMP-2/9 and PGE2, and increased expression of TGF-β on nano-topographic oxidized silicon nanosponges compared with planar oxidized silicon. These results show that cell behavior and cytokine production can be influenced by the physical characteristics of different nano-topographic structures. Our study demonstrates the possibility of manipulating cell behavior with these nano-topographic biomaterials.
Keywords: osteoarthritis, synoviocyte, oxidized silicon surfaces, reactive oxygen species
Procedia PDF Downloads 386
42 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under large data volumes, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitter, the semantic information of remote sensing images must be extracted, but there are some problems: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN, to extract image semantic features. In this paper, we first pre-process the remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose each image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally apply the inverse wavelet transform to obtain the pre-processed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images.
The Vision-Transformer structure trains better on huge data volumes and extracts better image semantic features, adopting a multi-layer self-attention mechanism to better capture the correlations between semantic features and reduce redundant features. Secondly, to improve coding efficiency, we reduce the complexity of the self-attention mechanism from quadratic to linear, improving the model's image data processing speed. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
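The wavelet pre-processing step can be sketched with a self-contained single-level 2-D Haar transform in NumPy. This is a minimal stand-in, assuming a single decomposition level and a Haar basis; the paper's pipeline would additionally apply bilinear interpolation to the high-frequency sub-bands and bicubic interpolation to the low-frequency sub-band before the inverse transform, a step omitted here for brevity.

```python
# Minimal single-level 2-D Haar DWT/inverse sketch of the wavelet
# decompose -> (interpolate sub-bands) -> reconstruct pipeline.
# NumPy only; input height and width must be even.
import numpy as np

S = np.sqrt(2.0)

def haar2(img):
    """Single-level 2-D Haar DWT -> (LL, LH, HL, HH) sub-bands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / S   # row-wise low-pass
    hi = (img[:, 0::2] - img[:, 1::2]) / S   # row-wise high-pass
    LL = (lo[0::2] + lo[1::2]) / S           # column-wise low of low
    LH = (lo[0::2] - lo[1::2]) / S
    HL = (hi[0::2] + hi[1::2]) / S
    HH = (hi[0::2] - hi[1::2]) / S
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2 (perfect reconstruction)."""
    rows, cols = LL.shape
    lo = np.empty((rows * 2, cols)); hi = np.empty((rows * 2, cols))
    lo[0::2], lo[1::2] = (LL + LH) / S, (LL - LH) / S
    hi[0::2], hi[1::2] = (HL + HH) / S, (HL - HH) / S
    out = np.empty((rows * 2, cols * 2))
    out[:, 0::2], out[:, 1::2] = (lo + hi) / S, (lo - hi) / S
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar2(img)
rec = ihaar2(LL, LH, HL, HH)
assert np.allclose(rec, img)  # lossless round trip
```

In a fuller implementation the sub-bands would be resized (e.g. with a library such as PyWavelets plus an interpolation routine) before `ihaar2`, yielding the resolution-enhanced image fed to the semantic encoder.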
Procedia PDF Downloads 78
41 A Description Analysis of Mortality Rate of Human Infection with Avian Influenza A(H7N9) Virus in China
Authors: Lei Zhou, Chao Li, Ruiqi Ren, Dan Li, Yali Wang, Daxin Ni, Zijian Feng, Qun Li
Abstract:
Background: Since the first human case of infection with avian influenza A(H7N9) virus was reported in China on 31 March 2013, five epidemics have been observed in China between February 2013 and September 2017. Though the overall mortality rate of H7N9 has remained as high as around 40% throughout the five epidemics, the mortality rate in mainland China varied by province. We conducted a descriptive analysis of the mortality rates of H7N9 cases to explore the varying severity of the disease and to provide clues for further analyses of potential factors associated with its severity. Methods: The data for analysis originated from the National Notifiable Infectious Disease Report and Surveillance System (NNIDRSS). The surveillance system and identification procedure for H7N9 infection have not changed in China since 2013, and the definition of a confirmed H7N9 case is the same as in previous reports. Mortality rates of H7N9 cases are described and compared by time and location of reporting, age and sex, and genetic features of the H7N9 virus strains. Results: The overall mortality rate of H7N9 is 39.6% (608/1533); the male- and female-specific rates are 40.3% (432/1072) and 38.2% (176/461), respectively, with no significant difference between males and females. The age-specific mortality rates varied significantly by age group (χ² = 38.16, p < 0.001): mortality in the 20-60 age group (33.17%) and the over-60 age group (51.16%) was much higher than in the under-20 age group (5.00%). Considering the time of reporting, the mortality rates of cases reported in the first (40.57%) and fourth (42.51%) quarters of each year are significantly higher than those of cases reported in the second (36.02%) and third (27.27%) quarters (χ² = 75.18, p < 0.001). The geography-specific mortality rates also vary.
The mortality rates of H7N9 cases reported from Northeast China (66.67%) and Northwest China (56.52%) are significantly higher than those of cases reported from the remaining areas of mainland China; the mortality rate of cases reported from Central China is the lowest (34.38%). The mortality rates of H7N9 cases reported from rural (37.76%) and urban (38.96%) areas are similar. The mortality rate of cases infected with the highly pathogenic avian influenza A(H7N9) virus (48.15%) is higher than that of cases infected with the low pathogenic virus (37.57%), but the difference is not statistically significant. Preliminary analyses showed that age and some clinical complications, such as respiratory failure, heart failure, and septic shock, could be potential risk factors associated with the death of H7N9 cases. Conclusions: The mortality rates of H7N9 cases varied by age, sex, time of reporting, and geographical location in mainland China. Further in-depth analyses and field investigations of the factors associated with the severity of H7N9 cases should be considered.
Keywords: H7N9 virus, avian influenza, mortality, China
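The age-group comparison in this abstract rests on mortality rates and a chi-square test over a deaths/survivors contingency table; a minimal sketch follows. The overall rate uses the counts quoted above (608/1533), while the age-group table is reconstructed approximately from the quoted percentages and is an illustration, not the surveillance dataset itself.

```python
# Sketch of the descriptive comparison: mortality rates plus a Pearson
# chi-square statistic over an r x 2 [deaths, survivors] table.

def mortality_rate(deaths, cases):
    return deaths / cases

def chi_square(table):
    """Pearson chi-square statistic for an r x 2 table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# overall rate from the abstract: 608 deaths among 1533 cases (~39.6%)
print(round(mortality_rate(608, 1533), 3))

# approximate [deaths, survivors] rows for ages <20, 20-60, >60,
# reconstructed from the quoted 5.00%, 33.17%, and 51.16% rates
table = [[2, 38], [300, 604], [306, 292]]
print(round(chi_square(table), 2))
```

The statistic would then be referred to a chi-square distribution with (rows − 1) degrees of freedom to obtain the p-value reported in the abstract.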
Procedia PDF Downloads 243