Search results for: non uniform utility computing
474 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This work simulates the voltage drop and resistance of the explosion of copper wires of diameters 25, 40, and 100 µm surrounded by 1 bar nitrogen exposed to a 150 A current and before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished when the plasma is formed. This study shows the importance of considering radiation and heat conductivity in the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and is mixed with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and Net Emission Coefficient (NEC) radiation. At first, an initial voltage drop over the copper wire, current, and temperature distribution at the time of expansion is derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can be simulated utilizing 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current was carried by the vaporized wire material before it was dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. Electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor. The simulations show both radiation and heat conductivity should be considered for an adequate description of wire resistance, and gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves
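The breakdown estimate described above rests on integrating the net Townsend growth coefficient along a field line up to a streamer-inception threshold. A generic statement of that criterion, in notation assumed here rather than taken from the paper, is:

```latex
\int_{0}^{d} \left[\alpha\!\left(E(s),T,p\right) - \eta\!\left(E(s),T,p\right)\right]\,\mathrm{d}s \;\geq\; K,
\qquad K \approx 18\text{--}20,
```

where α and η are the ionization and attachment coefficients evaluated along a field line of length d, and K is the avalanche-size exponent of Meek's criterion (the exact threshold value depends on the gas), which is why the TdGC must be recomputed for each nitrogen/copper-vapor mixture ratio.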
Procedia PDF Downloads 89
473 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly
Authors: Alex Eldo Simon, Abhishek Yadav
Abstract:
This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death due to cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal computed tomography (CT) images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria for the study were cases of sudden death suspected to be caused by cardiac pathology, while exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and to evaluate the accuracy of using the cardiothoracic ratio (CTR) to detect an enlarged heart, the study generated receiver operating characteristic (ROC) curves. The cardiothoracic ratio (CTR) is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used criteria for CTR have been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a cardiothoracic ratio (CTR) ranging from above 0.50 up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was calculated as 0.52 ± 0.06. Among the 54 cases evaluated, the weight of the heart was measured, and the mean was calculated as 369.4 ± 99.9 grams. Out of the 54 cases evaluated, 12 were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy in traditional autopsy. The sensitivity of the hypertrophy test was found to be 55.56% (95% CI: 26.66, 81.12), the specificity was 84.44% (95% CI: 71.22, 92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1, 88.23). The limitation of the study was a low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.5%) may limit its diagnostic accuracy, and therefore, further studies with larger sample sizes and more diverse populations are needed to validate these findings.
Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio
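The reported sensitivity, specificity, and accuracy follow directly from a 2×2 confusion matrix. The cell counts below are a hypothetical reconstruction, back-calculated from the 12 PMCT-positive cases, the 9 autopsy-confirmed cases, and the quoted percentages; they are shown only to make the arithmetic explicit.

```python
# Hypothetical reconstruction of the 2x2 table implied by the reported figures:
# 9 autopsy-confirmed hypertrophy cases, 12 PMCT-positive (CTR > 0.57), n = 54.
tp, fn = 5, 4        # autopsy-positive cases detected / missed by CTR
fp, tn = 7, 38       # autopsy-negative cases flagged / cleared by CTR

sensitivity = tp / (tp + fn)                    # 5/9  = 55.56 %
specificity = tn / (tn + fp)                    # 38/45 = 84.44 %
accuracy = (tp + tn) / (tp + fn + fp + tn)      # 43/54 = 79.63 %

print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, accuracy={accuracy:.2%}")
```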
Procedia PDF Downloads 81
472 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration
Authors: Smaran Manchala
Abstract:
Homomorphic Encryption (HE) enables computations on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through the implementation of vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially in smaller, less computationally demanding operations. To assess performance, two neural network models, MLPN and CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure optimizations did not compromise results. According to the results of the trial runs, applying vector parallelization yielded a 2.63X efficiency increase overall, with a 1.83X performance increase for x86-64 over the ARM architecture. Overall, these results suggest that the application of vector parallelization in tandem with GPU acceleration significantly improves the efficiency of CKKS even while accounting for vector parallelization overhead, with potential impact on future zero-trust operations.
Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization
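The benchmarking protocol described above (1,000 repeats, Z-score outlier removal, speedup ratio) can be summarized in a short sketch. The timing values and the |z| < 3 cutoff below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def remove_outliers_z(times, z_thresh=3.0):
    """Drop runs whose z-score exceeds the threshold (assumption: |z| < 3 kept)."""
    t = np.asarray(times, dtype=float)
    z = (t - t.mean()) / t.std(ddof=1)
    return t[np.abs(z) < z_thresh]

# Hypothetical runtimes (seconds) for 1,000 repeated encrypted matrix operations
rng = np.random.default_rng(0)
baseline = rng.normal(1.00, 0.05, 1000)          # serial CKKS path
parallel = rng.normal(0.38, 0.03, 1000)          # vectorized + GPU path

b, p = remove_outliers_z(baseline), remove_outliers_z(parallel)
print(f"speedup ~ {b.mean() / p.mean():.2f}x")   # of the same order as the reported 2.63x
```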
Procedia PDF Downloads 27
471 Facilitating Knowledge Transfer for New Product Development in Portfolio Entrepreneurship: A Case Study of a Sodium-Ion Battery Start-up in China
Authors: Guohong Wang, Hao Huang, Rui Xing, Liyan Tang, Yu Wang
Abstract:
Start-ups are consistently under pressure to overcome the liabilities of newness and smallness. They must focus on assembling resources and engaging in constant renewal and repeated entrepreneurial activities to survive and grow. As an important form of resource, knowledge is vital to start-ups, helping them develop new products and thereby form competitive advantage. However, significant knowledge usually needs to be identified and exploited from external entities, which makes knowledge transfer difficult to achieve; with limited resources, it can be quite challenging for start-ups to balance the exploration and exploitation of knowledge. Research on knowledge transfer has become a relatively well-developed domain, indicating that knowledge transfer can be achieved through many patterns, yet it remains under-explored what processes and organizational practices help start-ups facilitate knowledge transfer for new product development in the context of portfolio entrepreneurship. Resource orchestration theory emphasizes the initiative and active management of the company or its managers to explain how resource utility is fulfilled, which helps in understanding the process of managing knowledge as a particular kind of resource in start-ups. Drawing on resource orchestration theory, this research aims to explore how knowledge transfer can be facilitated through resource orchestration. A qualitative single-case study of a sodium-ion battery new venture was conducted. The case company was sampled deliberately from representative industrial agglomeration areas in Liaoning Province, China. It is found that distinctive resource orchestration sub-processes are leveraged to facilitate knowledge transfer: (i) resource structuring makes knowledge available across the portfolio; (ii) resource bundling combines internal and external knowledge to form new knowledge; and (iii) resource harmonizing balances specific knowledge configurations across the portfolio. Meanwhile, by purposefully reallocating knowledge configurations to new product development in a certain new venture (exploration) and gradually adjusting knowledge configurations to be applied to existing products across the portfolio (exploitation), the resource orchestration processes as a whole keep the exploration and exploitation of knowledge balanced. This study contributes to the knowledge management literature by proposing a resource orchestration view and depicting how knowledge transfer can be facilitated through different resource orchestration processes and mechanisms. In addition, by revealing the balancing process of exploration and exploitation of knowledge, and stressing the significance of keeping exploration and exploitation of knowledge balanced in the context of portfolio entrepreneurship, this study also adds specific efforts to the entrepreneurship and strategy management literature.
Keywords: exploration and exploitation, knowledge transfer, new product development, portfolio entrepreneur, resource orchestration
Procedia PDF Downloads 126
470 Air Breakdown Voltage Prediction in Post-arcing Conditions for Compact Circuit Breakers
Authors: Jing Nan
Abstract:
The air breakdown voltage in compact circuit breakers is a critical factor in the design and reliability of electrical distribution systems. This voltage determines the threshold at which the air insulation between conductors will fail or 'break down,' leading to an arc. This phenomenon is highly sensitive to the conditions within the breaker, such as the temperature and the distance between electrodes. Typically, air breakdown voltage models have been reliable for predicting failure under standard operational temperatures. However, in post-arcing conditions, where temperatures can soar above 2000 K, these models face challenges due to the complex physics of ionization and electron behaviour at such high-energy states. Building upon the foundational understanding that the breakdown mechanism is initiated by free electrons and propelled by electric fields, which lead to ionization and, potentially, to avalanche or streamer formation, we acknowledge the complexity introduced by high-temperature environments. Recognizing the limitations of existing experimental data, a notable research gap exists in the accurate prediction of breakdown voltage at elevated temperatures, typically observed post-arcing, where temperatures exceed 2000 K. To bridge this knowledge gap, we present a method that integrates gap distance and high-temperature effects into air breakdown voltage assessment. The proposed model is grounded in the physics of ionization, accounting for the dynamic behaviour of free electrons which, under intense electric fields at elevated temperatures, lead to thermal ionization and potentially reach the threshold for streamer formation given by Meek's criterion. Employing the Saha equation, our model calculates equilibrium electron densities, adapting to the atmospheric pressure and the hot temperature regions indicative of post-arc temperature conditions. Our model is rigorously validated against established experimental data, demonstrating substantial improvements in predicting air breakdown voltage in the high-temperature regime. This work significantly improves the predictive power for air breakdown voltage under conditions that closely mimic operational stressors in compact circuit breakers. Looking ahead, the proposed methods are poised for further exploration in alternative insulating media, like SF6, enhancing the model's utility for a broader range of insulation technologies and contributing to the future of high-temperature electrical insulation research.
Keywords: air breakdown voltage, high-temperature insulation, compact circuit breakers, electrical discharge, Saha equation
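The equilibrium electron density referred to above is obtained from the Saha ionization equation; a standard single-ionization form, in notation assumed here, reads:

```latex
\frac{n_e\, n_i}{n_0} \;=\; \frac{2 g_i}{g_0}\left(\frac{2\pi m_e k_B T}{h^2}\right)^{3/2}\exp\!\left(-\frac{E_i}{k_B T}\right),
```

where n_e, n_i, and n_0 are the electron, ion, and neutral number densities, g_i/g_0 is the ratio of statistical weights, E_i is the ionization energy of the species, and T is the local gas temperature. At fixed (atmospheric) pressure the neutral density follows from the ideal gas law, so n_e rises steeply with T in the post-arc hot region, which is what drives the lower predicted breakdown voltages.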
Procedia PDF Downloads 84
469 Large-Scale Experimental and Numerical Studies on the Temperature Response of Main Cables and Suspenders in Bridge Fires
Authors: Shaokun Ge, Bart Merci, Fubao Zhou, Gao Liu, Ya Ni
Abstract:
This study investigates the thermal response of main cables and suspenders in suspension bridges subjected to vehicle fires, integrating large-scale gasoline pool fire experiments with numerical simulations. Focusing on a suspension bridge in China, the research examines the impact of wind speed, pool size, and lane position on flame dynamics and temperature distribution along the cables. The results indicate that higher wind speeds and larger pool sizes markedly increase the mass burning rate, causing flame deflection and non-uniform temperature distribution along the cables. Under a wind speed of 1.56 m/s, maximum temperatures reached approximately 960 ℃ near the base in emergency lane fires and 909 ℃ at 1.6 m height for slow lane fires, underscoring the heightened thermal risk from emergency lane fires. The study recommends a zoning strategy for cable fire protection, suggesting a 0-12.8 m protection zone with a target temperature of 1000 ℃ and a 12.8-20.8 m zone with a target temperature of 700 ℃, both with a 90-minute fire resistance. This approach, based on precise temperature distribution data from experimental and simulation results, provides a vital reference for the fire protection design of suspension bridge cables. Understanding cable temperature response during vehicle fires is crucial for developing fire protection systems, as it dictates necessary structural protection, fire resistance duration, and maximum temperatures for mitigation. Challenges of controlling environmental wind in large-scale fire tests are also addressed, along with a call for further research on fire behavior mechanisms and structural temperature response in cable-supported bridges under varying wind conditions. Conclusively, the proposed zoning strategy enhances the theoretical understanding of near-field temperature response in bridge fires, contributing significantly to the field by supporting the design of passive fire protection systems for bridge cables, safeguarding their integrity under extreme fire conditions.
Keywords: bridge fire, temperature response, large-scale experiment, numerical simulations, fire protection
Procedia PDF Downloads 16
468 Development of Stretchable Woven Fabrics with Auxetic Behaviour
Authors: Adeel Zulifqar, Hong Hu
Abstract:
Auxetic fabrics are a special kind of textile materials which possess a negative Poisson's ratio. Opposite to most conventional fabrics, auxetic fabrics get bigger in the transversal direction when stretched or get smaller when compressed. Auxetic fabrics are superior to conventional fabrics because of their counterintuitive properties, such as enhanced porosity under extension, excellent formability to a curved surface and high energy absorption ability. To date, auxetic fabrics have been produced based on two approaches. The first approach involves using auxetic fibre or yarn and weaving technology to fabricate auxetic fabrics. The other method is to fabricate auxetic fabrics using non-auxetic yarns; this method has attracted extraordinary interest from researchers in recent years. It is based on realizing auxetic geometries within the fabric structure. In the woven fabric structure, auxetic geometries can be realized by creating a differential shrinkage phenomenon within the fabric's structural unit cell. This phenomenon can be created by using loose and tight weave combinations within the unit cell of the interlacement pattern along with elastic and non-elastic yarns. Upon relaxation, the unit cell of the interlacement pattern acquires a non-uniform shrinkage profile due to the different shrinkage properties of the loose and tight weaves in the designed pattern, and the auxetic geometry is realized. The development of uni-stretch auxetic woven fabrics and bi-stretch auxetic woven fabrics by this method has already been reported. This study reports the development of another kind of bi-stretch auxetic woven fabric. The fabric is first designed by transforming the auxetic geometry into an interlacement pattern and then fabricated, using available conventional weaving technology and non-auxetic elastic and non-elastic yarns. The tensile tests confirmed that the developed bi-stretch auxetic woven fabrics exhibit a negative Poisson's ratio over a wide range of tensile strain. Therefore, it can be concluded that the auxetic geometry can be realized in the woven fabric structure by creating the phenomenon of differential shrinkage, and that bi-stretch woven fabrics made of non-auxetic yarns combining auxetic behaviour and stretchability can be obtained. Acknowledgement: This work was supported by the Research Grants Council of Hong Kong Special Administrative Region Government (grant number 15205514).
Keywords: auxetic, differential shrinkage, negative Poisson's ratio, weaving, stretchable
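The sign convention behind the tensile-test result can be stated compactly; the strain values in the worked example are illustrative, not measurements from this study.

```latex
\nu \;=\; -\,\frac{\varepsilon_{\text{transverse}}}{\varepsilon_{\text{axial}}},
\qquad \text{e.g. } \varepsilon_{\text{axial}} = +0.10,\; \varepsilon_{\text{transverse}} = +0.02
\;\Rightarrow\; \nu = -0.20,
```

so a fabric that widens while being stretched registers a negative Poisson's ratio, which is the auxetic signature measured over the tensile strain range.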
Procedia PDF Downloads 151
467 Bi-Criteria Vehicle Routing Problem for Possibility Environment
Authors: Bezhan Ghvaberidze
Abstract:
A multiple criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movements between customers are calculated by the constructed simulation interactive algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed by the Choquet finite averaging operator. The FVRP is reduced to the bi-criteria partitioning problem for the so-called "promising" routes, which are selected from all admissible closed routes. The convenient selection of the "promising" routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the algorithm DLX. The main objective was to present the new approach for the FVRP when there are difficulties while moving on the roads. This approach is called FVRP for extreme conditions (FVRP-EC) on the roads. Also, the aim of this paper was to construct the solving model of the constructed FVRP. Results are illustrated on a numerical example where all Pareto-optimal solutions are found. Also, an approach for the more complex model, FVRP with time windows, was developed. A numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.
Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory
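The ε-constraint step can be illustrated on a toy instance: minimize expected travel time subject to a lower bound ε on feasibility, then sweep ε to trace out Pareto-optimal solutions. The candidate route plans and their values below are purely hypothetical and stand in for the "promising" routes of the reduced problem.

```python
# Hypothetical candidate route plans with (expected fuzzy travel time, feasibility level);
# values are illustrative, not taken from the paper.
candidates = {
    "P1": (10.2, 0.55), "P2": (11.0, 0.70), "P3": (12.4, 0.80),
    "P4": (12.9, 0.78), "P5": (14.1, 0.92),
}

def eps_constraint(cands, eps):
    """Minimize travel time subject to feasibility >= eps (epsilon-constraint scalarization)."""
    feasible = {k: v for k, v in cands.items() if v[1] >= eps}
    return min(feasible.items(), key=lambda kv: kv[1][0]) if feasible else None

pareto = {}
for eps in (0.5, 0.6, 0.7, 0.8, 0.9):
    sol = eps_constraint(candidates, eps)
    if sol:
        pareto[sol[0]] = sol[1]

print(pareto)   # the distinct solutions found across eps levels lie on the Pareto front
```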
Procedia PDF Downloads 488
466 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea
Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti
Abstract:
This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry has been attempted on underwater archaeological sites since at least the 1970s, and today the production of traditional 3D models is becoming a common practice within the discipline. Photogrammetry underwater is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments and when underwater visibility is > 1 m, reducing its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and the improvement of optical technology, ideal for darker environments. Such developments, in tandem with powerful processing computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th Century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods even in low light and poor underwater conditions as a way to better understand the complexity of the underwater archaeological record.
Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey
Procedia PDF Downloads 238
465 An Application of Quantile Regression to Large-Scale Disaster Research
Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede
Abstract:
Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who participated in the site cleanup and restoration (n=2960). The main outcome is the post-traumatic stress disorder (PTSD) symptom severity score assessed via clinician interviews (CAPS). For a detailed understanding of response to the disaster and its aftermath, we are adapting quantile regression methodology with particular focus on predictors of extreme distress and resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios specifying the unconditional quantiles based on: 1) clinically meaningful CAPS cutoff values and 2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that the effect of the WTC exposures, namely seeing bodies and feeling that life was in danger during rescue/recovery work, was associated with very high PTSD symptoms. A similar effect was apparent in individuals with prior psychiatric history. Differential effects were also present for age and education level of the individuals. Conclusion: We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models. We focused on assessing the distribution of risk factors for post-traumatic stress symptoms across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.
Keywords: disaster workers, post traumatic stress, PTSD, quantile regression
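A minimal sketch of conditional-quantile modelling of symptom severity follows, using the statsmodels quantile regression API. The variable names ('caps', 'saw_bodies', 'life_danger', 'age') and the synthetic data are illustrative assumptions, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "saw_bodies": rng.integers(0, 2, n),
    "life_danger": rng.integers(0, 2, n),
    "age": rng.normal(40, 10, n),
})
# Synthetic outcome: exposure effects are larger in the upper tail of the distribution
df["caps"] = (20 + 5 * df.saw_bodies + 6 * df.life_danger
              + rng.gamma(2, 8, n) * (1 + 0.5 * df.life_danger))

# Fit separate conditional quantiles; coefficients typically differ across quantiles,
# which is exactly the heterogeneity a population-averaged model would hide.
for q in (0.25, 0.50, 0.90):
    fit = smf.quantreg("caps ~ saw_bodies + life_danger + age", df).fit(q=q)
    print(q, fit.params[["saw_bodies", "life_danger"]].round(2).to_dict())
```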
Procedia PDF Downloads 285
464 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon
Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn
Abstract:
The dynamic Land Use and Land Cover (LULC) changes in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, exact data on how environmental factors influence the LULC situation at various scales are difficult to obtain because the natural environment is composed of non-homogeneous surface features, so the features in the satellite data also contain mixed pixels. The main objective of this study is the calculation of accuracy based on change detection of LULC changes by Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006 and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the Support Vector Machines (SVMs) process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated area decreased (average total 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (average total 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
Keywords: land use and land cover change, change detection, image processing, support vector machines
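A minimal sketch of an SVM land-cover classification followed by a change tabulation between two dates is shown below. The spectral features, class names, and pixel counts are synthetic assumptions standing in for the study's imagery and training data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

classes = ["vegetation", "cultivated", "built_up", "water"]
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                     # 6 spectral bands per training pixel
y = rng.integers(0, len(classes), 2000)            # labels (would come from ground truth)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)   # "soft" probabilistic allocation
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Change-detection statistics: cross-tabulate each pixel's class at the two dates
map_1996 = clf.predict(rng.normal(size=(5000, 6)))
map_2015 = clf.predict(rng.normal(size=(5000, 6)))
print(confusion_matrix(map_1996, map_2015))        # rows: 1996 class, cols: 2015 class
```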
Procedia PDF Downloads 140
463 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact
Authors: Abdullah Almowanes
Abstract:
The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and the deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival work aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia have to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.
Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia
Procedia PDF Downloads 247
462 Luminescent Dye-Doped Polymer Nanofibers Produced by Electrospinning Technique
Authors: Monica Enculescu, A. Evanghelidis, I. Enculescu
Abstract:
Among the numerous methods for obtaining polymer nanofibers, the electrospinning technique distinguishes itself through the growing interest generated by its proven utility, which has led to the development and improvement of the method and the appearance of novel materials. In particular, the production of polymeric nanofibers in which different dopants are introduced has been intensively studied in recent years because of the increased interest in obtaining functional electrospun nanofibers. Electrospinning is a facile method of obtaining polymer nanofibers with diameters from tens of nanometers to micrometrical sizes that are cheap, flexible, scalable, functional and biocompatible. Besides the multiple applications in medicine, polymeric nanofibers obtained by electrospinning permit manipulation of light at nanometric dimensions when doped with organic dyes or different nanoparticles. It is a simple technique that uses an electrical field to draw fine polymer nanofibers from solutions and does not require complicated devices or high temperatures. Different morphologies of the electrospun nanofibers can be obtained for the same polymeric host when different parameters of the electrospinning process are used. Consequently, we can obtain tuneable optical properties of the electrospun nanofibers (e.g. changing the wavelength of the emission peak) by varying the parameters of the fabrication method. We focus on obtaining doped polymer nanofibers with enhanced optical properties using the electrospinning technique. The aim of the paper is to produce dye-doped polymer nanofiber mats incorporating uniformly dispersed dyes. Transmission and fluorescence of the fibers will be evaluated by spectroscopy methods. The morphological properties of the electrospun dye-doped polymer fibers will be evaluated using scanning electron microscopy (SEM). We will tailor the luminescent properties of the material by doping the polymer (polyvinylpyrrolidone or polymethylmethacrylate) with different dyes (coumarins, rhodamines and sulforhodamines). The tailoring will take into consideration the possibility of changing the luminescent properties of electrospun polymeric nanofibers doped with different dyes by using different parameters for the electrospinning technique (electric voltage, distance between electrodes, flow rate of the solution, etc.). Furthermore, we can evaluate the influence of the concentration of the dyes on the emissive properties of dye-doped polymer nanofibers by using different concentrations. The advantages offered by the electrospinning technique when producing polymeric fibers are given by the simplicity of the method, the tunability of the morphology allowed by the possibility of controlling all the process parameters (temperature, viscosity of the polymeric solution, applied voltage, distance between electrodes, etc.), and by the absence of any need for harsh, supplementary chemicals such as the ones used in traditional nanofabrication techniques. Acknowledgments: The authors acknowledge the financial support received through IFA CEA Project No. C5-08/2016.
Keywords: electrospinning, luminescence, polymer nanofibers, scanning electron microscopy
Procedia PDF Downloads 214
461 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization
Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford
Abstract:
The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to undeveloped rural surrounding environments. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and economy. In order to reduce those effects, design factors need to be taken into consideration when planning future developments. Given design constraints such as population size and availability of area for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for optimization of urban design in order to jointly minimize UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable in real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings' energy consumption, respectively. Those outputs are changed based on a set of variable inputs related to urban morphology aspects, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast where the utility function quantifies the performance of each design candidate (e.g. minimizing a linear combination of UHI and energy consumption), and a set of constraints to be met is specified. Solving this optimization problem is difficult, since there is no simple analytic form which represents the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques, but instead develop an indirect "black box" optimization algorithm. To this end, we develop a solution that is based on a stochastic optimization method known as the Cross-Entropy Method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area and the land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate in the country raises the concern for the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of such a framework is to guide decision makers and assist them to include and evaluate urban microclimate and energy aspects in the process of urban planning.
Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator
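A minimal sketch of the Cross-Entropy loop described above follows. The three design variables, their starting distribution, and the toy objective standing in for the UWG + EnergyPlus evaluation are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def objective(x):
    height, canyon_w, density = x
    # placeholder for: w1 * UHI_intensity(x) + w2 * energy_consumption(x)
    return (height - 30) ** 2 / 100 + (canyon_w - 12) ** 2 / 10 + 5 * abs(density - 0.4)

rng = np.random.default_rng(42)
mu = np.array([50.0, 20.0, 0.6])       # initial mean of the sampling distribution
sigma = np.array([20.0, 8.0, 0.2])     # initial standard deviation
n_samples, n_elite = 100, 10

for _ in range(30):
    samples = rng.normal(mu, sigma, size=(n_samples, 3))    # sample candidate designs
    scores = np.array([objective(s) for s in samples])      # evaluate the black box
    elite = samples[np.argsort(scores)[:n_elite]]           # keep the best candidates
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit the distribution

print("best design ~", mu.round(2))    # converges near (30, 12, 0.4) for this toy objective
```

The design choice the CEM exploits is that only samples and their scores are needed, so the non-analytic simulators can be treated purely as black boxes.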
Procedia PDF Downloads 133
460 Optimizing the Field Emission Performance of SiNWs-Based Heterostructures: Controllable Synthesis, Core-Shell Structure, 3D ZnO/Si Nanotrees and Graphene/SiNWs
Authors: Shasha Lv, Zhengcao Li
Abstract:
Due to their CMOS compatibility, silicon-based field emission (FE) devices have attracted much attention as potential electron sources. The geometrical arrangement and dimensional features of aligned silicon nanowires (SiNWs) have a determining influence on the FE properties. We discuss a multistep template replication process of Ag-assisted chemical etching combined with polystyrene (PS) spheres to fabricate highly periodic and well-aligned silicon nanowires; their diameter, aspect ratio and density were then further controlled via dry oxidation and post chemical treatment. The FE properties related to proximity and aspect ratio were systematically studied. A remarkable improvement of the FE properties was observed as the average nanowire tip interspace increased from 80 to 820 nm. On the basis of adjusting SiNW dimensions and morphology, the addition of a secondary material whose properties complement the SiNWs could yield a combined characteristic. Three different nanoheterostructures were fabricated to control the FE performance: NiSi/Si core-shell structures, ZnO/Si nanotrees, and graphene/SiNWs. We successfully fabricated high-quality NiSi/Si heterostructured nanowires with excellent conformality. First, nickel nanoparticles were deposited onto the SiNWs, and then a rapid thermal annealing process was utilized to form the NiSi shell. In addition, we demonstrate a new and simple method for creating 3D nanotree-like ZnO/Si nanocomposites with a spatially branched hierarchical structure. Compared with the as-prepared SiNRs and ZnO NWs, the high-density ZnO NWs on SiNRs have exhibited predominant FE characteristics, and the FE enhancement factors were attributed to the band bending effect and the geometrical morphology. The FE efficiency of the flat sheet structure of graphene is low. We discuss an effective approach towards full control over the diameter of uniform SiNWs to adjust the protrusions of a large-scale graphene sheet deposited on SiNWs. The FE performance regarding the uniformity and dimensional control of graphene protrusions supported on SiNWs was systematically clarified. Therefore, the hybrid SiNWs/graphene structures with protrusions provide a promising class of field emission cathodes.
Keywords: field emission, silicon nanowires, heterostructures, controllable synthesis
Procedia PDF Downloads 273
459 Field-Free Orbital Hall Current-Induced Deterministic Switching in the Mo/Co₇₁Gd₂₉/Ru Structure
Authors: Zelalem Abebe Bekele, Kun Lei, Xiukai Lan, Xiangyu Liu, Hui Wen, Kaiyou Wang
Abstract:
Spin-polarized currents offer an efficient means of manipulating the magnetization of a ferromagnetic layer for big data and neuromorphic computing. Research has shown that the orbital Hall effect (OHE) can produce orbital currents, potentially surpassing the counterpart spin currents induced by the spin Hall effect. However, it is essential to note that orbital currents alone cannot exert torque directly on a ferromagnetic layer, necessitating a conversion process from orbital to spin currents. Here, we present an efficient method for achieving perpendicularly magnetized spin-orbit torque (SOT) switching by harnessing the localized orbital Hall current generated from a Mo layer within a Mo/CoGd device. Our investigation reveals a remarkable enhancement in the interface-induced planar Hall effect (PHE) within the Mo/CoGd bilayer, resulting in the generation of a z-polarized planar current for manipulating the magnetization of the CoGd layer without the need for an in-plane magnetic field. Furthermore, the Mo layer induces an out-of-plane orbital current, boosting the in-plane and out-of-plane spin polarization by converting the orbital current into spin current within the dual-property CoGd layer. At the optimal Mo layer thickness, a low critical magnetization switching current density of 2.51×10⁶ A cm⁻² is achieved. This breakthrough opens avenues for all-electrical, energy-efficient control of magnetization switching through orbital currents, advancing the field of spin-orbitronics.
Keywords: spin-orbit torque, orbital Hall effect, spin Hall current, orbital Hall current, interface-generated planar Hall current, anisotropic magnetoresistance
Procedia PDF Downloads 57
458 A Study of the Effect of Early and Late Meal Time on Anthropometric and Biochemical Parameters in Patients of Type 2 Diabetes
Authors: Smriti Rastogi, Narsingh Verma
Abstract:
Background: A vast body of research exists on the use of oral hypoglycaemic drugs, insulin injections and the like in managing diabetes, but no such research exists that has taken into consideration the parameter of time restricted meal intake and its positive effects in managing diabetes. The utility of this project is immense as it offers a solution to the woes of diabetics based on circadian rhythm and the normal physiology of the human body. Method: 80 diabetics, enrolled from the Outpatient Department of Endocrinology, KGMU (King George's Medical University), were divided, based on consent, into an early-dinner TRM (time restricted meal) group or a control group. Follow-up was done at six months and 12 months for anthropometric measurements (height, weight, waist-hip ratio, neck size), fasting and postprandial blood sugar, HbA1c, serum urea, serum creatinine, and lipid profile. Patients were given a clear understanding of chronomedicine and how it affects their health. A single intervention was made: the timing of dinner was set at or around 7 pm for the TRM group. Result: 65% of the TRM group and 40% of the control (non-TRM) group had normal HbA1c after 12 months. HbA1c in the TRM group (first visit to second follow-up) showed a significant change (p = 0.017). A p-value of <0.0001 was observed when comparing fasting blood sugar in the TRM group between the first visit and the second follow-up. Postprandial blood sugar in the TRM group (first visit to second follow-up) likewise showed a p-value of <0.0001 (highly significant). Values of the three parameters were non-significant in the control group. Hip size in the TRM group (first visit to second follow-up) showed a p-value of 0.0344 (significant; difference between means = 2.762 ± 1.261). Detailed results of the above parameters and a few newer ones will be presented at the conference. Conclusion: Time restricted meal intake in Type 2 diabetics has a significant effect in controlling and maintaining HbA1c, as the reduction in HbA1c was very significant in the TRM group versus the control group. Similar highly significant results were obtained for fasting and postprandial blood sugar in the TRM group compared to the control group. The effects of time restricted meal intake in diabetics show promise and are worth exploring further. This is one of the first such studies undertaken in Indian diabetics; although the initial data are encouraging, further research and study are required to corroborate the results.
Keywords: chronomedicine, diabetes, endocrinology, time restricted meal intake
Procedia PDF Downloads 126
457 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture
Authors: Sajjad Akbar, Rabia Bashir
Abstract:
With the growth in the number of users, Internet usage has evolved. Due to its key design principle, there has been an incredible expansion in its size. This tremendous growth of the Internet has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., a content distribution environment, mobility, ubiquity, security, and trust. Users are more interested in content than in the communicating peer nodes. The current Internet architecture is a host-centric networking approach, which is not suitable for these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered to be less efficient as it depends on physical location; for this reason, Information Centric Networking (ICN) is considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented rather than a sender-oriented approach and introduces a naming-based information system at the network layer. Although ICN is considered a future Internet architecture, there is considerable criticism of it, mainly concerning how ICN will manage the most relevant content. For this, Web Content Mining (WCM) approaches can help with the appropriate data management of ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas the agent-based approach from Web Content Mining is selected to find the most appropriate data.
Keywords: agent based web content mining, content centric networking, information centric networking
Procedia PDF Downloads 475
456 Embedded System of Signal Processing on FPGA: Underwater Application Architecture
Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad
Abstract:
The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (Fast Fourier Transform (FFT), inverse Fast Fourier Transform (iFFT) and Bessel functions) is widely applied to obtain information with high precision and accuracy. Signal processing has a wider implementation in general-purpose processors. Our interest was focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to minimize the computational complexity of a single-processor architecture, accelerate the processing on FPGA, and meet real-time and energy-efficiency requirements. General-purpose processors are not efficient for signal processing. We implemented the acoustic backscattered signal processing model on the Altera DE-SoC board and compared it to the Odroid XU4. By comparison, the computing latency of the Odroid XU4 and the FPGA is 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system has shown that acoustic spectra are computed up to 20 times faster than with the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms is realized with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing. It is possible to obtain information related to the detection and characterization of submerged cells. Good experimental results have thus been achieved in real-time performance and energy efficiency.
Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing
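A minimal sketch of the FFT-based spectral step described above is shown below. The synthetic echo, the sampling frequency, and the normalization by a reference spectrum to approximate a form function are assumptions about the usual procedure, not code from the paper.

```python
import numpy as np

fs = 10e6                                   # sampling frequency, 10 MHz (illustrative)
t = np.arange(0, 200e-6, 1 / fs)
# Synthetic backscattered echo and a reference pulse (Gaussian-windowed tone bursts)
echo = np.exp(-((t - 50e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 1e6 * t)
reference = np.exp(-((t - 50e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 1e6 * t + 0.3)

spectrum = np.fft.rfft(echo)
ref_spectrum = np.fft.rfft(reference)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Normalized backscatter spectrum, a stand-in for the form function computation
form_function = np.abs(spectrum) / (np.abs(ref_spectrum) + 1e-12)
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"dominant echo frequency ~ {peak / 1e6:.2f} MHz")
```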
Procedia PDF Downloads 79
455 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond
Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu
Abstract:
Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) color centres in diamond make them outstanding sensors of magnetic fields, electric fields and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed to achieve the smallest volume and highest sensitivity of NV ensemble-based magnetometry using micro-cavities, light-trapping diamond waveguides (LTDW), and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical System) techniques. Recently, a method combining frequency-modulated microwaves with continuous optical excitation has been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure static magnetic fields using the frequency-modulated microwave method under continuous illumination with green pump light at 532 nm and a bulk diamond sample with a high density of NV centers (1 ppm). The output of the confocal microscopy was collected by an objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed uniform and efficient excitation with a microstrip antenna, which couples well with the spin ensembles at 2.87 GHz, the zero-field splitting of the NV centers. The photodetector output was sent to an LIA (lock-in amplifier), with the modulated reference signal generated by the microwave source via an IQ mixer. The detected signal is received by the photodetector, and the reference signal enters the lock-in amplifier to realize the open-loop detection of the NV atomic magnetometer. We can plot ODMR spectra under continuous-wave (CW) microwave excitation. Due to the high sensitivity of the lock-in amplifier, the minimum detectable value of the voltage can be measured, and the minimum detectable frequency shift can be derived from this minimum and the slope of the voltage signal. The magnetic field sensitivity can be derived from η = δB√T, which corresponds to a 10 nT minimum detectable shift in the magnetic field. Further, frequency analysis of the noise in the system indicates that at 10 Hz the sensitivity is less than 10 nT/√Hz.
Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density
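The conversion behind the quoted figures can be written compactly; the notation below is a commonly used convention assumed here, not taken from the paper.

```latex
\delta B_{\min} \;\approx\; \frac{\delta\nu_{\min}}{\gamma_{\mathrm{NV}}},
\qquad \gamma_{\mathrm{NV}} \approx 28\ \mathrm{GHz\,T^{-1}},
\qquad \eta \;=\; \delta B_{\min}\sqrt{T_{\mathrm{meas}}},
```

where δν_min is the smallest resolvable shift of the ODMR resonance (set by the lock-in voltage noise divided by the slope of the demodulated ODMR signal), γ_NV is the NV gyromagnetic ratio, and T_meas is the measurement time over which the minimum detectable field δB_min is obtained.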
Procedia PDF Downloads 440
454 Synthesis and Characterization of AFe₂O₄ (A=Ca, Co, Cu) Nano-Spinels: Application to Hydrogen Photochemical Production under Visible Light Irradiation
Authors: H. Medjadji, A. Boulahouache, N. Salhi, A. Boudjemaa, M. Trari
Abstract:
Hydrogen from renewable sources, such as solar, is referred to as green hydrogen. The water splitting process using semiconductors as photocatalysts has attracted significant attention due to its potential application for solving the energy crisis and environmental pollution. Spinel ferrites of the MFe₂O₄ type have attracted broad interest for diverse energy conversion processes, including fuel cells and photoelectrocatalytic water splitting. This work focuses on preparing iron-based nano-spinels AFe₂O₄ (A = Ca, Co, and Cu) as photocatalysts using the nitrate method. These materials were characterized both physically and optically and subsequently tested for hydrogen generation under visible light irradiation. Various techniques were used to investigate the properties of the materials, including TGA-DT, X-ray diffraction (XRD), Fourier Transform Infrared Spectroscopy (FTIR), UV-visible spectroscopy, Scanning Electron Microscopy with Energy Dispersive X-ray Spectroscopy (SEM-EDX), and X-ray Photoelectron Spectroscopy (XPS). XRD analysis confirmed the formation of pure phases at 850°C, with crystallite sizes of 31 nm for CaFe₂O₄, 27 nm for CoFe₂O₄, and 40 nm for CuFe₂O₄. The energy gaps, calculated from the recorded diffuse reflectance data, are 1.85 eV for CaFe₂O₄, 1.27 eV for CoFe₂O₄, and 1.64 eV for CuFe₂O₄. SEM micrographs showed homogeneous grains with uniform shapes and medium porosity in all samples. EDX elemental analysis determined the absence of any contaminating elements, highlighting the high purity of the materials prepared via the nitrate route. XPS spectra revealed the presence of Fe³⁺ and O in all samples. Additionally, XPS analysis revealed the presence of Ca²⁺, Co²⁺, and Cu²⁺ on the surfaces of the CaFe₂O₄, CoFe₂O₄, and CuFe₂O₄ spinels, respectively. The photocatalytic activity was successfully evaluated by measuring H₂ evolution through the water-splitting process. The best performance was achieved with CaFe₂O₄ in a neutral medium (pH ~ 7), yielding 189 µmol at an optimal temperature of ~50°C. The highest hydrogen production for CoFe₂O₄ and CuFe₂O₄ was obtained at pH ~ 12, with releases of 65 and 85 µmol, respectively, under visible light irradiation at the same optimal temperature. Various conditions were investigated, including the pH of the solution, the use of hole scavengers, and recyclability.
Keywords: hydrogen, MFe₂O₄, nitrate route, spinel ferrite
Procedia PDF Downloads 40
453 Bioethanol Production from Wild Sorghum (Sorghum arundinaceum) and Spear Grass (Heteropogon contortus)
Authors: Adeyinka Adesanya, Isaac Bamgboye
Abstract:
There is a growing need to develop processes to produce renewable fuels and chemicals due to the economic, political, and environmental concerns associated with fossil fuels. Lignocellulosic biomass is an excellent renewable feedstock because it is both abundant and inexpensive. This project aims at producing bioethanol from lignocellulosic plants (Sorghum arundinaceum and Heteropogon contortus) by biochemical means, computing the energy audit of the process and determining the fuel properties of the produced ethanol. Acid pretreatment (0.5% H₂SO₄ solution) and enzymatic hydrolysis (using malted barley as the enzyme source) were employed. The ethanol yield of wild sorghum was found to be 20%, while that of spear grass was 15%. The fuel properties of the bioethanol from wild sorghum are 1.227 centipoise for viscosity, 1.10 g/cm³ for density, 0.90 for specific gravity, 78 °C for boiling point, and the cloud point was found to be below -30 °C. That of spear grass was 1.206 centipoise for viscosity, 0.93 g/cm³ for density, 1.08 for specific gravity, 78 °C for boiling point, and the cloud point was also found to be below -30 °C. The energy audit shows that about 64% of the total energy was used up during pretreatment, while product recovery, which was done manually, demanded about 31% of the total energy. Enzymatic hydrolysis, fermentation, and distillation accounted for 1.95%, 1.49% and 1.04% of the total energy input, respectively. The alcoholometric strength of the bioethanol from wild sorghum was found to be 47%, and the alcoholometric strength of the bioethanol from spear grass was 72%. Also, the energy efficiency of the bioethanol production for both grasses was 3.85%.
Keywords: lignocellulosic biomass, wild sorghum, spear grass, biochemical conversion
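The energy-audit shares quoted above are simple ratios of per-stage energy input to total input, and the energy efficiency is the ratio of the energy recovered in the ethanol to the total energy input. The sketch below reproduces that bookkeeping with hypothetical absolute values chosen so the percentages roughly match the reported ones.

```python
# Hypothetical per-stage energy inputs (MJ); only the resulting percentage shares
# correspond to the abstract, the absolute values are assumptions.
stage_energy_mj = {
    "pretreatment": 64.0,
    "product_recovery": 31.5,
    "enzymatic_hydrolysis": 1.95,
    "fermentation": 1.49,
    "distillation": 1.04,
}
ethanol_energy_out_mj = 3.85          # assumed energy content of the recovered ethanol

total_in = sum(stage_energy_mj.values())
for stage, e in stage_energy_mj.items():
    print(f"{stage:>20}: {100 * e / total_in:5.2f} % of total energy input")

print(f"energy efficiency = {100 * ethanol_energy_out_mj / total_in:.2f} %")
```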
Procedia PDF Downloads 236
452 Controlling Shape and Position of Silicon Micro-nanorolls Fabricated using Fine Bubbles during Anodization
Authors: Yodai Ashikubo, Toshiaki Suzuki, Satoshi Kouya, Mitsuya Motohashi
Abstract:
Functional microstructures such as wires, fins, needles, and rolls are currently being applied to a variety of high-performance devices. Under these conditions, a roll structure (silicon micro-nanoroll) was formed on the surface of a silicon substrate via fine bubbles during anodization using an extremely dilute hydrofluoric acid solution (HF + H₂O). The as-formed roll had a microscale length and width of approximately 1 µm. The roll was wound 3-10 times, and the thickness of the film forming the rolls was about 10 nm. Thus, it is promising for applications as a distinct device material. These rolls functioned as capsules and/or pipelines. To date, the number of rolls and the roll length have been controlled by the anodization conditions. In general, controlling the position and the roll winding state is required for device applications. However, this has not been discussed. Grooves formed on the silicon surface before anodization might be useful for controlling the bubbles. In this study, we investigated the effect of the grooves on the position and shape of the roll. The surfaces of the silicon wafers were anodized. The starting material was p-type (100) single-crystalline silicon wafers. The resistivity of the wafers is 5-20 Ω·cm. Grooves were formed on the surface of the substrate before anodization using sandpaper and a diamond pen. The average width and depth of the grooves were approximately 1 µm and 0.1 µm, respectively. The HF concentration {HF/(HF + C₂H₅OH + H₂O)} was 0.001% by volume. The C₂H₅OH concentration {C₂H₅OH/(HF + C₂H₅OH + H₂O)} was 70%. A vertical single-tank cell and a Pt cathode were used for anodization. The silicon rolls were observed by field-emission scanning electron microscopy (FE-SEM; JSM-7100, JEOL). The atomic bonding state of the rolls was evaluated using X-ray photoelectron spectroscopy (XPS; ESCA-3400, Shimadzu). For a straight groove, the rolls were formed along the groove. This indicates that the orientation of the rolls can be controlled by the grooves. For a lattice-like groove, the rolls formed inside the lattice and along its long sides. In other words, the aspect ratio of the lattice is very important for roll formation. In addition, many rolls were formed and the winding states were not uniform when the lattice size was too large. On the other hand, no rolls were formed for a small lattice. These results indicate that there is an optimal lattice size for roll formation. In the future, we plan to form rolls using grooves made by a lithography technique instead of sandpaper and the diamond pen. Furthermore, rolls incorporating nanoparticles will be formed for nanodevices.
Keywords: silicon roll, anodization, fine bubble, microstructure
Procedia PDF Downloads 25
451 Digital Manufacturing: Evolution and a Process Oriented Approach to Align with Business Strategy
Authors: Abhimanyu Pati, Prabir K. Bandyopadhyay
Abstract:
The paper intends to highlight the significance of a Digital Manufacturing (DM) strategy in supporting and achieving the business strategy and goals of any manufacturing organization. Towards this end, DM initiatives have been given a process perspective, without undermining their technological significance, with a view to linking their benefits directly with the fulfilment of customer needs and expectations in a responsive and cost-effective manner. A digital process model has been proposed to categorize digitally enabled organizational processes into synergistic groups, which adopt and use digital tools having similar characteristics and functionalities. This will open up future opportunities for researchers and developers to create a unified technology environment for the integration and orchestration of processes. Secondly, an effort has been made to apply the "what" and "how" features of the Quality Function Deployment (QFD) framework to establish the relationship between customers' needs (for both external and internal customers) and the features of various digital processes which support the achievement of these customer expectations. The paper finally concludes that, in the present highly competitive environment, business organizations cannot thrive or sustain themselves unless they understand the significance of digital strategy and integrate it with their business strategy through a clearly defined implementation roadmap. A process-oriented approach to DM strategy will help business executives and leaders to appreciate its value propositions and its direct link to the organization's competitiveness. Keywords: knowledge management, cloud computing, knowledge management approaches, cloud-based knowledge management
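A minimal sketch of the QFD "what vs. how" mapping referred to above is given below; the customer needs, digital processes, importance weights, and 9/3/1 relationship strengths are hypothetical illustrations, since the paper does not publish its matrix.

```python
# Illustrative sketch of the QFD "what vs. how" mapping described above.
# Customer needs, digital processes, importance weights, and relationship
# strengths (9 = strong, 3 = moderate, 1 = weak, 0 = none) are hypothetical.

needs = {          # "what": customer needs with importance weights (1-5)
    "short lead time": 5,
    "consistent quality": 4,
    "order visibility": 3,
}
processes = ["digital scheduling", "in-line inspection", "order tracking portal"]  # "how"

relationship = {   # rows = needs, columns = processes
    "short lead time":    [9, 1, 3],
    "consistent quality": [3, 9, 0],
    "order visibility":   [1, 0, 9],
}

# Technical importance of each digital process = sum(need weight * relationship).
scores = [
    sum(needs[n] * relationship[n][j] for n in needs)
    for j in range(len(processes))
]
total = sum(scores)
for proc, score in zip(processes, scores):
    print(f"{proc:>25}: absolute {score:3d}, relative {100 * score / total:5.1f} %")
```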
Procedia PDF Downloads 310
450 Computerized Analysis of Phonological Structure of 10,400 Brazilian Sign Language Signs
Authors: Wanessa G. Oliveira, Fernando C. Capovilla
Abstract:
Capovilla and Raphael's Libras Dictionary documents a corpus of 4,200 Brazilian Sign Language (Libras) signs. Duduchi and Capovilla's software SignTracking permits users to retrieve signs even when they do not know the corresponding gloss, and to discover the meaning of all 4,200 signs simply by clicking on graphic menus of the sign characteristics (phonemes). Duduchi and Capovilla have discovered that the ease with which any given sign can be retrieved is an inverse function of the average popularity of its component phonemes. Thus, signs composed of rare (distinct) phonemes are easier to retrieve than those composed of common phonemes. SignTracking offers a means of computing the average popularity of the phonemes that make up each one of the 4,200 signs. It provides a precise measure of the degree of ease with which signs can be retrieved and sign meanings can be discovered. Duduchi and Capovilla's logarithmic model proved valid: the degree to which any given sign can be retrieved is an inverse function of the arithmetic mean of the logarithm of the popularity of each component phoneme. Capovilla, Raphael and Mauricio's New Libras Dictionary documents a corpus of 10,400 Libras signs. The present analysis revealed the Libras 'DNA' structure by mapping the incidence of 501 sign phonemes resulting from the layered distribution of five parameters: 163 handshape phonemes (CherEmes-ManusIculi); 34 finger shape phonemes (DactilEmes-DigitumIculi); 55 hand placement phonemes (ArtrotoToposEmes-ArticulatiLocusIculi); 173 movement dimension phonemes (CinesEmes-MotusIculi) pertaining to direction, frequency, and type; and 76 facial expression phonemes (MascarEmes-PersonalIculi). Keywords: Brazilian sign language, lexical retrieval, libras sign, sign phonology
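A minimal sketch of the retrieval-ease measure is given below, assuming one simple choice of "inverse function" (the reciprocal of the mean log popularity); the phoneme popularity counts are hypothetical, not dictionary data.

```python
# Minimal sketch of the retrieval-ease measure described above, assuming one
# simple "inverse function": ease = 1 / mean(log10(popularity of each phoneme)).
# The phoneme popularity counts below are hypothetical, not dictionary data.

import math

def retrieval_ease(phoneme_popularity):
    """Higher values = easier to retrieve (rarer component phonemes)."""
    mean_log = sum(math.log10(p) for p in phoneme_popularity) / len(phoneme_popularity)
    return 1.0 / mean_log

sign_rare_phonemes = [12, 30, 8]        # hypothetical popularity counts
sign_common_phonemes = [950, 1200, 700]

print(f"rare-phoneme sign:   ease = {retrieval_ease(sign_rare_phonemes):.3f}")
print(f"common-phoneme sign: ease = {retrieval_ease(sign_common_phonemes):.3f}")
```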
Procedia PDF Downloads 346
449 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India
Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia
Abstract:
Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobile. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a population of 5.9 million and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who do not own personal vehicles to reach them on foot or by cycle. Yet residents travelling by different modes end up with similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and in urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air and noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems and the built form in both cities. The paper concludes with recommendations for managing densities in urban areas, promoting low-carbon transport choices like improved non-motorized transport and public transport infrastructure, and minimizing personal vehicle usage in the Global South. Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification
Procedia PDF Downloads 219
448 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded based on a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. The second set of responses was studied on the basis of its deviation both from the responses identifying the emoji in isolation and from the originally intended meaning ascribed to the emoji. Based on an analysis of these results, it was discovered that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to the degree of exposure they have undergone. For example, in the case of the youngest category (aged < 20), it was observed that they were the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer suggest their most literal meanings to them. The meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher incidence of a proclivity to change their responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning; in fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication runs parallel to, and is contingent on, the evolution of the platforms that carry it. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it has no spoken component or ostensible grammar and lacks standardization of use and meaning may, as some might suggest, seem like an impediment to qualifying it as the 'language' of the digital world. However, that kind of declaration remains a function of time, and time alone. Keywords: communication, emoji, language, Twitter
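The two scoring steps described above (accuracy on isolated emoji, and response change once context is added) could be tabulated as in the rough sketch below; the toy responses and age groups are invented for illustration only.

```python
# Rough sketch of the two scoring steps described above, with made-up responses:
# (1) accuracy when each emoji is shown in isolation, and (2) how often the same
# respondent changes their answer once the emoji appears inside a tweet.

from collections import defaultdict

# Each respondent: age group, list of (isolated_answer, in_context_answer, intended_meaning).
responses = [
    ("<20", [("joy", "irony", "joy"), ("fire", "praise", "fire")]),
    ("35+", [("joy", "joy", "joy"), ("unknown", "unknown", "fire")]),
]  # hypothetical toy data for two respondents

stats = defaultdict(lambda: {"items": 0, "correct_isolated": 0, "changed": 0})
for age_group, items in responses:
    for isolated, in_context, intended in items:
        s = stats[age_group]
        s["items"] += 1
        s["correct_isolated"] += (isolated == intended)
        s["changed"] += (isolated != in_context)

for group, s in stats.items():
    n = s["items"]
    print(f"{group}: isolated accuracy {100*s['correct_isolated']/n:.0f} %, "
          f"context-driven change {100*s['changed']/n:.0f} %")
```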
Procedia PDF Downloads 95
447 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast
Authors: Fernando M. Soto, Gaetano Di Mino
Abstract:
The design of an unmodified bituminous mixture and of three rubber-aggregate mixtures produced by a dry process (RUMAC) was evaluated using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and a bitumen content of 4% by total weight of the mix) and three rubberized dry-process mixtures (1.5 to 3% of rubber by total weight and 5-7% of binder) were used, applying the Superpave mix design for level 3 (high-traffic) design rail lines. The railway trackbed section analyzed comprised a 19 cm compacted granular layer, while a thickness of 12 cm was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percent of its theoretical maximum specific gravity), this article illustrates the results obtained from several comparative analyses of the influence of varying the binder and rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. By using the same methodology of volumetric compaction, the densification curves resulting from each mixture were studied. The purpose is to obtain an optimum empirical parameter, a multiplier of the number of gyrations, necessary to reach the same compaction energy as in conventional mixtures. Some experimental parameters were provided by adopting an empirical-analytical method and evaluating the results obtained from the gyratory compaction of bituminous mixtures with an HMA and rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbeds. Design optimization was conducted for each mixture, and the volumetric properties were analyzed. An improved and complete manufacturing, compaction, and curing process for these blends is also provided. By adopting this compaction-increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in the conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight) or 3.95% (by volumetric substitution) with a binder content of 6%. Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design
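One plausible way to formalize the 'beta' gyration multiplier described above is sketched below: fit a densification curve %Gmm(N) = c1 + c2·log10(N) to each mix and compare the gyrations needed to reach the same target density. The gyration/density pairs are made-up placeholders, not the study's measurements, and the log-linear curve form is an assumption.

```python
# Sketch of one plausible way to estimate the 'beta' gyration multiplier:
# fit %Gmm(N) = c1 + c2*log10(N) to each mix and compare the gyrations needed
# to reach the same target density. Data points below are hypothetical.

import numpy as np

def fit_densification(gyrations, pct_gmm):
    """Least-squares fit of %Gmm = c1 + c2*log10(N)."""
    c2, c1 = np.polyfit(np.log10(gyrations), pct_gmm, 1)
    return c1, c2

def gyrations_to_reach(target_pct_gmm, c1, c2):
    return 10 ** ((target_pct_gmm - c1) / c2)

N = np.array([10, 25, 50, 100, 160, 205])
conventional = np.array([88.0, 90.5, 92.3, 94.1, 95.3, 96.0])  # hypothetical
rubberized = np.array([86.5, 88.8, 90.5, 92.2, 93.3, 94.0])    # hypothetical

c1_c, c2_c = fit_densification(N, conventional)
c1_r, c2_r = fit_densification(N, rubberized)

target = 96.0  # %Gmm reached by the reference mix at its design gyrations
beta = gyrations_to_reach(target, c1_r, c2_r) / gyrations_to_reach(target, c1_c, c2_c)
print(f"beta (extra gyrations needed by the rubberized mix): {beta:.2f}")
```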
Procedia PDF Downloads 369446 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life, and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic, and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricous properties, low toxicity, high flash points, low volatility, high viscosity indices, and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of the process variables on average surface roughness (Ra), tool wear, and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented comparable performance with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. On the other hand, all tool tips were subjected to uniform flank wear during the whole of the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when a higher cutting speed was used. Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
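A 24-run full factorial plan consistent with the description above can be enumerated as in the sketch below; the particular factor levels (4 fluids x 2 tools x 3 cutting speeds) are hypothetical, since the abstract does not list them.

```python
# Sketch of a 24-run full factorial plan consistent with the description above.
# The factor levels (4 fluids x 2 tools x 3 cutting speeds = 24 runs) are
# hypothetical placeholders; the abstract does not list the actual levels.

from itertools import product

fluids = ["Vasco1000", "VO-blend A", "VO-blend B", "mineral-oil reference"]
tools = ["uncoated coarse-grain WC", "coated fine-grain WC"]
speeds_m_min = [60, 90, 120]  # cutting speeds in m/min (hypothetical)

runs = [
    {"run": i + 1, "fluid": f, "tool": t, "speed_m_min": v}
    for i, (f, t, v) in enumerate(product(fluids, tools, speeds_m_min))
]

print(f"total runs: {len(runs)}")  # -> 24
for r in runs[:3]:                 # show the first few runs
    print(r)
# Each run would then be executed and Ra, flank wear, and chip thickness recorded.
```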
Procedia PDF Downloads 279445 Fabrication of Al/Al2O3 Functionally Graded Composites via Centrifugal Method by Using a Polymeric Suspension
Authors: Majid Eslami
Abstract:
Functionally graded materials (FGMs) exhibit heterogeneous microstructures in which the composition and properties change gradually in specified directions. The common type of FGM consists of a metal in which ceramic particles are distributed with a graded concentration. There are many processing routes for FGMs, an important group of which is casting techniques (gravity or centrifugal). However, the main problem of casting a molten metal slurry with dispersed ceramic particles is a destructive chemical reaction between these two phases, which deteriorates the properties of the materials. In order to overcome this problem, in the present investigation a suspension of 6061 aluminum and alumina powders in a liquid polymer was used as the starting material and subjected to centrifugal force for making FGMs. The size ranges of these powders were 45-63 and 106-125 μm. The volume percent of alumina in the Al/Al2O3 powder mixture was in the range of 5 to 20%. PMMA (Plexiglas) in different concentrations (20-50 g/L) was dissolved in toluene and used as the suspension liquid. The glass mold containing the suspension of Al/Al2O3 powders in the mentioned liquid was rotated at 1700 rpm for different times (4-40 min), while the arm length was kept constant (10 cm) for all the experiments. After curing the polymer, burning out the binder, cold pressing, and sintering, cylindrical samples (φ = 22 mm, h = 20 mm) were produced. The density of the samples before and after sintering was quantified by the Archimedes method. The results indicated that, by using alumina and aluminum powders of the same size, an FGM sample can be produced at rotation times exceeding 7 min. However, by using coarse alumina and fine aluminum powders, the sample exhibits a stepped concentration profile. On the other hand, using fine alumina and coarse aluminum results in a relatively uniform concentration of Al2O3 along the sample height. These results are attributed to the effects of the size and density of the different powders on the centrifugal force induced on the powders during rotation. The PMMA concentration and the vol.% of alumina in the suspension did not have any considerable effect on the distribution of alumina particles in the samples. The hardness profiles along the height of the samples were affected by both the alumina vol.% and the porosity content. The presence of alumina particles increased the hardness, while increased porosity reduced it. Therefore, the hardness values did not show the expected gradient in the same sample. Sintering resulted in decreased porosity for all the samples investigated. Keywords: FGM, powder metallurgy, centrifugal method, polymeric suspension
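A back-of-the-envelope sketch of why particle size and density govern segregation in the rotating suspension follows, using the textbook Stokes settling velocity under a centrifugal field, v = (ρp - ρf)·d²·ω²·r/(18·μ); this relation and the fluid properties used are assumptions for illustration, not values given in the abstract.

```python
# Back-of-the-envelope sketch of why particle size and density control segregation
# in the rotating suspension: Stokes settling velocity under a centrifugal field,
# v = (rho_p - rho_f) * d^2 * omega^2 * r / (18 * mu). This relation and the fluid
# properties below are textbook assumptions, not values given in the abstract.

import math

RPM = 1700.0
OMEGA = 2.0 * math.pi * RPM / 60.0  # angular velocity, rad/s
ARM_R = 0.10                        # arm length, m (10 cm, from the abstract)
MU = 5e-2                           # suspension viscosity, Pa*s (hypothetical)
RHO_FLUID = 870.0                   # PMMA/toluene solution density, kg/m3 (hypothetical)

def radial_velocity(diameter_m, rho_particle):
    """Stokes-regime radial drift velocity of a particle at radius ARM_R."""
    return (rho_particle - RHO_FLUID) * diameter_m**2 * OMEGA**2 * ARM_R / (18.0 * MU)

particles = {
    "fine Al2O3 (54 um)": (54e-6, 3950.0),
    "coarse Al2O3 (115 um)": (115e-6, 3950.0),
    "fine Al (54 um)": (54e-6, 2700.0),
    "coarse Al (115 um)": (115e-6, 2700.0),
}

# Order-of-magnitude comparison only: coarser and denser particles drift faster.
for name, (d, rho) in particles.items():
    print(f"{name}: {1000 * radial_velocity(d, rho):.1f} mm/s")
```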
Procedia PDF Downloads 211