Search results for: complex pain
547 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply requirement. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by the combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies. Therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited: they cannot properly model microwave hybrid circuits in which there are both lumped and distributed elements. This work therefore proposes the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application in rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the Finite-Difference Time-Domain (FDTD) numerical method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the equations of current density and electric field within the FDTD method, together with its circuital relation to the voltage drop across the modeled component, for the lumped-parameter case, using the Lumped-Element Finite-Difference Time-Domain (LE-FDTD) formulations previously proposed for passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
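As an aside for readers unfamiliar with how a lumped component enters the Ampere-Maxwell update, the short Python sketch below illustrates the idea on a 1-D grid terminated by a lumped resistor. It is only a minimal illustration and not the authors' implementation: the grid size, resistance, cell cross-section, and source are assumed values, and the semi-implicit resistor update follows the standard LE-FDTD form.

    # Minimal 1-D LE-FDTD sketch (illustrative only, not the authors' code): a Gaussian
    # hard source drives a line terminated by a lumped resistor handled through the
    # Ampere-Maxwell update. Grid size, resistance, cell cross-section, and source are assumed.
    import numpy as np

    c0, eps0, mu0 = 3.0e8, 8.854e-12, 4.0e-7 * np.pi
    nz, nsteps = 200, 600
    dz = 1.0e-3                          # cell size (assumed)
    dt = 0.5 * dz / c0                   # Courant-stable time step
    R, area = 50.0, dz * dz              # lumped resistance and cell cross-section (assumed)
    k_load = nz - 2                      # cell containing the lumped element

    ez = np.zeros(nz)
    hy = np.zeros(nz - 1)

    for n in range(nsteps):
        hy += dt / (mu0 * dz) * np.diff(ez)              # Faraday update
        curl_h = np.diff(hy)                             # interior curl of H
        ez_load_old = ez[k_load]
        ez[1:-1] += dt / (eps0 * dz) * curl_h            # Ampere-Maxwell update (free space)
        # Semi-implicit LE-FDTD correction at the resistor cell: the element current
        # I = V/R = ez*dz/R enters Ampere's law as an extra current-density term.
        beta = dt * dz / (2.0 * R * eps0 * area)
        ez[k_load] = ((1.0 - beta) * ez_load_old
                      + dt / (eps0 * dz) * curl_h[k_load - 1]) / (1.0 + beta)
        ez[10] += np.exp(-((n - 60) / 20.0) ** 2)        # Gaussian hard source (assumed)

    print("field at the load cell after the run:", ez[k_load])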
Procedia PDF Downloads 131
546 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes
Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld
Abstract:
Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a 1H NMR-linked Principal Component Regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow-frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA) contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min. (n = 6 replicates per sample). Aldehydes were determined by 1H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes predominantly arising from PUFA peroxidation) strongly and positively loaded on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes derived from both MUFA and PUFA hydroperoxides) strongly and positively loaded on PC2*. PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all powerfully contributed to aldehydic PC1* SVs (p < 10⁻³ to < 10⁻⁹), as did all PC1-3 x PC4 interaction ones (p < 10⁻⁵ to < 10⁻⁹). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3), and the interactions of PC1 and PC2 with PC4 (p < 10⁻⁹ in each case), but not the PC3 x PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
Keywords: frying oils, lipid oxidation products, frying episodes, chemometrics, principal component regression, NMR analysis, cytotoxic/genotoxic aldehydes
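For orientation, the general principal component regression workflow described above can be sketched in a few lines of Python. The data below are synthetic stand-ins with assumed matrix shapes, not the study's NMR or fatty-acid matrices, and the number of retained components is arbitrary.

    # Illustrative principal component regression (PCR) sketch: PCA on the predictor
    # matrix followed by least-squares regression of the response scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8))        # predictors: fatty acid contents, heating times (assumed)
    y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=60)  # aldehyde scores vector (synthetic)

    pcr = make_pipeline(PCA(n_components=4), LinearRegression())
    pcr.fit(X, y)
    print("R^2 of the PCR fit:", round(pcr.score(X, y), 3))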
Procedia PDF Downloads 171
545 Genome-Wide Homozygosity Analysis of the Longevous Phenotype in the Amish Population
Authors: Sandra Smieszek, Jonathan Haines
Abstract:
Introduction: Numerous research efforts have focused on searching for ‘longevity genes’. However, attempts to decipher the genetic component of the longevous phenotype have resulted in limited success, and the mechanisms governing longevity remain to be explained. We conducted a genome-wide homozygosity analysis (GWHA) of the founder population of the Amish community in central Ohio. While genome-wide association studies using unrelated individuals have revealed many interesting longevity-associated variants, these variants are typically of small effect and cannot explain the observed patterns of heritability for this complex trait. The Amish provide a large cohort of extended kinships, making them an excellent population for in-depth analysis via a family-based approach. Heritability of longevity increases with age, with a significant genetic contribution being seen in individuals living beyond 60 years of age. In our present analysis, we show that the heritability of longevity is estimated to increase with age, particularly on the paternal side. Methods: The present analysis integrated both phenotypic and genotypic data and led to the discovery of a series of variants, distinct for populations stratified across ages and distinct for paternal and maternal cohorts. Specifically, 5,437 subjects were analyzed, and a subset of 893 successfully genotyped individuals was used to assess chip heritability. We conducted the homozygosity analysis to examine whether homozygosity is associated with an increased chance of living beyond 90. We analyzed the Amish cohort genotyped for 614,957 SNPs. Results: We delineated 10 significant regions of homozygosity (ROH) specific to the age group of interest (>90). Of particular interest was an ROH on chromosome 13 (P < 0.0001). The lead SNPs rs7318486 and rs9645914 point to COL4A2 and COL25A1. COL4A2 encodes one of the six subunits of type IV collagen; the C-terminal portion of the protein, known as canstatin, is an inhibitor of angiogenesis and tumor growth. COL4A2 mutations have been reported with a broader spectrum of cerebrovascular, renal, ophthalmological, cardiac, and muscular abnormalities. The second region of interest points to IRS2. Furthermore, we built a classifier using the SNPs obtained from the significant ROH regions, achieving an AUC of 0.945 and the ability to discriminate individuals living beyond 90 years of age. Conclusion: In conclusion, our results suggest that a family history of longevity does indeed contribute to increasing the odds of individual longevity. Preliminary results are consistent with the conjecture that the heritability of longevity is substantial when we look at the oldest fifth and smaller percentiles of survival, specifically in males. We will validate all the candidate variants in independent cohorts of centenarians to test whether they are robustly associated with human longevity. The identified regions of interest via ROH analysis could be of profound importance for the understanding of the genetic underpinnings of longevity.
Keywords: regions of homozygosity, longevity, SNP, Amish
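As a purely illustrative sketch of the final classification step, and not the study's actual model, features, or data, the Python snippet below trains a logistic-regression classifier on synthetic genotype dosages and reports a hold-out ROC AUC; the subject count, SNP count, and effect structure are invented.

    # Illustrative classifier-evaluation sketch: logistic regression on synthetic
    # 0/1/2 genotype dosages with ROC AUC on a hold-out split.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n, m = 800, 40                                        # subjects, SNPs in ROH regions (assumed)
    X = rng.integers(0, 3, size=(n, m)).astype(float)     # genotype dosages
    signal = X[:, :5].sum(axis=1) - 5.0                   # a few truly associated SNPs (synthetic)
    y = (signal + rng.normal(scale=2.0, size=n) > 0).astype(int)  # 1 = lived beyond 90

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("hold-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))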
Procedia PDF Downloads 232
544 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors
Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy
Abstract:
The measurement of electrolytes has a high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high concentration in the blood (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite the current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of successful examples used commercially can be found. This fact can be attributed to the high cost involved in their production and the sustained degradation of the electrodes over time, which causes a signal drift in the measurements. Thus, there is an unmet need for the development of low-cost and robust sensors for the real-time monitoring of analyte concentrations in patients to allow the early detection and diagnosis of diseases. This paper reports a thin film ion-selective sensor for the evaluation of potassium ions in aqueous samples. As an alternative fabrication method, aerosol-assisted chemical vapor deposition (AACVD) was applied due to its cost-effectiveness and fine control over the film deposition. Such a technique does not require vacuum and is suitable for the coating of large surface areas and structures with complex geometries. This approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures onto 50 nm thin gold layers. The degradative processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including the polymer leaching process, mechanical desorption of nanoparticles, and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose to improve the long-term stability of the sensors was then carried out, showing an improvement in performance after 5 weeks. The antifouling properties of such a coating were assessed using a cutting-edge quartz microbalance sensor, allowing the quantification of the adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films and the surface energy and biomolecule adhesion was then found and used to optimize the protective film.
Keywords: hyperkalemia, drift, AACVD, organosilicon
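For context, nanogram-scale mass quantification with a quartz microbalance is conventionally obtained from the Sauerbrey relation between the resonance-frequency shift and the adsorbed mass; this is a standard textbook result quoted here for the reader's convenience, not a formula given by the authors:

    \Delta f \;=\; -\,\frac{2 f_{0}^{2}}{A\,\sqrt{\rho_{q}\,\mu_{q}}}\,\Delta m
    \qquad\Longleftrightarrow\qquad
    \Delta m \;=\; -\,C_{f}\,\Delta f

where f_0 is the fundamental resonance frequency, A the active electrode area, and ρ_q and μ_q the density and shear modulus of quartz; for a typical 5 MHz AT-cut crystal the sensitivity constant C_f is approximately 17.7 ng cm⁻² Hz⁻¹, which is what places protein adsorption measurements in the nanogram range.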
Procedia PDF Downloads 123
543 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia
Authors: Merih Welay Welesilassie
Abstract:
Language is a potent instrument that not only serves the purpose of communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps to safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape that extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than food embargoes. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture. This is particularly true following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent adoption of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts the linguistic genocide conceptual framework as an analytical tool to gain a deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling. Additionally, document analysis was performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and the desire to promote a single language and culture policy. This process, often referred to as ‘Amharanization,’ aimed to homogenize the culture and language of the society. The Amhara authorities have enacted several measures in pursuit of their objectives, including the outlawing of the Tigrigna language, punishment for speaking Tigrigna, imposition of the Amhara language and culture, mandatory relocation, and even heinous acts that have inflicted immense physical and emotional suffering upon members of the Tigrayan community. Upon conducting a comprehensive analysis of the contextual factors, actions, intentions, and consequences, it has been posited that there may be instances of linguistic genocide taking place in the Western Tigray region. The present study sheds light on the severe consequences that could arise from implementing monolingual and monocultural policies in multilingual areas. Through thoroughly scrutinizing the implications of such policies, this study provides insightful recommendations and directions for future research in this critical area.
Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray
Procedia PDF Downloads 65
542 Listening to Voices: A Meaning-Focused Framework for Supporting People with Auditory Verbal Hallucinations
Authors: Amar Ghelani
Abstract:
People with auditory verbal hallucinations (AVH) who seek support from mental health services commonly report feeling unheard and invalidated in their interactions with social workers and psychiatric professionals. Current mental health training and clinical approaches have proven to be inadequate in addressing the complex nature of voice hearing. Childhood trauma is a key factor in the development of AVH and can render people more vulnerable to hearing both supportive and/or disturbing voices. Lived experiences of racism, poverty, and immigration are also associated with the development of what is broadly classified as psychosis. Despite evidence affirming the influence of environmental factors on voice hearing, the Western biomedical system typically conceptualizes this experience as a symptom of genetically based mental illnesses which requires diagnosis and treatment. Overemphasis on psychiatric medications, referrals, and directive approaches to people's problems has shifted clinical interventions away from assessing and addressing problems directly related to AVH. The Maastricht approach offers voice hearers and mental health workers an alternative and respectful starting point for understanding and coping with voices. The approach was developed by voice hearers in partnership with mental health professionals and entails an innovative method to assess and create meaning from voice hearing and related life stressors. The objectives of the approach are to help people who hear voices: (1) understand the problems and/or people the voices may represent in their history, and (2) cope with distress and find solutions to related problems. The Maastricht approach has also been found to help voice hearers integrate emotional conflicts, reduce avoidance or fear associated with AVH, improve therapeutic relationships, and increase a sense of control over internal experiences. The proposed oral presentation will be guided by a recovery-oriented theoretical framework, which suggests that healing from psychological wounds occurs through social connections and community support systems. The presentation will start with a brainstorming exercise to identify participants' pre-existing knowledge of the subject matter. This will lead into a literature review on the relations between trauma, intersectionality, and AVH. An overview of the Maastricht approach and a review of research related to its therapeutic risks and benefits will follow. Participants will learn trauma-informed coping skills and questions which can help voice hearers make meaning from their experiences. The presentation will conclude with a review of resources and learning opportunities where participants can expand their knowledge of the Hearing Voices Movement and the Maastricht approach.
Keywords: Maastricht interview, recovery, therapeutic assessment, voice hearing
Procedia PDF Downloads 114
541 The Role of Law in the Transformation of Collective Identities in Nigeria
Authors: Henry Okechukwu Onyeiwu
Abstract:
Nigeria, with its rich tapestry of ethnicities, cultures, and religions, serves as a critical case study in understanding how law influences and shapes collective identities. This abstract delves into the historical context of legal systems in Nigeria, examining the colonial legacies that have influenced contemporary laws and how these laws interact with traditional practices and beliefs. The study examines the critical role of law in shaping and transforming collective identities in this diverse nation. The legal framework in Nigeria has evolved in response to historical, social, and political dynamics, influencing the way communities perceive themselves and interact with one another. This research highlights the interplay between law and collective identity, exploring how legal instruments, such as constitutions, statutes, and judicial rulings, have contributed to the formation, negotiation, and reformation of group identities over time. Moreover, contemporary legal debates surrounding issues such as citizenship, resource allocation, and communal conflicts further illustrate the law's role in identity formation. The legal recognition of different ethnic groups fosters a sense of belonging and collective identity among these groups, yet it simultaneously raises questions about inclusivity and equality. Laws concerning indigenous rights and affirmative action are essential in this discourse, as they reflect the necessity of balancing majority rule with minority rights, a challenge that Nigeria continues to navigate. By employing a multidisciplinary approach that integrates legal studies, sociology, and anthropology, the study analyses key historical milestones, such as colonial legal legacies, post-independence constitutional developments, and ongoing debates surrounding federalism and ethnic rights. It also investigates how laws affect social cohesion and conflict among Nigeria's diverse ethnic groups, as well as the role of law in promoting inclusivity and recognizing minority rights. Case studies are utilized to illustrate practical examples of legal transformations and their impact on collective identities in various Nigerian contexts, including land rights, religious freedoms, and ethnic representation in government. The findings reveal that while the law has the potential to unify disparate groups under a national identity, it can also exacerbate divisions when applied inequitably or in a way that favours particular groups over others. Ultimately, this study aims to shed light on the dual nature of law as both a tool for transformation and a potential source of conflict in the evolution of collective identities in Nigeria. By understanding these dynamics, policymakers and legal practitioners can develop strategies to foster unity and respect for diversity in a complex societal landscape.
Keywords: law, collective identity, Nigeria, ethnicity, conflict, inclusion, legal framework, transformation
Procedia PDF Downloads 26
540 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. The physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of the quality of distillates should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named according to the number of carbon atoms they contain after separation. LSRN consists of hydrocarbons containing five to six carbons, HSRN consists of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. The physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration technique and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods with over 90% accuracy.
Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
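To make the calibration pipeline concrete, the Python sketch below applies Savitzky-Golay preprocessing followed by PLS regression to synthetic stand-in spectra. The window length, polynomial order, derivative order, component count, and data are all assumed for illustration and are not the study's settings; the EMSC step is omitted for brevity.

    # Illustrative FT-NIR calibration sketch: SG first-derivative preprocessing
    # followed by cross-validated PLS regression on synthetic spectra.
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    wavenumbers = np.linspace(10000, 4000, 1500)
    y = rng.uniform(0.1, 1.0, size=400)                           # property tied to analyte level (synthetic)
    peak = np.exp(-0.5 * ((wavenumbers - 6000.0) / 80.0) ** 2)    # analyte absorption band (assumed)
    baseline = rng.normal(size=(400, 1)) * np.linspace(0.0, 1.0, 1500)  # sample-dependent baseline drift
    X = y[:, None] * peak + baseline + rng.normal(scale=0.01, size=(400, 1500))

    X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)  # SG 1st derivative
    pls = PLSRegression(n_components=6)
    scores = cross_val_score(pls, X_sg, y, cv=5, scoring="r2")
    print("cross-validated R^2 per fold:", scores.round(3))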
Procedia PDF Downloads 129
539 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique
Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham
Abstract:
Geomechanical characterization of rocks in detail, and its possible implications on flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen in the current study, representing different cementation states: a well-consolidated and a weakly-consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, which includes the registration of the deformed specimen images with the reference pristine dry rock image. The Digital Image Correlation (DIC) technique, based on the intensity of the registered 3D subsets, and particle tracking are utilized to map the displacement fields in each sample. The results suggest a complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly-consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock that underwent continuous compression reveals signs of a shear-band pattern. This suggests that for friable sandstones at small scales, the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e., particle tracking) proves superior in both tracking the grains and quantifying their kinematics (in terms of translations/rotations) with respect to any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the correlation of reference and secondary image intensities. Such an approach has previously been applied only to unconsolidated granular systems under pressure. We are applying this technique to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT
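The core idea behind subset-based digital image correlation can be illustrated with a short Python sketch: a reference subvolume is located inside the deformed volume by cross-correlation, and the peak position gives the integer-voxel displacement. The volumes, subset size, and search window below are synthetic assumptions, not the study's data or its full sub-voxel DIC algorithm.

    # Minimal 3-D subset correlation sketch: recover a known rigid translation of a
    # reference subvolume by locating the cross-correlation peak in a search window.
    import numpy as np
    from scipy.signal import correlate

    rng = np.random.default_rng(3)
    reference = rng.normal(size=(60, 60, 60))
    true_shift = (3, -2, 4)
    deformed = np.roll(reference, true_shift, axis=(0, 1, 2))   # fake rigid translation

    sub = reference[20:36, 20:36, 20:36]                        # 16^3 reference subset
    search = deformed[10:46, 10:46, 10:46]                      # local search window
    corr = correlate(search - search.mean(), sub - sub.mean(), mode="valid")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert the peak position back to a displacement of the subset within the window
    displacement = tuple(int(p) - 10 for p in peak)
    print("estimated displacement (voxels):", displacement)     # expect (3, -2, 4)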
Procedia PDF Downloads 189
538 Using Computer Vision and Machine Learning to Improve Facility Design for Healthcare Facility Worker Safety
Authors: Hengameh Hosseini
Abstract:
The design of large healthcare facilities, such as hospitals, multi-service-line clinics, and nursing facilities, that can accommodate patients with wide-ranging disabilities is a challenging endeavor and one that is poorly understood among healthcare facility managers, administrators, and executives. An even less-understood extension of this problem concerns the implications of weakly or insufficiently accommodative facility design for healthcare workers in physically intensive jobs who may also suffer from a range of disabilities and who are therefore at increased risk of workplace accident and injury. Combine this reality with the vast range of facility types, ages, and designs, and the problem of universal accommodation becomes even more daunting and complex. In this study, we focus on the implications of facility design for healthcare workers with low vision who also have physically active jobs. The points of difficulty are myriad: health service infrastructure, the equipment used in health facilities, and transport to and from appointments and other services can all pose a barrier to health care if they are inaccessible, less accessible, or even simply less comfortable for people with various disabilities. We conduct a series of surveys and interviews with employees and administrators of 7 facilities of a range of sizes and ownership models in the Northeastern United States, and we combine that corpus with in-facility observations and data collection to identify five major points of failure common to all the facilities that we concluded could pose safety threats to employees with vision impairments, ranging from very minor to severe. We determine that lack of design empathy is a major commonality among facility management and ownership. We subsequently propose three methods for remedying this lack of empathy-informed design and the dangers it poses to employees: the use of an existing open-sourced Augmented Reality application to simulate the low-vision experience for designers and managers; the use of a machine learning model we develop to automatically infer facility shortcomings from large datasets of recorded patient and employee reviews and feedback; and the use of a computer vision model fine-tuned on images of each facility to infer and predict facility features, locations, and workflows that could again pose meaningful dangers to visually impaired employees of each facility. After conducting a series of real-world comparative experiments with each of these approaches, we conclude that each is a viable solution under particular sets of conditions, and we finally characterize the range of facility types, workforce composition profiles, and work conditions under which each of these methods would be most apt and successful.
Keywords: artificial intelligence, healthcare workers, facility design, disability, visually impaired, workplace safety
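As a minimal illustration of the second proposed method, inferring facility shortcomings from free-text reviews, the Python sketch below trains a TF-IDF plus linear classifier on a handful of invented review snippets and labels; it is not the authors' model, and the categories and examples are assumptions.

    # Illustrative text-classification sketch: map free-text feedback to shortcoming
    # categories with TF-IDF features and logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "hallway lighting is very dim near the supply room",
        "signage contrast is poor, hard to read room numbers",
        "staff parking is far from the entrance",
        "elevator buttons have no tactile markings",
        "break room is comfortable and quiet",
        "glare from the windows makes the charts hard to read",
    ]
    labels = ["lighting", "signage", "access", "tactile", "none", "lighting"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    model.fit(reviews, labels)
    print(model.predict(["the corridor near radiology is poorly lit"]))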
Procedia PDF Downloads 116
537 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator
Authors: Jaeyoung Lee
Abstract:
Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perceived performance in driving environments that vary with time of day and season. Image segmentation methods based on deep learning, which has recently evolved rapidly, stably provide high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be performed in a limited time, in order to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA size is fixed, normal convolution may be more efficient than depthwise separable convolution, depending on the memory access overhead. Thus, the convolution type is decided according to the output stride in order to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using the extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it uses two ASPPs to obtain high-quality contexts from the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.
Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network
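The two design ideas, a fixed total channel width recovered through parallel branches and a convolution type chosen by output stride, can be sketched in PyTorch as below. This is an illustrative block only, not the authors' MMANet; the channel counts, branch count, dilation rates, and stride threshold are assumed values.

    # Illustrative parallel-branch block with a fixed total output width; the
    # convolution type switches between plain and depthwise-separable by output stride.
    import torch
    import torch.nn as nn

    class ParallelBranchBlock(nn.Module):
        def __init__(self, in_ch, out_ch=64, branches=4, output_stride=16):
            super().__init__()
            branch_ch = out_ch // branches          # fixed total width split across branches
            def conv(cin, cout, dil):
                if output_stride >= 16:             # deeper in the network: plain 3x3 conv
                    return nn.Conv2d(cin, cout, 3, padding=dil, dilation=dil, bias=False)
                # earlier in the network: depthwise separable 3x3 conv
                return nn.Sequential(
                    nn.Conv2d(cin, cin, 3, padding=dil, dilation=dil, groups=cin, bias=False),
                    nn.Conv2d(cin, cout, 1, bias=False),
                )
            self.branches = nn.ModuleList(
                conv(in_ch, branch_ch, dil) for dil in (1, 2, 4, 8)[:branches]
            )
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            y = torch.cat([b(x) for b in self.branches], dim=1)   # channels stay at out_ch
            return self.act(self.bn(y))

    x = torch.randn(1, 32, 120, 160)                 # e.g. a downsampled 640x480 feature map
    print(ParallelBranchBlock(32)(x).shape)          # torch.Size([1, 64, 120, 160])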
Procedia PDF Downloads 129
536 The Use of Prestige Language in Tennessee Williams’s "A Streetcar Named Desire"
Authors: Stuart Noel
Abstract:
In A Streetcar Named Desire, Tennessee Williams presents Blanche DuBois, a most complex and intriguing character who often uses prestige language to project the image of an upper-class speaker and to disguise her darker and more complicated self. She embodies various fascinating and contrasting characteristics. Like New Orleans (the locale of the play), Blanche represents two opposing images. One image projects genteel Southern charm and beauty, speaking formally and using prestige language and what some linguists refer to as "hypercorrection," and the other reveals a soiled, deteriorating façade, full of decadence and illusion. Williams said on more than one occasion that Blanche's use of such language was a direct reflection of her personality and character (as a high school English teacher). Prestige language is an exaggeratedly elevated, pretentious, and oftentimes melodramatic form of one's language, incorporating superstandard or more standard speech than usual in order to project a highly authoritative individual identity. Speech styles carry personal identification meaning not only because they are closely associated with certain social classes but also because they tend to be associated with certain conversational contexts. Features which may be considered "elaborated" in form (for example, full forms vs. contractions) tend to cluster together in speech registers/styles which are typically considered more formal and/or of higher social prestige, such as academic lectures and news broadcasts. Members of higher social classes have access to the elaborated registers which characterize formal writings and pre-planned speech events, such as lectures, while members of lower classes are relegated to using the more economical registers associated with casual, face-to-face conversational interaction, since they do not participate in as many planned speech events as upper-class speakers. Tennessee Williams's work is characteristically concerned with the conflict between the illusions of an individual and the reality of his or her situation, equated with a conflict between truth and beauty. An examination of Blanche DuBois reveals a recurring theme of art and decay and the use of prestige language to reveal artistry in language and to hide a deteriorating self. His graceful and poetic writing personifies her downfall and deterioration. Her loneliness and disappointment are the things so often strongly feared by the sensitive artists and heroes of the world. Hers is also a special and delicate human spirit that is often misunderstood and repressed by society. Blanche is afflicted with a psychic illness growing out of her inability to face the harshness of human existence. She is a sensitive, artistic, and beauty-haunted creature who is avoiding her own humanity while hiding behind her use of prestige language. And she embodies a partial projection of Williams himself.
Keywords: American drama, prestige language, Southern American literature, Tennessee Williams
Procedia PDF Downloads 372
535 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention among the scientific community over the past several decades because of their significance in various biological systems. 7-Azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. Extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, like 2,7- and 2,6-diazaindoles, are proposed to have even better photophysical properties due to the presence of the aza group at the 2-position. However, although solution-phase studies suggest the relevance of these molecules, to the best of our knowledge no experimental gas-phase studies have been reported yet. In our current investigation, we present the first gas-phase spectroscopic data for 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). In this work, we have employed state-of-the-art laser spectroscopic methods such as laser-induced fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of the bare 2,7-DAI is found at 33910 cm⁻¹, whereas the origin band corresponding to the S1 ← S0 transition of 2,7-DAI-H2O is positioned at 33074 cm⁻¹. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited-state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively challenging molecule to study. The ionization potential is reduced by 0.14 eV in the case of the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, on comparison with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by 729 and 280 cm⁻¹, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared spectra (FDIR) and resonant ion-dip infrared spectroscopy (IDIR) and were obtained at 3523 and 3467 cm⁻¹, respectively. The lower value of ν(N-H) in the electronically excited state implies a higher acidity of the group compared to the ground state. Moreover, we have carried out extensive computational analysis, which suggests that the energy barrier in the excited state reduces significantly as we increase the number of catalytic solvent molecules (S = H2O, NH3) as well as the polarity of the solvent molecules. We found that the ammonia molecule is a better candidate for hydrogen transfer compared to water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores.
Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy
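For reference, the solvent-induced red shift quoted above follows directly from the two origin bands; using the standard conversion 1 cm⁻¹ ≈ 1.2398 × 10⁻⁴ eV (a textbook constant, not a value given by the authors), it corresponds to roughly 0.1 eV:

    \Delta\tilde{\nu} \;=\; 33910\ \mathrm{cm^{-1}} - 33074\ \mathrm{cm^{-1}} \;=\; 836\ \mathrm{cm^{-1}} \;\approx\; 0.104\ \mathrm{eV}

which is consistent with the S1 state being stabilized more than S0 by the bound water molecule.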
Procedia PDF Downloads 75
534 Cities Under Pressure: Unraveling Urban Resilience Challenges
Authors: Sherine S. Aly, Fahd A. Hemeida, Mohamed A. Elshamy
Abstract:
In the face of rapid urbanization and the myriad challenges posed by climate change, population growth, and socio-economic disparities, fostering urban resilience has become paramount. This abstract offers a comprehensive overview of the study on "Urban Resilience Challenges," exploring the background, methodologies, major findings, and concluding insights. The paper unveils a spectrum of challenges encompassing environmental stressors and deep-seated socio-economic issues, such as unequal access to resources and opportunities. Emphasizing their interconnected nature, the study underscores the imperative for holistic and integrated approaches to urban resilience, recognizing the intricate web of factors shaping the urban landscape. Urbanization has witnessed an unprecedented surge, transforming cities into dynamic and complex entities. With this growth, however, comes an array of challenges that threaten the sustainability and resilience of urban environments. This study seeks to unravel these multifaceted urban resilience challenges, exploring their origins and implications for contemporary cities. Cities serve as hubs of economic, social, and cultural activities, attracting diverse populations seeking opportunities and a higher quality of life. However, the urban fabric is increasingly strained by climate-related events, infrastructure vulnerabilities, and social inequalities. Understanding the nuances of these challenges is crucial for developing strategies that enhance urban resilience and ensure the longevity of cities as vibrant and adaptive entities. This paper endeavors to discern strategic guidelines for enhancing urban resilience amidst the dynamic challenges posed by rapid urbanization. The study aims to distill actionable insights that can inform strategic approaches, guiding the formulation of effective strategies to fortify cities against multifaceted pressures. The study employs a multifaceted approach to dissect urban resilience challenges. A qualitative method is employed, including comprehensive literature reviews and analysis of data on urban vulnerabilities, which provided valuable insights into the lived experiences of resilience challenges in diverse urban settings. In conclusion, this study underscores the urgency of addressing urban resilience challenges to ensure the sustained vitality of cities worldwide. The interconnected nature of these challenges necessitates a paradigm shift in urban planning and governance. By adopting holistic strategies that integrate environmental, social, and economic considerations, cities can navigate the complexities of the 21st century. The findings provide a roadmap for policymakers, planners, and communities to collaboratively forge resilient urban futures that withstand the challenges of an ever-evolving urban landscape.
Keywords: resilient principles, risk management, sustainable cities, urban resilience
Procedia PDF Downloads 54
533 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar
Authors: Gary Peach, Furqan Hameed
Abstract:
The Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/sec. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with the presence of vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to the pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of the geophysical and geotechnical investigations. Electrical resistivity tomography (ERT) and a seismic reflection survey were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system facility installed on the TBM. The Boring Electric Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to 3 tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken from tunnel face inspections and excavated material produced by the TBM. The BEAM data were continuously monitored to check the variations in resistivity and percentage frequency effect (PFE) of the ground. This system provided information about rock mass conditions, potential karst risk, and the potential for water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with more confidence and less geotechnical risk. The approach used for the prediction of rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electrical resistivity tomography (ERT) surveys was concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey
Procedia PDF Downloads 244
532 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India
Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari
Abstract:
The northward convergence of the Indian plate has a dominating influence on the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creeping and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrock have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of the GLAA are corroborated with historical landslide events and ultimately used for the generation of landslide susceptibility maps of the study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. However, roughly 19.89% falls within the zone of moderate vulnerability, 38.06% is classified as vulnerable, and 29% falls within the highly vulnerable zones, posing risks of geohazards, including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas. It offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.
Keywords: landslide vulnerability, geohazard, GLAA, upper Alaknanda Basin, Uttarakhand Himalaya
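As a generic illustration of how gradient anomalies along a river long profile can be flagged (the study's GLAA formulation may differ), the Python sketch below computes Hack's stream length-gradient (SL) index on an invented profile and marks reaches whose SL greatly exceeds the profile average; the profile, knickpoint, and threshold are all assumptions.

    # Illustrative SL-index computation along a synthetic longitudinal river profile;
    # reaches with unusually high SL are flagged as gradient anomalies.
    import numpy as np

    L = np.linspace(1e3, 60e3, 200)                 # distance downstream from source (m), assumed
    H = 3500.0 - 400.0 * np.log(L / 1e3)            # smooth concave-up profile (synthetic)
    H[130:] -= 80.0                                 # an 80 m knickpoint (over-steepened reach)

    dH = -np.diff(H)                                # elevation drop per reach (m)
    dL = np.diff(L)                                 # reach length (m)
    L_mid = 0.5 * (L[:-1] + L[1:])                  # distance to reach midpoint (m)
    SL = (dH / dL) * L_mid                          # Hack's stream length-gradient index

    threshold = 2.0 * SL.mean()                     # simple anomaly threshold (assumed)
    print("anomalous reach indices:", np.where(SL > threshold)[0])   # expect index 129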
Procedia PDF Downloads 72
531 Monitoring of Indoor Air Quality in Museums
Authors: Olympia Nisiforou
Abstract:
The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behaviors. Even if such exhibits are now located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentrations of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity, and lighting conditions, as well as microbial counts, have been performed using conventional techniques. Measurements showed that most of the monitored physicochemical parameters did not vary significantly between the various sampling locations. Seasonal fluctuations of ammonia were observed, showing higher concentrations in the summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable gas chromatography-mass spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane, and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate) and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor/outdoor sources (i.e., wooden frames, painted exhibits, carpets, the ventilation system, and outdoor air) of the above organic compounds, the concentrations of some of them within the areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. High levels of particulate matter (PM), fungi, and bacteria were found in the museum areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that has been developed within the present work. The developed system is able to monitor the average concentrations (on a bidaily basis) of several pollutants and presents several innovative features, including prompt alerting when the average concentrations of monitored pollutants exceed the limit values defined by the user.
Keywords: exhibitions, indoor air quality, VOCs, pollution
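The alerting logic described in the last sentence can be sketched in a few lines of Python: average each pollutant over 12-hour windows and flag windows that exceed user-defined limits. The pollutant names, limit values, and readings below are invented for illustration and are not the system's actual configuration.

    # Illustrative bidaily-average alerting sketch on synthetic hourly readings.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(4)
    idx = pd.date_range("2024-01-01", periods=24 * 14, freq="h")     # two weeks, hourly
    readings = pd.DataFrame({
        "NH3_ppb": rng.normal(18, 6, idx.size),
        "PM2.5_ugm3": rng.normal(20, 8, idx.size),
    }, index=idx)

    limits = {"NH3_ppb": 25.0, "PM2.5_ugm3": 25.0}                   # user-defined limits (assumed)
    averages = readings.resample("12h").mean()                       # bidaily averages
    alerts = averages[averages.gt(pd.Series(limits)).any(axis=1)]    # windows exceeding any limit
    print(alerts.round(1))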
Procedia PDF Downloads 123
530 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive the particle effective radius and the volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even when a regularization method is used, the task remains difficult since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration. Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 μm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts of 1.5 and 1.6, in all modes the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
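To illustrate the regularization concept in isolation, the Python sketch below solves a small synthetic ill-posed problem by truncated SVD: a smoothing kernel maps a mono-modal size distribution to noisy data, the unregularized solve amplifies the noise, and keeping only the leading singular values recovers a stable solution. The kernel, noise level, and truncation index are assumptions for the example, not the actual lidar kernels or the paper's hybrid parameter choice.

    # Illustrative truncated-SVD (TSVD) regularization on a synthetic ill-posed problem.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 80
    r = np.linspace(0.05, 6.0, n)                          # particle radius grid (arbitrary units)
    x_true = np.exp(-0.5 * ((np.log(r) - np.log(1.0)) / 0.4) ** 2)   # mono-modal log-type PSD

    # smoothing (ill-conditioned) forward operator: broad Gaussian kernel rows
    K = np.exp(-((r[:, None] - r[None, :]) ** 2) / (2 * 0.8 ** 2))
    b = K @ x_true + rng.normal(scale=1e-3, size=n)        # noisy synthetic "measurements"

    U, s, Vt = np.linalg.svd(K)
    print("condition number of K:", f"{s[0] / s[-1]:.1e}")

    def tsvd_solve(k):
        # keep only the k largest singular values; the rest are treated as zero
        return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

    naive = np.linalg.solve(K, b)                          # unregularized: noise blows up
    reg = tsvd_solve(k=12)
    print("max error, naive solve :", np.abs(naive - x_true).max().round(2))
    print("max error, TSVD (k=12) :", np.abs(reg - x_true).max().round(4))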
Procedia PDF Downloads 343
529 Digital Twins in the Built Environment: A Systematic Literature Review
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulations, forecasting abilities, and the management of operations, DT do not fall under a singular scope. This lack of a singular scope makes defining and leveraging the potential uses of DT difficult, and risks turning them into a missed opportunity. Despite their recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to their operational-stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of DT. Such consolidation is indispensable to ensure that future research is undertaken on a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis was conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and the content of these papers was translated into a thorough classification of DT. In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonisation goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision-making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics, and other DT. Finally, a discussion of each area of research is provided. A table of definitions of DT across the reviewed literature is provided, seeking to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancements in the implementation of DT within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, this paper contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.
Keywords: built environment, design, digital twins, literature review
Procedia PDF Downloads 81528 Thermo-Mechanical Processing Scheme to Obtain Micro-Duplex Structure Favoring Superplasticity in an As-Cast and Homogenized Medium Alloyed Nickel Base Superalloy
Authors: K. Sahithya, I. Balasundar, Pritapant, T. Raghua
Abstract:
A Ni-based superalloy with a nominal composition of Ni-14% Cr-11% Co-5.8% Mo-2.4% Ti-2.4% Nb-2.8% Al-0.26% Fe-0.032% Si-0.069% C (all in wt%) is used for turbine discs in a variety of aero engines. As with any other superalloy, the primary processing of the as-cast material poses a major challenge due to its complex alloy chemistry. The challenge was circumvented by characterizing the different phases present in the material, optimizing the homogenization treatment, and identifying a suitable thermomechanical processing window using dynamic materials modeling. The as-cast material was subjected to homogenization at 1200°C for a soaking period of 8 hours and quenched using different media. Water quenching (WQ) after homogenization resulted in very fine spherical γꞌ precipitates of sizes 30-50 nm, whereas furnace cooling (FC) after homogenization resulted in a bimodal distribution of precipitates (primary gamma prime of size 300 nm and secondary gamma prime of size 5-10 nm). MC-type primary carbides, stable up to the melting point of the material, were found in both WQ and FC samples. The deformation behaviour of both materials below (1000-1100°C) and above (1100-1175°C) the gamma prime solvus was evaluated by subjecting the material to a series of compression tests at different constant true strain rates (0.0001/sec to 1/sec). A detailed TEM examination of the precipitate-dislocation interaction mechanisms revealed precipitate shearing and Orowan looping as the mechanisms governing deformation in WQ and FC, respectively. Incoherent/semi-coherent gamma prime precipitates in the FC material facilitate better workability, whereas the coherent precipitates in the WQ material contribute to higher resistance to deformation. Both materials exhibited discontinuous dynamic recrystallization (DDRX) above the gamma prime solvus temperature. The recrystallization kinetics were slower in the WQ material: very fine grain boundary carbides (≤ 300 nm) retarded recrystallisation in WQ, whereas coarse carbides (1-5 µm) facilitated particle-stimulated nucleation in the FC material. The FC material was cogged (primary hot working) at 1120˚C and 0.03/sec, resulting in significant grain refinement, i.e., from 3000 μm to 100 μm. The primary-processed material was subsequently subjected to intensive thermomechanical deformation, reducing the temperature by 50˚C in each processing step, with intermittent heterogenization treatments at selected temperatures aimed at simultaneous coarsening of the gamma prime precipitates and refinement of the gamma matrix grains. The heterogeneous annealing treatment carried out resulted in gamma grains of 10 μm and gamma prime precipitates of 1-2 μm. Further thermomechanical processing of the material was carried out at 1025˚C to increase the homogeneity of the obtained micro-duplex structure.Keywords: superalloys, dynamic material modeling, nickel alloys, dynamic recrystallization, superplasticity
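Because the processing window was identified using dynamic materials modeling, a brief sketch of how a processing-map quantity is obtained from flow-stress data may help; the stress values below are hypothetical, and only the standard relations for the strain-rate sensitivity m and the power dissipation efficiency eta = 2m/(m + 1) are assumed.

```python
import numpy as np

# Hypothetical flow-stress readings (MPa) at a fixed strain and temperature,
# for the range of constant true strain rates used in the compression tests.
strain_rates = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])        # 1/s
flow_stress = np.array([180.0, 230.0, 300.0, 390.0, 500.0])   # MPa (illustrative only)

# Strain-rate sensitivity m = d(ln sigma) / d(ln strain_rate),
# estimated here as a least-squares slope in log-log space.
m = np.polyfit(np.log(strain_rates), np.log(flow_stress), 1)[0]

# Power dissipation efficiency used in processing maps: eta = 2m / (m + 1).
eta = 2.0 * m / (m + 1.0)
print(f"strain-rate sensitivity m = {m:.3f}")
print(f"power dissipation efficiency eta = {eta:.2%}")
```

Repeating this calculation over a grid of temperatures and strain rates is what produces the processing map from which a hot-working window such as 1120˚C and 0.03/sec can be selected.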
Procedia PDF Downloads 121527 Rotterdam in Transition: A Design Case for a Low-Carbon Transport Node in Lombardijen
Authors: Halina Veloso e Zarate, Manuela Triggianese
Abstract:
The urban challenges posed by rapid population growth, climate adaptation, and sustainable living have compelled Dutch cities to reimagine their built environment and transportation systems. As a pivotal contributor to CO₂ emissions, the transportation sector in the Netherlands demands innovative solutions for transitioning to low-carbon mobility. This study investigates the potential of transit oriented development (TOD) as a strategy for achieving carbon reduction and sustainable urban transformation. Focusing on the Lombardijen station area in Rotterdam, which is targeted for significant densification, this paper presents a design-oriented exploration of a low-carbon transport node. By employing a research-by-design methodology, this study delves into multifaceted factors and scales, aiming to propose future scenarios for Lombardijen. Drawing from a synthesis of existing literature, applied research, and practical insights, a robust design framework emerges. To inform this framework, governmental data concerning the built environment and material embodied carbon are harnessed. However, the restricted access to crucial datasets, such as property ownership information from the cadastre and embodied carbon data from De Nationale Milieudatabase, underscores the need for improved data accessibility, especially during the concept design phase. The findings of this research contribute fundamental insights not only to the Lombardijen case but also to TOD studies across Rotterdam's 13 nodes and similar global contexts. Spatial data related to property ownership facilitated the identification of potential densification sites, underscoring its importance for informed urban design decisions. Additionally, the paper highlights the disparity between the essential role of embodied carbon data in environmental assessments for building permits and its limited accessibility due to proprietary barriers. Although this study lays the groundwork for sustainable urbanization through TOD-based design, it acknowledges an area of future research worthy of exploration: the socio-economic dimension. Given the complex socio-economic challenges inherent in the Lombardijen area, extending beyond spatial constraints, a comprehensive approach demands integration of mobility infrastructure expansion, land-use diversification, programmatic enhancements, and climate adaptation. While the paper adopts a TOD lens, it refrains from an in-depth examination of issues concerning equity and inclusivity, opening doors for subsequent research to address these aspects crucial for holistic urban development.Keywords: Rotterdam zuid, transport oriented development, carbon emissions, low-carbon design, cross-scale design, data-supported design
Procedia PDF Downloads 84526 Liquid Illumination: Fabricating Images of Fashion and Architecture
Authors: Sue Hershberger Yoder, Jon Yoder
Abstract:
“The appearance does not hide the essence, it reveals it; it is the essence.”—Jean-Paul Sartre, Being and Nothingness Three decades ago, transarchitect Marcos Novak developed an early form of algorithmic animation he called “liquid architecture.” In that project, digitally floating forms morphed seamlessly in cyberspace without claiming to evolve or improve. Change itself was seen as inevitable. And although some imagistic moments certainly stood out, none was hierarchically privileged over another. That project challenged longstanding assumptions about creativity and artistic genius by posing infinite parametric possibilities as inviting alternatives to traditional notions of stability, originality, and evolution. Through ephemeral processes of printing, milling, and projecting, the exhibition “Liquid Illumination” destabilizes the solid foundations of fashion and architecture. The installation is neither worn nor built in the conventional sense, but—like the sensual art forms of fashion and architecture—it is still radically embodied through the logics and techniques of design. Appearances are everything. Surface pattern and color are no longer understood as minor afterthoughts or vapid carriers of dubious content. Here, they become essential but ever-changing aspects of precisely fabricated images. Fourteen silk “colorways” (a term from the fashion industry) are framed selections from ongoing experiments with intricate pattern and complex color configurations. Whether these images are printed on fabric, milled in foam, or illuminated through projection, they explore and celebrate the untapped potentials of the surficial and superficial. Some components of individual prints appear to float in front of others through stereoscopic superimpositions; some figures appear to melt into others due to subtle changes in hue without corresponding changes in value; and some layers appear to vibrate via moiré effects that emerge from unexpected pattern and color combinations. The liturgical atmosphere of Liquid Illumination is intended to acknowledge that, like the simultaneously sacred and superficial qualities of rose windows and illuminated manuscripts, artistic and religious ideologies are also always malleable. The intellectual provocation of this paper pushes the boundaries of current thinking concerning viable applications for fashion print designs and architectural images—challenging traditional boundaries between fine art and design. The opportunistic installation of digital printing, CNC milling, and video projection mapping in a gallery that is normally reserved for fine art exhibitions raises important questions about cultural/commercial display, mass customization, digital reproduction, and the increasing prominence of surface effects (color, texture, pattern, reflection, saturation, etc.) across a range of artistic practices and design disciplines.Keywords: fashion, print design, architecture, projection mapping, image, fabrication
Procedia PDF Downloads 88525 Interdependence of Vocational Skills and Employability Skills: Example of an Industrial Training Centre in Central India
Authors: Mahesh Vishwakarma, Sadhana Vishwakarma
Abstract:
Vocational education includes all kinds of education which help students acquire skills related to a certain profession, art, or activity so that they are able to exercise that profession, art, or activity after acquiring such a qualification. However, in the global economy of the modern world, job seekers are expected to have certain soft skills over and above the technical knowledge and skills acquired in their areas of expertise. These soft skills include, but are not limited to, interpersonal communication, understanding, personal attributes, problem-solving, working in a team, and quick adaptability to the workplace environment. Employers now seek not only hands-on, job-related skills and competencies but also a complex of attitudinal dispositions and affective traits in their prospective employees. This study was performed to identify the employability skills of technical students from an Industrial Training Centre (ITC) in central India. It also aimed to convey a message to currently enrolled students that, to remain relevant in the job market, they would need to constantly adapt to changes and evolving requirements in the work environment, including the use of updated technologies. Five hypotheses were formulated and tested on the employability skills of students as a function of gender, trade, work experience, personal attributes, and IT skills. Data were gathered with the help of the center’s training officers, who approached 200 recently graduated students from the center and administered the instrument to them. All 200 respondents returned the completed instrument. The instrument used for the study consisted of two sections: demographic details and employability skills. To measure the employability skills of the trainees, the instrument was developed by referring to several instruments developed by past researchers for similar studies. The first section, on demographic details, recorded the age, gender, trade, year of passing, interviews faced, and employment status of the respondents. The second section, on employability skills, was categorized into seven specific skills: basic vocational skills; personal attributes; imagination skills; optimal management of resources; information-technology skills; interpersonal skills; adapting to new technologies. The reliability and validity of the instrument were checked. The findings revealed valuable information on the relationship and interdependence of vocational education and employability skills of students in the central Indian scenario. They also suggest supplementing the existing vocational education programs with a few soft skills and competencies so as to develop a superior workforce better equipped to face the job market. The findings of the study can be used as an example by the management of government and private industrial training centers operating in other parts of the Asian region. Future research can be undertaken on a greater population base from different geographical regions and backgrounds for an enhanced outcome.Keywords: employability skills, vocational education, industrial training centers, students
Procedia PDF Downloads 132524 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires significant computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and especially to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an off-line RL algorithm that is known to be data-efficient and can learn better policies than on-line RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, and thus agent learning can be performed without further interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.Keywords: batch-constrained reinforcement learning, control, design, optimal
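As a minimal sketch of the batch-constrained idea, the Python snippet below runs tabular Q-learning on a fixed buffer generated by a toy islanded-microgrid model with a discretized battery state of charge. The dynamics, rewards, and thresholds are assumptions for illustration only and do not correspond to the authors' deep BCQ implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_soc, n_act = 11, 3      # discretized battery state of charge; actions: charge / idle / discharge
GAMMA, TAU = 0.95, 0.3    # discount factor; behaviour-frequency threshold (relative to uniform)

def step(s, a):
    """Toy islanded-microgrid transition (all dynamics illustrative)."""
    pv = rng.integers(0, 3)        # photovoltaic generation
    load = rng.integers(0, 3)      # local power demand
    discharge = a - 1              # -1 charge, 0 idle, +1 discharge
    s_next = int(np.clip(s - discharge + (pv - load), 0, n_soc - 1))
    reward = -abs(pv + max(discharge, 0) - load)   # penalize power imbalance
    return s_next, float(reward)

# 1) Collect a fixed buffer with a simple behaviour policy; no interaction happens afterwards.
buffer, s = [], n_soc // 2
for _ in range(5_000):
    a = int(rng.integers(0, n_act))
    s_next, r = step(s, a)
    buffer.append((s, a, r, s_next))
    s = s_next

# 2) Batch-constrained Q-learning: bootstrap only over actions the behaviour
#    policy selected often enough in the next state.
counts = np.zeros((n_soc, n_act))
for s, a, _, _ in buffer:
    counts[s, a] += 1
freq = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
allowed = freq >= TAU / n_act

Q = np.zeros((n_soc, n_act))
for _ in range(20):                          # passes over the fixed buffer
    for s, a, r, s_next in buffer:
        q_next = Q[s_next][allowed[s_next]]
        target = r + GAMMA * (q_next.max() if q_next.size else 0.0)
        Q[s, a] += 0.1 * (target - Q[s, a])

print("greedy batch-constrained policy per SoC level:", Q.argmax(axis=1))
```

In a deep-RL setting the frequency filter is replaced by a generative model of the behaviour policy, but the structure of learning from a fixed buffer without further interaction sampling is the same.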
Procedia PDF Downloads 122523 Leveraging Remote Assessments and Central Raters to Optimize Data Quality in Rare Neurodevelopmental Disorders Clinical Trials
Authors: Pamela Ventola, Laurel Bales, Sara Florczyk
Abstract:
Background: Fully remote or hybrid administration of clinical outcome measures in rare neurodevelopmental disorders trials is increasing due to the ongoing pandemic and recognition that remote assessments reduce the burden on families. Many assessments in rare neurodevelopmental disorders trials are complex; however, remote/hybrid trials readily allow for the use of centralized raters to administer and score the scales. The use of centralized raters has many benefits, including reducing site burden; however, a specific impact on data quality has not yet been determined. Purpose: The current study has two aims: a) evaluate differences in data quality between administrations of a standardized clinical interview completed by centralized raters and those completed by site raters, and b) evaluate improvement in the accuracy of scoring standardized developmental assessments when scored centrally compared to when scored by site raters. Methods: For aim 1, the Vineland-3, a widely used measure of adaptive functioning, was administered by site raters (n = 52) participating in one of four rare disease trials. The measure was also administered as part of two additional trials that utilized central raters (n = 7). Each rater completed a comprehensive training program on the assessment. Following completion of the training, each clinician completed a Vineland-3 with a mock caregiver. Administrations were recorded and reviewed by a neuropsychologist for administration and scoring accuracy. Raters were able to certify for the trials after demonstrating an accurate administration of the scale. For site raters, 25% of each rater’s in-study administrations were reviewed by a neuropsychologist for accuracy of administration and scoring. For central raters, the first two administrations and every 10th administration were reviewed. Aim 2 evaluated the added benefit of centralized scoring on the accuracy of scoring of the Bayley-3, a comprehensive developmental assessment widely used in rare neurodevelopmental disorders trials. Bayley-3 administrations across four rare disease trials were centrally scored. For all administrations, the site rater who administered the Bayley-3 scored the scale, and a centralized rater reviewed the video recordings of the administrations and also scored the scales to confirm accuracy. Results: For aim 1, site raters completed 138 Vineland-3 administrations. Of the 138 administrations, 53 were reviewed by a neuropsychologist. Four of the administrations had errors that compromised the validity of the assessment. The central raters completed 180 Vineland-3 administrations, 38 administrations were reviewed, and none had significant errors. For aim 2, 68 administrations of the Bayley-3 were reviewed and scored by both a site rater and a centralized rater. Of these administrations, 25 had errors in scoring that were corrected by the central rater. Conclusion: In rare neurodevelopmental disorders trials, sample sizes are often small, so data quality is critical. The use of central raters inherently decreases site burden, but it also decreases rater variance, as illustrated by the small team of central raters (n = 7) needed to conduct all of the assessments (n = 180) in these trials compared to the number of site raters (n = 53) required for even fewer assessments (n = 138).
In addition, the use of central raters dramatically improves the quality of scoring the assessments.Keywords: neurodevelopmental disorders, clinical trials, rare disease, central raters, remote trials, decentralized trials
Procedia PDF Downloads 172522 The Cost of Beauty: Insecurity and Profit
Authors: D. Cole, S. Mahootian, P. Medlock
Abstract:
This research contributes to existing knowledge of the complexities surrounding women’s relationship to beauty standards by examining their lived experiences. While there is much academic work on the effects of culturally imposed and largely unattainable beauty standards, the arguments tend to fall into two paradigms. On the one hand is the radical feminist perspective, which argues that women are subjected to absolute oppression within the patriarchal system in which beauty standards have been constructed. This position advocates for a complete restructuring of social institutions to liberate women from all types of oppression. On the other hand, there are liberal feminist arguments that focus on choice, arguing that women’s agency in how to present themselves is empowerment. These arguments center around what women do within the patriarchal system in order to liberate themselves. However, there is very little research on the lived experiences of women negotiating these two realms: the complex negotiation between the pressure to adhere to cultural beauty standards and the agency of self-expression and empowerment. By exploring beauty standards through the intersection of societal messages (including macro-level processes such as social media and advertising as well as smaller-scale interactions such as families and peers) and lived experiences, this study seeks to provide a nuanced understanding of how women navigate and negotiate their own presentation and sense of self-identity. Current research shows a rise in the incidence of body dysmorphia, depression, and anxiety since the advent of social media. Approximately 91% of women are unhappy with their bodies and resort to dieting to achieve their ideal body shape, but only 5% of women naturally possess the body type often portrayed in American movies and media. It is, therefore, crucial that we begin talking about the processes that are affecting self-image and mental health. A question that arises is: given these negative effects, why do companies continue to advertise and target women with standards that very few could possibly attain? One obvious answer is that keeping beauty standards largely unattainable enables the beauty and fashion industries to make large profits by promising products and procedures that will bring one up to “standard”. The creation of dissatisfaction for some is profit for others. This research utilizes qualitative methods: interviews, questionnaires, and focus groups to investigate women’s relationships to beauty standards and empowerment. To this end, we reached out to potential participants through a video campaign on social media: short clips on Instagram, Facebook, and TikTok and a longer clip on YouTube inviting users to take part in the study. Participants are asked to react to images, videos, and other beauty-related texts. The findings of this research have implications for policy development, advocacy, and interventions aimed at promoting healthy inclusivity and the empowerment of women.Keywords: women, beauty, consumerism, social media
Procedia PDF Downloads 61521 Influence of Disintegration of Sida hermaphrodita Silage on Methane Fermentation Efficiency
Authors: Marcin Zielinski, Marcin Debowski, Paulina Rusanowska, Magda Dudek
Abstract:
As a result of sonication, the destruction of complex biomass structures leads to an increase in the biogas yield from the conditioned material. First, the amount of organic matter released into solution due to disintegration was determined. This parameter was determined from changes in the carbon content in the liquid phase of the conditioned substrate. The amount of carbon in the liquid phase increased as the sonication time was prolonged to 16 min. A further increase in the duration of sonication did not cause a statistically significant increase in the amount of organic carbon in the liquid phase. The disintegrated material was then used for respirometric measurements to determine the impact of the conditioning process on methane fermentation effectiveness. The relationship between the amount of energy introduced into the lignocellulosic substrate and the amount of biogas produced was demonstrated. A statistically significant increase in the amount of biogas was observed up to a sonication time of 16 min. A further increase in the energy applied in the conditioning process did not significantly increase biogas production from the treated substrate. At that point, biogas production from the conditioned substrate was 17% higher than from the reference biomass. The ultrasonic disintegration method did not significantly affect the observed biogas composition. In all series, the methane content of the biogas produced from the conditioned substrate was similar to that obtained with the raw substrate sample (51.1%). Another method of substrate conditioning was hydrothermal depolymerization. This method consists of applying increased temperature and pressure to the substrate. These phenomena destroy the structure of the processed material and release organic compounds into the solution, which should increase the amount of biogas produced from the treated biomass. The hydrothermal depolymerization was conducted using an innovative microwave heating method. Control measurements were performed using conventional heating. The obtained results indicate a relationship between the depolymerization temperature and the amount of biogas. The biogas production coefficients increased significantly as the depolymerization temperature increased to 150°C. Further raising the depolymerization temperature to 180°C did not significantly increase the amount of biogas produced in the respirometric tests. As a result of hydrothermal depolymerization using microwaves at 150°C for 20 min, the rate of biogas production from the Sida silage was 780 L/kg VS, which corresponded to a nearly 50% increase compared to the 370 L/kg VS obtained from the same silage without depolymerization. The study showed that microwave heating makes it possible to depolymerize the substrate effectively. Significant differences occurred especially in the temperature range of 130-150ºC. The pre-treatment of Sida hermaphrodita silage (the biogas substrate) did not significantly affect the quality of the biogas produced. The methane concentration was about 51.5% on average. The study was carried out in the framework of the project under the BIOSTRATEG program funded by the National Centre for Research and Development, No. 1/270745/2/NCBR/2015, 'Dietary, power, and economic potential of Sida hermaphrodita cultivation on fallow land'.Keywords: disintegration, biogas, methane fermentation, Virginia fanpetals, biomass
Procedia PDF Downloads 309520 Machine learning Assisted Selective Emitter design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
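A schematic of the surrogate-plus-search workflow described above is sketched below, with a random-forest regressor standing in for the electromagnetic evaluation of the SiC/W/SiO2/W stack and a simple genetic algorithm searching the surrogate. The figure-of-merit function, thickness bounds, and hyperparameters are assumed for demonstration and are not taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Layer thicknesses (nm) of a SiC/W/SiO2/W stack are the design variables (bounds assumed).
BOUNDS = np.array([[20, 300], [5, 100], [20, 300], [50, 400]], dtype=float)

def toy_figure_of_merit(x):
    """Stand-in for a full-wave electromagnetic solver: returns a spectral
    selectivity score for thicknesses x (purely illustrative)."""
    target = np.array([120.0, 30.0, 150.0, 200.0])
    return np.exp(-np.sum(((x - target) / 80.0) ** 2, axis=-1))

# 1) Random-forest surrogate trained on sampled designs (the expensive step in practice).
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(500, 4))
y = toy_figure_of_merit(X)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 2) Genetic algorithm searching the surrogate for a high-merit stack.
pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(60, 4))
for _ in range(40):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-30:]]                                           # selection
    kids = (parents[rng.integers(0, 30, 60)] + parents[rng.integers(0, 30, 60)]) / 2   # crossover
    kids += rng.normal(0.0, 5.0, kids.shape)                                            # mutation
    pop = np.clip(kids, BOUNDS[:, 0], BOUNDS[:, 1])

best = pop[np.argmax(surrogate.predict(pop))]
print("candidate SiC/W/SiO2/W thicknesses (nm):", np.round(best, 1))
```

In practice the training set would come from electromagnetic simulations of the emitter spectrum rather than the analytic stand-in used here, but the division of labour between the learned surrogate and the evolutionary search is the same.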
Procedia PDF Downloads 61519 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency
Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader
Abstract:
Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked through 1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolytic products (xylose, xylobiose and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, the exploration of microbial pools from extremely diversified ecosystems is considerably important. Microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate fast hydrolytic rates, can produce high yields of product, and are less prone to microbial contamination. Another possibility for degrading xylan continuously is the use of immobilization techniques. The current work is an effort to merge both positive aspects: the thermozyme and the immobilization technique. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L-1) and calcium chloride (0.4 M) were used in combination. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time increased from 5.0 to 30.0 minutes compared to the free counterpart. The diffusion limit of high-molecular-weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min-1, whereas the Km value increased from 0.5074 to 0.5722 mg ml-1 relative to the free enzyme. The immobilized endo-β-1,4-xylanase showed stability at high temperatures compared to the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and can be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high-molecular-weight xylan. However, the enzyme immobilization properties can be improved further by immobilizing it on different supports for industrial purposes.Keywords: immobilization, reusability, thermozymes, xylanase
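To make the reported kinetic constants concrete, the short sketch below evaluates the Michaelis-Menten rate v = Vmax * S / (Km + S) for the free and immobilized enzyme; the substrate concentrations used are assumed for illustration only.

```python
# Michaelis-Menten rate using the kinetic constants reported in the abstract.
def mm_rate(s, vmax, km):
    """Reaction rate (U/min) at substrate concentration s (mg/ml)."""
    return vmax * s / (km + s)

free_enzyme = {"vmax": 4773.0, "km": 0.5074}   # U/min, mg/ml
immobilized = {"vmax": 203.7, "km": 0.5722}

for s in (0.25, 0.5, 1.0, 2.0):                # corncob xylan concentrations, mg/ml (assumed)
    v_free = mm_rate(s, **free_enzyme)
    v_imm = mm_rate(s, **immobilized)
    print(f"S = {s:4.2f} mg/ml: free = {v_free:7.1f} U/min, immobilized = {v_imm:6.1f} U/min")
```

The comparison illustrates the diffusion limitation described above: the slightly higher Km of the entrapped enzyme matters far less than the large drop in Vmax.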
Procedia PDF Downloads 374518 Seismotectonics and Seismology the North of Algeria
Authors: Djeddi Mabrouk
Abstract:
The slow convergence between the African and Eurasian plates appears to be the main cause of the active deformation across the whole of North Africa. In Algeria, this convergence produces a large zone of deformation within a fairly broad band, bounded to the south by the Saharan Atlas and to the north by the Tell Atlas. The Maghrebian and Atlas chains along North Africa are the consequence of this convergence. In the junction zone, a NW-SE compressive regime is observed, with a fold-and-fault structure and overthrusts. From a geological point of view, the northern part of Algeria is younger than the Saharan platform; it is unstable and constantly in motion, characterized by overturned folds, overthrusts, and reverse faults, and it perpetually undergoes complex vertical and horizontal movements. At the structural level, the north of Algeria is part of the peri-Mediterranean Alpine orogen, essentially of Tertiary age. It extends from east to west across Algeria over 1200 km, in a band roughly 100 km wide. The Alpine chain comprises three domains: the Tell Atlas in the north, the High Plateaus in the middle, and the Saharan Atlas in the south. In the extreme south lies the Saharan platform, which consists of Precambrian bedrock covered by practically undeformed Paleozoic strata. Northern Algeria and the Saharan platform are separated by a major accident running about 2000 km from Agadir (Morocco) to Gabes (Tunisia). Seismic activity is localized essentially in a coastal band in the north of Algeria comprising the Tell Atlas, the High Plateaus, and the Saharan Atlas. Earthquakes are limited to the first 20 km of the Earth's crust; they are caused by movements along NE-SW-oriented reverse faults or by sliding along tectonic structures. The central region is characterized by strong earthquake activity, located mainly in the Mitidja basin (Neogene age). Its southern periphery (the Blidean Atlas) constitutes one of the most important seismogenic sources for the city of Algiers and the area to the east (Boumerdes region). The north-east region is also part of the Tellian domain, but it is characterized by a deformation different from that in other parts of northern Algeria. The deformation there is slow, and seismic activity is low to moderate, related to slip along tectonic faults. The most pronounced event is that of 27 October 1985 (Constantine), with a seismic moment magnitude Mw = 5.9. The north-west region is quite active, with shallow seismic hypocenters that do not exceed 20 km in depth. The deeper seismicity is concentrated mainly in a narrow strip along the edges of the Quaternary and Neogene intramontane basins along the coast. The most violent earthquakes in this region are the Oran earthquake of 1790 and the Orléansville (El Asnam) earthquakes of 1954 and 1980.Keywords: alpine chain, seismicity north Algeria, earthquakes in Algeria, geophysics, Earth
Procedia PDF Downloads 407