Search results for: distribution patterns
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7540

1030 Gender Stereotypes in the Media Content as an Obstacle for Elimination of Discrimination against Women in the Republic of Serbia

Authors: Mirjana Dokmanovic

Abstract:

The main topic of this paper is the analysis of the presence of gender stereotypes in the media content of the Republic of Serbia with respect to the state's commitments to eliminate discrimination against women. The research methodology included a gender-perspective analysis of the content of six daily newspapers and two magazines published on 28 December 2015 and of reality TV show programs aired in 2015. The research methods also included desk research and a qualitative analysis of the available data, statistics, policy papers, studies, and reports produced by the government, the Ministry of Culture and Information, the Regulatory Body for Electronic Media, the Press Council, the associations of media professionals, the independent human rights bodies, and civil society organizations (CSOs). As a State Signatory to the Convention on the Elimination of All Forms of Discrimination against Women, the Republic of Serbia has adopted numerous measures in this field, including the Law on Equality between Sexes and the national gender equality strategies. Special attention has been paid to eliminating gender stereotypes and prejudices in media content and in the portrayal of women. This practice has been forbidden by the Law on Electronic Media, the Law on Public Information and Media, the Law on Public Service Broadcasting, and the Bylaw on the Protection of Human Rights in the Provision of Media Services. Despite these commitments, no progress has been achieved in eliminating gender stereotypes from media content. The research indicates that the media perpetuate traditional gender roles and patriarchal patterns. Female politicians, entrepreneurs, academics, scientists, and engineers are very rarely portrayed in the media; instead, the media focus on women as celebrities, singers, and actresses. 
Women are underrepresented in the pages devoted to politics and the economy, while they are most present in cover stories related to show business, health care, family, and household matters. Women are three times more likely than men to be identified on the basis of their family status, as mothers, wives, daughters, etc. Hate speech, misogyny, and violence against women are often present in reality TV shows. The abuse of women and their bodies in advertising is still widespread. Cases of domestic violence are still presented with sensationalism, although progress has been achieved in portraying victims of domestic violence with respect and dignity. Issues related to gender equality and to the position of vulnerable groups of women, such as Roma women or rural women, are not visible in the media. This research, as well as warnings from women's CSOs and independent human rights bodies, indicates the necessity of implementing legal and policy measures in this field consistently and with due diligence. The aim of the paper is to contribute to eliminating gender stereotypes in media content and to advancing gender equality.

Keywords: discrimination against women, gender roles, gender stereotypes, media, misogyny, portraying women in the media, prejudices against women, Republic of Serbia

Procedia PDF Downloads 196
1029 Screens Design and Application for Sustainable Buildings

Authors: Fida Isam Abdulhafiz

Abstract:

Traditional vernacular architecture in the United Arab Emirates consisted mainly of adobe houses with a limited number of openings in their facades. The thick mud and rubble walls and wooden window screens protected inhabitants from the harsh desert climate, provided them with privacy, and met their comfort needs to an extent. However, with the rise of the immediate post-petroleum era, reinforced concrete villas built with glass and steel technology replaced traditional vernacular dwellings, and more load was put on mechanical cooling systems to satisfy today's more demanding dwelling inhabitants. In the early 21st century, professionals started to pay more attention to the carbon footprint of built constructions, and many studies and innovative approaches are now dedicated to lowering the impact of existing operating buildings on their surrounding environments. UAE government agencies have introduced regulations that aim to revive sustainable and environmental design through local and international building codes and urban design policies such as Estidama and LEED. The focus of this paper is on reducing the emissions resulting from the energy used by cooling and heating systems, through innovative screen designs and facade solutions that provide a green footprint and aesthetic architectural icons. Screens are a popular innovative technique that can be incorporated in the design process or applied to existing buildings as a renovation technique to develop passive green buildings. Preparing future architects to understand the importance of environmental design was attempted through physical modelling of window screens as an educational means of combining theory with a hands-on teaching approach. Designing screens proved to be a popular exercise that helped students understand the importance of sustainable design and passive cooling. 
After creating models of prototype screens, several tests were conducted to calculate the amount of sun, light, and wind that passes through the screens, affecting the heat load and the light entering the building. Theory sessions further explored concepts of green buildings and materials that produce low carbon emissions. This paper highlights the importance of hands-on experience for student architects and how physical modelling helped raise eco-awareness in the design studio. The paper studies different types of facade screens and shading devices developed by architecture students and explains the students' production of diverse patterns for traditional screens, based on sustainable design concepts suited to the climate requirements of the Middle East region.

Keywords: building’s screens modeling, façade design, sustainable architecture, sustainable dwellings, sustainable education

Procedia PDF Downloads 288
1028 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding and k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by syntactic parsing to selectively extract relevant information from each file. For data encoding, sequence normalization was carried out as a first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. 
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 Score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 Score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
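The one-hot and k-mer encodings named above can be sketched in a few lines of plain Python. This is a generic illustration of the two baseline techniques, not the authors' pipeline; ambiguous-base handling, normalization, and the Fourier/wavelet encodings are omitted:

```python
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string: each base becomes a 4-element binary vector."""
    table = {b: [1 if i == j else 0 for j in range(4)] for i, b in enumerate(BASES)}
    return [table[b] for b in seq]

def kmer_counts(seq, k=2):
    """Count overlapping k-mers, returning a fixed-length 4**k vector."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        vec[index[seq[i:i + k]]] += 1
    return vec

seq = "ACGTAC"
print(one_hot(seq)[0])            # → [1, 0, 0, 0]  (encoding of 'A')
print(sum(kmer_counts(seq, 2)))   # → 5 overlapping 2-mers in a 6-base sequence
```

Note that one-hot output length grows with the sequence, while the k-mer vector has fixed length 4^k regardless of sequence length, which is one practical reason to normalize sequence lengths before one-hot encoding.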

Keywords: DNA encoding, machine learning, Fourier transform

Procedia PDF Downloads 13
1027 Predicting Child Attachment Style Based on Positive and Safe Parenting Components and Mediating Maternal Attachment Style in Children With ADHD

Authors: Alireza Monzavi Chaleshtari, Maryam Aliakbari

Abstract:

Objective: The aim of this study was to investigate the prediction of child attachment style based on a positive and safe combined parenting method, mediated by maternal attachment styles, in children with attention deficit hyperactivity disorder (ADHD). Method: The present study used a descriptive, correlational, structural-equation design and was applied in terms of purpose. The population includes all children with ADHD living in Chaharmahal and Bakhtiari province and their mothers. The sample comprises 165 children with ADHD in Chaharmahal and Bakhtiari province together with their mothers, selected by purposive sampling based on the inclusion criteria. The obtained data were analyzed with descriptive and inferential statistics. In the descriptive section, the mean, standard deviation, and frequency distribution tables and graphs were used. In the inferential section, according to the nature of the hypotheses and objectives of the research, the data were analyzed using Pearson correlation coefficients, the bootstrap test, and a structural equation model. Findings: Structural equation modeling showed that the research model fit the data and that a positive and safe combined parenting style, mediated by maternal attachment style, has an indirect effect on child attachment style. A positive and safe combined parenting style also has a direct relationship with child attachment style, as does maternal attachment style. Conclusion: The results of the present study show a significant relationship between positive and safe combined parenting methods and the attachment styles of children with ADHD, with maternal attachment style as a mediator. 
Therefore, it can be expected that parents using a positive and safe combined parenting method can effectively foster secure attachment in children with attention deficit hyperactivity disorder.
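The bootstrap test of the mediated (indirect) effect mentioned in the Method section can be illustrated with a product-of-coefficients bootstrap. The sketch below uses synthetic data and simple regressions in place of a full structural equation model, so it only approximates the analysis described (a full SEM would also control for the predictor when estimating the mediator-to-outcome path):

```python
import random

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def bootstrap_indirect(x, m, y, n_boot=1000, seed=0):
    """95% percentile CI for the indirect effect a*b, where
    a: parenting -> maternal attachment, b: maternal -> child attachment."""
    rng = random.Random(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        est.append(ols_slope(xs, ms) * ols_slope(ms, ys))
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Synthetic example with the same sample size as the study (n = 165):
# parenting -> maternal attachment -> child attachment
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(165)]          # parenting score
m = [0.5 * xi + rng.gauss(0, 0.5) for xi in x]     # maternal attachment
y = [0.6 * mi + rng.gauss(0, 0.5) for mi in m]     # child attachment
lo, hi = bootstrap_indirect(x, m, y)
print(lo, hi)  # a CI excluding zero indicates a significant indirect effect
```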

Keywords: child attachment style, positive and safe parenting, maternal attachment style, ADHD

Procedia PDF Downloads 58
1026 The Effect of Foot Progression Angle on Human Lower Extremity

Authors: Sungpil Ha, Ju Yong Kang, Sangbaek Park, Seung-Ju Lee, Soo-Won Chae

Abstract:

The growing number of obese patients in aging societies has led to an increase in the number of patients with knee medial osteoarthritis (OA). Artificial joint insertion is the most common treatment for knee medial OA. Surgery is effective for patients with serious arthritic symptoms, but it is costly and risky, and it is an inappropriate way to manage the disease at an early stage. Therefore, non-operative treatments such as toe-in gait have recently been proposed. Toe-in gait is a non-surgical intervention that restrains the progression of arthritis and relieves pain by reducing the knee adduction moment (KAM), shifting load laterally off the knee medial cartilage. Numerous studies have measured KAM at various foot progression angles (FPA), and KAM data can be obtained by motion analysis. However, variations in stress at the knee cartilage cannot be directly observed or evaluated by such experiments. Therefore, this study applied motion analysis to the major gait points (1st peak, mid-stance, 2nd peak) with regard to FPA, and the finite element (FE) method was employed to evaluate the effects of FPA on the human lower extremity. Three types of gait analysis (toe-in, toe-out, and baseline gait) were performed with markers placed on the lower extremity. Ground reaction forces (GRF) were obtained with force plates. The forces associated with the major muscles were computed using GRF and marker trajectory data. MRI data provided by the Visible Human Project were used to develop a human lower extremity FE model. FE analyses of the three gait simulations were performed based on the calculated muscle forces and GRF. Comparing the FE results at the 1st peak across gait types, we observed that the maximum stress during toe-in gait was lower than in the other types, matching the trend in KAM measured through motion analysis in other papers. 
This indicates that the progression of knee medial OA could be suppressed by adopting toe-in gait. This study integrated motion analysis with FE analysis. One advantage of this method is that re-modeling is not required even when the posture changes; therefore, other gait simulations or various motions of the lower extremity can be easily analyzed with this method.
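As a rough illustration of the quantity being reduced, the frontal-plane KAM can be approximated as the cross product of the lever arm (knee joint center to center of pressure) with the ground reaction force. This is a textbook simplification, not the paper's full inverse-dynamics or FE pipeline, and the coordinates, numbers, and sign convention below are hypothetical:

```python
def frontal_plane_kam(knee, cop, grf):
    """2-D cross product r x F in the frontal plane.
    knee, cop: (mediolateral, vertical) positions in metres;
    grf: (mediolateral, vertical) force in newtons.
    Returns a moment in N*m about the anterior-posterior axis."""
    r_ml = cop[0] - knee[0]
    r_v = cop[1] - knee[1]
    return r_ml * grf[1] - r_v * grf[0]

# A laterally shifted centre of pressure (the mechanism attributed to toe-in
# gait) shortens the mediolateral lever arm and reduces the moment magnitude:
baseline = frontal_plane_kam((0.0, 0.5), (-0.05, 0.0), (0.0, 700.0))
toe_in = frontal_plane_kam((0.0, 0.5), (-0.02, 0.0), (0.0, 700.0))
print(abs(toe_in) < abs(baseline))  # → True
```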

Keywords: finite element analysis, gait analysis, human model, motion capture

Procedia PDF Downloads 327
1025 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, defines well-formed RDF graphs, named "shape graphs", that validate other resource description framework (RDF) graphs, called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, SHACL supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL has two parts: SHACL Core, which includes shapes covering the most frequent constraint components, and SHACL-SPARQL, an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are a semantic reasoner and SPARQL queries: the former checks formalization errors and data type inconsistency, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. This methodology is, however, time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. 
Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls into multiple categories: (a) data type validation, which constrains whether the source data is mapped to the correct data type, for instance, checking whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and (b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
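The two validation categories described above can be mimicked outside an RDF stack. The stdlib sketch below mirrors the logic only; in practice these constraints would be written as SHACL shapes and run by a SHACL processor over the data graph, and the dictionary record layout here is hypothetical:

```python
from datetime import datetime

def is_xsd_datetime(value):
    """Check that a literal parses as an ISO 8601 dateTime (naive xsd:dateTime check)."""
    try:
        datetime.fromisoformat(value)
        return True
    except (TypeError, ValueError):
        return False

def validate_person(person):
    """Mimics the two categories from the text:
    (a) data type: birthdate must be a valid dateTime literal;
    (b) data integrity: birthdate must precede every linked event date."""
    errors = []
    bd = person.get("birthdate")
    if not is_xsd_datetime(bd):
        errors.append("birthdate: not a valid xsd:dateTime literal")
    else:
        born = datetime.fromisoformat(bd)
        for ev in person.get("event_dates", []):
            if is_xsd_datetime(ev) and datetime.fromisoformat(ev) <= born:
                errors.append(f"event {ev} precedes birthdate")
    return errors

print(validate_person({"birthdate": "1890-05-01T00:00:00",
                       "event_dates": ["1920-01-01T00:00:00"]}))  # → []
```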

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 246
1024 Prioritizing Biodiversity Conservation Areas based on the Vulnerability and the Irreplaceability Framework in Mexico

Authors: Alma Mendoza-Ponce, Rogelio Corona-Núñez, Florian Kraxner

Abstract:

Mexico is a megadiverse country that has nearly halved its natural vegetation in the last century due to agricultural and livestock expansion. The impacts of land use cover change and climate change are unevenly distributed, and spatial prioritization to minimize the effects on biodiversity is crucial. Global and national efforts at prioritizing biodiversity conservation suggest that ~33% to 45% of Mexico should be protected; the breadth of these targets makes it difficult to direct resources. We use a framework based on vulnerability and irreplaceability to prioritize conservation efforts in Mexico. Vulnerability considered exposure, sensitivity, and adaptive capacity under two scenarios (business as usual, BAU, based on SSP2 and RCP 4.5, and a Green scenario, based on SSP1 and RCP 2.6). Exposure to land use is the magnitude of change from natural vegetation to anthropogenic covers, while exposure to climate change is the difference between current and future values under both scenarios. Sensitivity was measured as the number of endemic species of terrestrial vertebrates that are critically endangered or endangered. Adaptive capacity is the ratio between the percentage of converted area (natural to anthropogenic) and the percentage of protected area at the municipality level. The results suggest that by 2050, between 11.6% and 13.9% of Mexico will show vulnerability ≥ 50%, and by 2070, between 12.0% and 14.8%, in the Green and BAU scenarios, respectively. From an ecosystem perspective, cloud forests, followed by tropical dry forests, natural grasslands, and temperate forests, will be the most vulnerable (≥ 50%). Amphibians are the most threatened vertebrates: 62% of endemic amphibians are critically endangered or endangered, compared with 39%, 12%, and 9% of mammals, birds, and reptiles, respectively. 
However, the distribution of these amphibians accounts for only 3.3% of the country, while the mammals, birds, and reptiles in these categories represent 10%, 16%, and 29% of Mexico, respectively. Five of the 2,457 municipalities in Mexico contain 31% of the most vulnerable areas (vulnerability ≥ 70%), yet these municipalities account for only 0.05% of Mexico's area. This multiscale approach can be used to direct resources toward conservation targets such as ecosystems, municipalities, or species, considering land use cover change, climate change, and biodiversity uniqueness.
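The abstract defines the three vulnerability components but not how they are aggregated. A minimal sketch, assuming exposure and sensitivity are already normalized to [0, 1] and combined by a simple equal-weight average (the study's actual weighting and normalization may differ):

```python
def adaptive_capacity_pressure(pct_converted, pct_protected, cap=1.0):
    """Converted-to-protected area ratio, as defined in the text; capped so
    that municipalities with little or no protection saturate at `cap`."""
    if pct_protected <= 0:
        return cap
    return min(pct_converted / pct_protected, cap)

def vulnerability(exposure, sensitivity, pct_converted, pct_protected):
    """Hypothetical equal-weight aggregation of the three components,
    each scaled to [0, 1]; higher values mean higher vulnerability."""
    ac = adaptive_capacity_pressure(pct_converted, pct_protected)
    return (exposure + sensitivity + ac) / 3.0

# A heavily converted, poorly protected municipality scores near 1:
print(vulnerability(1.0, 1.0, 100.0, 1.0))  # → 1.0
```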

Keywords: biodiversity, climate change, land use change, Mexico, vulnerability

Procedia PDF Downloads 161
1023 The Effectiveness of Prefabricated Vertical Drains for Accelerating Consolidation of Tunis Soft Soil

Authors: Marwa Ben Khalifa, Zeineb Ben Salem, Wissem Frikha

Abstract:

The purpose of the present work is to study the consolidation behavior of highly compressible Tunis soft soil (TSS) by means of prefabricated vertical drains (PVDs) associated with preloading, based on laboratory and field investigations. First, the field performance of PVDs in the Tunis soft soil layer was analysed based on the case study of the construction of the embankments of the Radès-La Goulette bridge project. Geosynthetic PVDs were installed in a triangular grid pattern to a depth of 10 m, combined with step-by-step surcharge loading. Soil settlement during the preloading stage of the Radès-La Goulette bridge project was monitored by instrumentation comprising various types of settlement gauges installed in the soil, and the distribution of pore water pressure was monitored through piezocone penetration. Second, reduced-scale laboratory tests were performed on TSS, likewise subjected to preloading and improved with Mebradrain 88 (Mb88) PVDs. A specific test apparatus was designed and manufactured to study the consolidation. Two series of consolidation tests were performed on TSS specimens: the first series included consolidation tests for soil improved by one central drain; in the second series, a triangular mesh of three geodrains was used. The evolution of the degree of consolidation and the measured settlements versus time derived from the laboratory tests and field data are presented and discussed. The obtained results show that PVDs considerably accelerated the consolidation of Tunis soft soil by shortening the drainage path. The model with a mesh of three drains gives results closer to the field data, while a longer consolidation time was observed for the cell improved by a single central drain. A comparison with theoretical analyses, chiefly those of Barron (1948) and Carrillo (1942), is presented; these theories were found to overestimate the degree of consolidation in the presence of PVDs.
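The Barron (1948) and Carrillo (1942) solutions used in the comparison can be written out directly. A sketch of the standard equal-strain forms (the study's specific drain spacing, soil parameters, and smear/well-resistance corrections are not reproduced here):

```python
import math

def barron_ur(Tr, n):
    """Barron (1948) equal-strain degree of radial consolidation.
    Tr = c_h * t / d_e**2 (radial time factor); n = d_e / d_w (spacing ratio)."""
    Fn = (n**2 / (n**2 - 1)) * math.log(n) - (3 * n**2 - 1) / (4 * n**2)
    return 1.0 - math.exp(-8.0 * Tr / Fn)

def terzaghi_uv(Tv):
    """Approximate Terzaghi degree of vertical consolidation:
    sqrt(4*Tv/pi) for Uv <= ~60%, log-fit approximation beyond."""
    Uv = math.sqrt(4.0 * Tv / math.pi)
    if Uv > 0.6:
        Uv = 1.0 - 10 ** ((1.781 - Tv) / 0.933 - 2.0)
    return min(Uv, 1.0)

def carrillo_combined(Uv, Ur):
    """Carrillo (1942): 1 - U = (1 - Uv)(1 - Ur)."""
    return 1.0 - (1.0 - Uv) * (1.0 - Ur)

# Example with illustrative (not the study's) values: Tr = 0.2, n = 20, Tv = 0.05
Ur = barron_ur(0.2, 20.0)
U = carrillo_combined(terzaghi_uv(0.05), Ur)
print(round(Ur, 3), round(U, 3))
```

Comparing such theoretical U-t curves against the measured settlement curves is what reveals the overestimation reported in the abstract.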

Keywords: Tunis soft soil, prefabricated vertical drains, acceleration of consolidation, dissipation of excess pore water pressures, Radès bridge project, Barron and Carrillo's theories

Procedia PDF Downloads 118
1022 Development and Characterization of Expandable TPEs Compounds for Footwear Applications

Authors: Ana Elisa Ribeiro Costa, Sónia Daniela Ferreira Miranda, João Pedro De Carvalho Pereira, João Carlos Simões Bernardo

Abstract:

Thermoplastic elastomers (TPEs) have been widely used in the footwear industry over the years. Recently, this industry has been requesting materials that combine light weight with high abrasion resistance. Although blowing agents are available on the market to reduce weight, when they are incorporated into molten polymers during extrusion or injection molding, specific processing conditions (e.g., temperature and hydrodynamic stresses) are needed to obtain good properties and an acceptable surface appearance in the final products. It is therefore a great advantage for the compounding industry to supply compounds that already include the blowing agents, so that they can be handled and processed under the same conditions as a conventional raw material. In this work, expandable TPE compounds, namely a TPU and an SEBS incorporating blowing agents, were developed on a modular co-rotating twin-screw extruder. Different blowing agents, such as thermo-expandable microspheres and an azodicarbonamide, were selected, and different screw configurations and temperature profiles were evaluated, since these parameters have a particular influence on inhibiting premature expansion of the blowing agents. Furthermore, the percentages of incorporation were varied in order to investigate their influence on the final product properties. After extrusion of these compounds, expansion was tested in the injection process. The mechanical and physical properties were characterized by different analytical methods, including tensile, flexural, and abrasion tests, hardness determination, and density measurement. Scanning electron microscopy (SEM) was also performed. It was observed that the blowing agents can be incorporated into the TPEs without expanding during the extrusion process; expansion of the agents occurred only upon reprocessing (injection molding). 
These results are corroborated by SEM micrographs, which show a good distribution of the blowing agents in the polymeric matrices. The other experimental results showed good mechanical performance and a density decrease (30% for SEBS and 35% for TPU). This study suggests that it is possible to develop optimized compounds for footwear applications (e.g., shoe soles) that expand only during the injection process.

Keywords: blowing agents, expandable thermoplastic elastomeric compounds, low density, footwear applications

Procedia PDF Downloads 197
1021 Solids and Nutrient Loads Exported by Preserved and Impacted Low-Order Streams: A Comparison among Water Bodies in Different Latitudes in Brazil

Authors: Nicolas R. Finkler, Wesley A. Saltarelli, Taison A. Bortolin, Vania E. Schneider, Davi G. F. Cunha

Abstract:

Estimating the relative contribution of nonpoint and point sources of pollution in low-order streams is an important tool for water resources management. The location of headwaters in areas with anthropogenic impacts from urbanization and agriculture is a common scenario in developing countries. This condition can lead to conflicts among different water users and compromise ecosystem services. Water pollution also contributes to exporting organic loads to downstream areas, including higher-order rivers. The purpose of this research is to preliminarily assess the nutrient and solids loads exported by water bodies located in watersheds with different types of land use in São Carlos - SP (latitude -22.0087, longitude -47.8909) and Caxias do Sul - RS (latitude -29.1634, longitude -51.1796), Brazil, using regression analysis. The variables analyzed in this study were total Kjeldahl nitrogen (TKN), nitrate (NO3-), total phosphorus (TP), and total suspended solids (TSS). Data were obtained in October and December 2015 for São Carlos (SC) and in November 2012 and March 2013 for Caxias do Sul (CXS); these periods had similar weather patterns regarding precipitation and temperature. Altogether, 11 sites were divided into two groups, some classified as more pristine (SC1, SC4, SC5, SC6, and CXS2), with a predominance of native forest, and others considered impacted (SC2, SC3, CXS1, CXS3, CXS4, and CXS5), with larger urban and/or agricultural areas. A preliminary linear regression was applied to the flow and drainage area of each site (R² = 0.9741), suggesting that the loads to be assessed had a significant relationship with the drainage areas. Thereafter, regression analysis was conducted between the drainage areas and the total loads for the two land use groups. The R² values were 0.070, 0.830, 0.752, and 0.455 for the TSS, TKN, NO3-, and TP loads, respectively, in the more preserved areas, suggesting that the loads generated by runoff are significant in these locations. 
However, the respective R² values for sites located in impacted areas were 0.488, 0.054, 0.519, and 0.059 for the TSS, TKN, NO3-, and TP loads, indicating a weaker relationship between total loads and runoff than in the previous scenario. This study suggests three possible conclusions, which will be further explored in the full-text article with more sampling sites and periods: a) in preserved areas, nonpoint sources of pollution are more significant in determining water quality with respect to the studied variables; b) the nutrient (TKN and TP) loads in impacted areas may be associated with point sources, such as domestic wastewater discharges with inadequate treatment levels; and c) the presence of NO3- in impacted areas can be associated with runoff, particularly in agricultural areas, where fertilizers are commonly applied at certain times of the year.
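The drainage-area regressions reported above reduce to simple linear regression, and the R² statistic can be computed in plain Python. The data below are made up for illustration, not the study's dataset:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                     # slope
    a = my - b * mx                   # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

area = [2.0, 5.0, 9.0, 14.0]    # hypothetical drainage areas (km²)
load = [4.1, 10.2, 17.9, 28.3]  # hypothetical TKN loads (kg/day)
print(round(r_squared(area, load), 3))
```

A high R² (as at the preserved sites) means drainage area, a proxy for runoff, explains most of the variation in load; a low R² (as for TKN and TP at the impacted sites) is what points to point-source inputs.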

Keywords: land use, linear regression, point and non-point pollution sources, streams, water resources management

Procedia PDF Downloads 302
1020 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life

Authors: Sandra Young

Abstract:

The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially, the problem is a conceptual one: biological taxonomies are formed on the basis of specific physical specimens, yet nomenclatural rules are used to provide the labels that describe these physical objects, and these labels are ambiguous representations of the specimens. An example is the name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders: the physical specimens of each are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of taxonomic concepts versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to examine the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate for exploring this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). 
It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one way to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense, it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names, and more general names as classes, which will be the focus of this paper. The next step in the research will focus on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods from an alternative perspective. This research aims to provide evidence as to the validity of current methods of knowledge representation for biological entities, and also to shed light on the way scientific nomenclature is used within the literature.
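Collocated word frequencies of the kind described, counts of words occurring near a node word such as a species name, can be sketched with a simple window count. A real corpus pipeline would add tokenisation, lemmatisation, and association measures such as log-likelihood or mutual information; the sentence below is invented for illustration:

```python
from collections import Counter

def collocates(tokens, node, window=2):
    """Count words co-occurring within +/- `window` positions of each
    occurrence of `node` in a token list."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

tokens = "the fern genus Melpomene differs from the spider genus Melpomene".split()
print(collocates(tokens, "Melpomene", window=1))
# 'genus' collocates with both uses of the name; the wider context
# ('fern' vs 'spider') is what separates the two concepts.
```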

Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics

Procedia PDF Downloads 130
1019 Autonomic Nervous System and CTRA Gene Expression among Healthy Young Adults in Japan

Authors: Yoshino Murakami, Takeshi Hashimoto, Steve Cole

Abstract:

The autonomic nervous system (ANS), particularly its sympathetic (SNS) and parasympathetic (PNS) branches, plays a vital role in modulating immune function and physiological homeostasis. In recent years, the Conserved Transcriptional Response to Adversity (CTRA) has emerged as a key marker of the body's response to chronic stress. This gene expression profile is characterized by SNS-mediated upregulation of pro-inflammatory genes (such as IL1B and TNF) and downregulation of antiviral response genes (e.g., the IFI and MX families). The CTRA has been observed in individuals exposed to prolonged stressors such as loneliness, social isolation, and bereavement. Some research suggests that PNS activity, as indicated by heart rate variability (HRV), may help counteract the CTRA. However, previous PNS-CTRA studies have focused on Western populations, raising questions about the generalizability of these findings across different cultural and ethnic backgrounds. This study aimed to examine the relationship between HRV and CTRA gene expression in young, healthy adults in Japan. We hypothesized that HRV would be inversely related to CTRA gene expression, similar to patterns observed in previous Western studies. A total of 49 participants aged 20 to 39 were recruited, and after data exclusions, 26 participants' HRV and CTRA data were analyzed. HRV was measured using an electrocardiogram (ECG), and two time-domain indices were utilized: the root mean square of successive differences (RMSSD) and the standard deviation of NN intervals (SDNN). Blood samples were collected for gene expression analysis, focusing on a standard set of 47 CTRA indicator gene transcripts. The findings revealed a significant inverse relationship between HRV and CTRA gene expression, with higher HRV correlating with reduced pro-inflammatory gene activity and an increased antiviral response.
These results are consistent with findings from Western populations and demonstrate that the relationship between ANS function and immune response generalizes to an East Asian population. The study highlights the importance of HRV as a biomarker for psychophysiological health, reflecting the body's ability to buffer stress and maintain immune balance. These findings have implications for understanding how physiological systems interact across different cultures and ethnicities. Given the influence of chronic stress in promoting inflammation and disease risk, interventions aimed at improving HRV, such as mindfulness-based practices or physical exercise, could provide significant health benefits. Future research should focus on larger sample sizes and experimental interventions to better understand the causal pathways linking HRV to CTRA gene expression, and determine whether improving HRV may help mitigate the harmful effects of stress on health by reducing inflammation.
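The two time-domain HRV indices named above have standard definitions that are straightforward to compute from a list of NN intervals. The sketch below (with made-up interval values) follows those definitions; note it uses the population standard deviation for SDNN, though some tools use the sample form.

```python
import statistics

def rmssd(nn_ms):
    """Root mean square of successive NN-interval differences (ms)."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def sdnn(nn_ms):
    """Standard deviation of NN intervals (ms), population form."""
    return statistics.pstdev(nn_ms)

nn = [800, 810, 790, 805, 795]  # illustrative NN intervals in ms
rmssd_val, sdnn_val = rmssd(nn), sdnn(nn)
```

Higher values of either index indicate greater beat-to-beat variability, which the study links to a more favourable (lower) CTRA profile.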

Keywords: autonomic nervous activity, neuroendocrine system, inflammation, Japan

Procedia PDF Downloads 7
1018 Central Energy Management for Optimizing Utility Grid Power Exchange with a Network of Smart Homes

Authors: Sima Aznavi, Poria Fajri, Hanif Livani

Abstract:

Smart homes are small energy systems which may be equipped with renewable energy sources, storage devices, and loads. The energy management strategy plays a central role in the efficient operation of smart homes. Effective energy scheduling of the renewable energy sources and storage devices guarantees efficient energy management in households while reducing energy imports from the grid. Nevertheless, despite such strategies, independently developed day-ahead energy schedules for multiple households can cause undesired effects such as high power exchange with the grid at certain times of the day. Therefore, the interaction between the day-ahead energy projections of multiple smart homes is a challenging issue in a smart grid system and, if not managed appropriately, the imported energy from the power network can impose an additional burden on the distribution grid. In this paper, a central energy management strategy for a network consisting of multiple households, each equipped with renewable energy sources, storage devices, and Plug-in Electric Vehicles (PEV), is proposed. The decision-making strategy, alongside the smart home energy management system, minimizes the energy purchase cost of the end users while at the same time reducing the stress on the utility grid. In this approach, the smart home energy management system determines different operating scenarios based on the forecasted household daily load and the components connected to the household, with the objective of minimizing the end user's overall cost. Then, selected projections for each household that are within the same cost range are sent to the central decision-making system. The central controller then organizes the schedules to reduce the overall peak-to-average ratio of the total energy imported from the grid.
To validate this approach, simulations are carried out for a network of five smart homes with different load requirements, and the results confirm that by applying the proposed central energy management strategy, the overall power demand from the grid can be significantly flattened. This is an effective approach to alleviate the stress on the network by distributing its energy to a network of multiple households over a 24-hour period.
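The central controller's selection step can be sketched as follows (all numbers are illustrative, and the brute-force search is a stand-in for the authors' actual decision-making method): given same-cost candidate schedules per home, pick the combination whose summed grid import has the lowest peak-to-average ratio (PAR).

```python
from itertools import product

def par(profile):
    """Peak-to-average ratio of an aggregate load profile."""
    return max(profile) / (sum(profile) / len(profile))

def best_combination(options):
    """Exhaustively pick one candidate schedule per home so the
    summed grid import has the lowest peak-to-average ratio.
    (Brute force; workable only for a handful of homes.)"""
    best, best_par = None, float("inf")
    for combo in product(*options):
        total = [sum(vals) for vals in zip(*combo)]
        p = par(total)
        if p < best_par:
            best, best_par = combo, p
    return best, best_par

# Two homes, two same-cost candidate schedules each (4 time slots, kW).
home1 = [[3, 1, 1, 1], [1, 1, 3, 1]]
home2 = [[3, 1, 1, 1], [1, 1, 1, 3]]
combo, p = best_combination([home1, home2])
```

Here staggering the two peaks lowers the aggregate PAR from 2.0 to 4/3 without changing either household's cost, which is exactly the flattening effect the paper reports.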

Keywords: energy management, renewable energy sources, smart grid, smart home

Procedia PDF Downloads 240
1017 Computational Approach to Cyclin-Dependent Kinase 2 Inhibitors Design and Analysis: Merging Quantitative Structure-Activity Relationship, Absorption, Distribution, Metabolism, Excretion, and Toxicity, Molecular Docking, and Molecular Dynamics Simulations

Authors: Mohamed Moussaoui, Mouna Baassi, Soukayna Baammi, Hatim Soufi, Mohammed Salah, Rachid Daoud, Achraf EL Allali, Mohammed Elalaoui Belghiti, Said Belaaouad

Abstract:

The present study investigates the quantitative structure-activity relationship (QSAR) of a series of thiazole derivatives reported as anticancer agents (hepatocellular carcinoma), principally using electronic descriptors calculated by the density functional theory (DFT) method and applying the multiple linear regression method. The developed model showed good statistical parameters (R² = 0.725, adjusted R² = 0.653, MSE = 0.060, R²(test) = 0.827, Q²(cv) = 0.536). The energy of the highest occupied molecular orbital (EHOMO), electronic energy (TE), shape coefficient (I), number of rotatable bonds (NROT), and index of refraction (n) were revealed to be the main descriptors influencing the anticancer activity. Additional thiazole derivatives were then designed, and their activities and pharmacokinetic properties were predicted using the validated QSAR model. These designed molecules were evaluated through molecular docking and molecular dynamics (MD) simulations, with binding affinity calculated using the MMPBSA script over a 100 ns simulation trajectory. This process aimed to study both their affinity and stability towards Cyclin-Dependent Kinase 2 (CDK2), a target protein for cancer treatment. The research concluded by identifying four CDK2 inhibitors - A1, A3, A5, and A6 - displaying satisfactory pharmacokinetic properties. The MD results indicated that the designed compound A5 remained stable in the active center of the CDK2 protein, suggesting its potential as an effective inhibitor for the treatment of hepatocellular carcinoma. The findings of this study could contribute significantly to the development of effective CDK2 inhibitors.
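The regression-plus-R² workflow behind a QSAR model can be shown in miniature with a single descriptor (the study itself uses multiple descriptors; the values below are hypothetical, not data from the paper): fit a least-squares line between a descriptor and activity, then score the fit with the coefficient of determination.

```python
def fit_line(x, y):
    """Least-squares slope/intercept and R-squared for one descriptor -
    a one-variable miniature of the MLR + R-squared workflow."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    pred = [intercept + slope * a for a in x]
    ss_res = sum((b - p) ** 2 for b, p in zip(y, pred))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical descriptor/activity pairs (e.g. EHOMO in eV vs. activity).
x = [-9.1, -8.8, -8.5, -8.2, -7.9]
y = [4.0, 4.5, 5.1, 5.4, 6.0]
slope, intercept, r2 = fit_line(x, y)
```

The same machinery extended to several descriptors at once (with adjusted R², test-set R², and cross-validated Q² as reported above) gives the full MLR model used in the study.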

Keywords: QSAR, ADMET, Thiazole, anticancer, molecular docking, molecular dynamic simulations, MMPBSA calculation

Procedia PDF Downloads 97
1016 Spatiotemporal Evaluation of Climate Bulk Materials Production in Atmospheric Aerosol Loading

Authors: Mehri Sadat Alavinasab Ashgezari, Gholam Reza Nabi Bidhendi, Fatemeh Sadat Alavinasab Ashkezari

Abstract:

Atmospheric aerosol loading (AAL) from anthropogenic sources is a signature of industrial development. The accelerated trends in material consumption at the global scale in recent years reveal consumption paradigms relevant to the planetary boundaries (PB). This paper takes a statistical approach to tracing the contribution of climate-relevant bulk materials production (CBMP) of steel, cement and plastics to AAL via an updated and validated spatiotemporal distribution. The statistical analysis used the most up-to-date regional and global databases and instrumental technologies. This corresponded to a selection of processes and areas suitable for tracking AAL within the last decade, analysing the most thoroughly validated data while exploring the underlying behaviour functions or models. The results also revealed a correlation, within the socio-economic metabolism framework, between the materials regarded as macronutrients of society and AAL as a PB with an unknown threshold. The selected country contributors of China, India and the US, together with the sample country of Iran, show comparable cumulative AAL values versus the bulk materials domestic extraction and production rate in the study period of 2012 to 2022. Generally, there is a tendency towards a gradual decline in worldwide and regional aerosol concentrations after 2015. According to our evaluation, a considerable share of the human contribution, equivalent to 20% from CBMP, accounts for the main anthropogenic aerosol species, including sulfate, black carbon and organic particulate matter. In an innovative approach, this study also explores the potential role of AAL control mechanisms in the economic sectors, where ordered and smoothed loading trends are identified amid the disordered phenomena of CBMP and aerosol precursor emissions. The envisioned equilibrium states lend support to the well-established theory of spin glasses, applicable to physical systems like the Earth and here to AAL.

Keywords: atmospheric aerosol loading, material flows, climate bulk materials, industrial ecology

Procedia PDF Downloads 72
1015 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India

Authors: Zafar Tabrez, Nizamuddin Khan

Abstract:

Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate, and edaphic factors. But after the introduction of innovative factors and technological inputs, the green revolution occurred and the agricultural scene witnessed great change. In the development of India’s agriculture, the speedy and extensive introduction of technological change is one of the crucial factors. The technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved varieties of seeds, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production, and greater economic return to the farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds, assured irrigation, and modern machines and implements. Initially, the revolution started in Punjab, Haryana, and western Uttar Pradesh. With the efforts of the government, agricultural planners, and policy makers, the modern technocratic agricultural development scheme was later also implemented in backward and marginal regions of the country. The agriculture sector occupies the centre stage of India’s social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also has a sufficient buffer stock. India’s first Prime Minister, Jawaharlal Nehru, said ‘everything else can wait but not agriculture’. There is still a continuous change in technological inputs and cropping patterns. Keeping these points in view, the authors attempt to investigate extensively the mechanization of agriculture and its change, selecting the western Trans-Ghaghara plain as a case study with the block as the unit of study.
It includes the districts of Gonda, Balrampur, Bahraich, and Shravasti, which incorporate 44 blocks. The study is based on secondary sources of data by blocks for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., agricultural machinery such as ploughs (wooden and iron), advanced harrows and cultivators, advanced thresher machines, sprayers, advanced sowing instruments, and tractors. It may be further noted that due to the continuous decline in the size of land holdings and the outflow of people to the same nature of work or to employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.

Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern

Procedia PDF Downloads 307
1014 Screening of Phytochemicals Compounds from Chasmanthera dependens and Carissa edulis as Potential Inhibitors of Carbonic Anhydrases CA II (3HS4) Receptor using a Target-Based Drug Design

Authors: Owonikoko Abayomi Dele

Abstract:

Epilepsy is an unresolved disease that needs urgent attention. It is a brain disorder that affects over sixty-five (65) million people around the globe. Despite the availability of commercial anti-epileptic drugs, the war against this unmet condition is yet to be won. Most epilepsy patients are resistant to available anti-epileptic medications, thus affordable novel therapies against epilepsy are a necessity. Numerous phytochemicals have been reported for their potency, efficacy, and safety as therapeutic agents against many diseases. This study investigated 99 isolated phytochemicals from Chasmanthera dependens and Carissa edulis against the carbonic anhydrase II drug target. The absorption, distribution, metabolism, excretion, and toxicity (ADMET) profiles of the isolated compounds were examined using the admetSAR 2 web server, while SwissADME was used to analyze the oral bioavailability, drug-likeness, and lead-likeness properties of the selected leads. The PASS web server was used to predict the biological activities of the selected leads, while other important physicochemical properties and interactions of the selected leads with the active site of the target were also examined after a successful molecular docking simulation with the PyRx virtual screening tool. The results of this study identified seven lead compounds: C49 - alpha-carissanol (-7.6 kcal/mol), C13 - catechin (-7.4 kcal/mol), C45 - salicin (-7.4 kcal/mol), C6 - bisnorargemonine (-7.3 kcal/mol), C36 - pallidine (-7.1 kcal/mol), S4 - lacosamide (-7.1 kcal/mol), and S7 - acetazolamide (-6.4 kcal/mol) for the CA II (3HS4) receptor. These lead compounds are probable inhibitors of this drug target owing to their good binding affinities and favourable interactions with the active site of the drug target, excellent ADMET profiles, PASS properties, drug-likeness, lead-likeness, and oral bioavailability properties. The identified leads have better binding energies than the two standards.
Thus, the seven identified lead compounds can be developed further into new anti-epileptic medications.

Keywords: drug-likeness, phytochemicals, carbonic anhydrases, metalloenzymes, active site, ADMET

Procedia PDF Downloads 42
1013 Challenging the Standard 24 Equal Quarter Tones Theory in Arab Music: A Case Study of Tetrachords Bayyātī and ḤIjāz

Authors: Nabil Shair

Abstract:

Arab music maqām (the Arab modal framework) is founded, among other main characteristics, on microtonal intervals. Notwithstanding the importance and multifaceted nature of intonation in Arab music, there is a paucity of studies examining this subject with scientific and quantitative approaches. The present-day theory of the Arab tone system is largely based on the pioneering treatise of Mīkhā’īl Mashāqah (1840), which proposes the theoretical division of the octave into 24 equal quarter tones. This kind of equal-tempered division is incompatible with the performance practice of Arab music, as many professional Arab musicians conceptualize additional pitches beyond the standard 24 notes per octave. In this paper, we challenge the standard theory of a scale of well-tempered quarter tones by carrying out a quantitative analysis of the performed intonation of two prominent tetrachords in Arab music, namely bayyātī and ḥijāz. This analysis was conducted with the help of advanced computer programs, such as Sonic Visualiser and Tony, with which we were able to obtain precise frequency data (Hz) for each tone every 0.01 second. The value (in cents) of all three intervals of each tetrachord was then measured and compared to the theoretical intervals. As a result, a specific distribution of deviations from the equal-tempered division of the octave was detected, in particular a diminished first interval of bayyātī and a diminished second interval of ḥijāz. These types of intonation entail a considerable amount of flexibility, mainly influenced by the surrounding tones, the direction and function of the measured tone, ornaments, text, the personal style of the performer, and interaction with the audience. This paper seeks to contribute to the existing literature dealing with intonation in Arab music, as it is a vital part of the performance practice of this musical tradition.
In addition, the insights offered by this paper and its novel methodology might also contribute to the broadening of the existing pedagogic methods used to teach Arab music.
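The conversion from measured frequencies (Hz) to interval sizes in cents, which underlies the comparison with the theoretical 24-quarter-tone scale, uses the standard logarithmic formula. A minimal sketch (reference pitch chosen arbitrarily for illustration):

```python
import math

def cents(f, f_ref):
    """Interval size in cents between frequency f and reference f_ref."""
    return 1200.0 * math.log2(f / f_ref)

# An exact equal-tempered quarter tone above A4 (440 Hz) is 50 cents;
# a measured bayyati second might come out noticeably flatter.
quarter_tone = 440.0 * 2 ** (50 / 1200)
octave_cents = cents(880.0, 440.0)  # a full octave is 1200 cents
```

Applying this to the extracted pitch track gives the per-interval deviations from the equal-tempered values that the study reports.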

Keywords: Arab music, intonation, performance practice, music theory, oral music, octave division, tetrachords, music of the middle east, music history, musical intervals

Procedia PDF Downloads 47
1012 Numerical Study Pile Installation Disturbance Zone Effects on Excess Pore Pressure Dissipation

Authors: Kang Liu, Meng Liu, Meng-Long Wu, Da-Chang Yue, Hong-Yi Pan

Abstract:

Soil setup is an important factor affecting pile bearing capacity; many factors influence it, all of which are closely related to pile construction disturbances. During pile installation in soil, a significant amount of excess pore pressure is generated, creating disturbance zones around the pile. The dissipation rate of excess pore pressure is an important factor influencing pile setup. This paper aims to examine how alterations in parameters within the disturbance zones affect the dissipation of excess pore pressure. An axisymmetric FE model is used to simulate pile installation in clay, followed by consolidation, using Plaxis 3D. The influence of the disturbed zone on setup is verified by comparing parametric studies in a uniform field and a non-uniform field. Three types of consolidation are employed: three-directional, vertical, and horizontal consolidation. The results of the parametric study show that, in the disturbance zone, the permeability coefficient decreases, the soil stiffness decreases, and the reference pressure increases, resulting in an increase in the dissipation time of excess pore pressure and a noticeable threshold phenomenon, which has commonly been overlooked in previous literature. The research in this paper suggests that significant thresholds occur when the coefficient of permeability decreases to half of the original site's value for three-directional and horizontal consolidation within the disturbed zone. Similarly, the threshold for vertical consolidation is observed when the coefficient of permeability decreases to one-fourth of the original site's value. In pile setup research especially, consolidation is often assumed to be horizontal; the study findings suggest that horizontal consolidation is notably altered by the presence of disturbed zones. Furthermore, the selection of pile installation methods proves to be critical.
A nonlinear excess pore pressure formula is proposed based on cavity expansion theory, which includes the distribution of soil profile modulus with depth.

Keywords: pile setup, threshold value effect, installation effects, uniform field, non-uniform field

Procedia PDF Downloads 40
1011 Influence of Bottom Ash on the Geotechnical Parameters of Clayey Soil

Authors: Tanios Saliba, Jad Wakim, Elie Awwad

Abstract:

Clayey soils exhibit undesirable problems in civil engineering projects: poor soil bearing capacity, shrinkage, cracking, etc. On the other hand, the increasing production of bottom ash and its disposal in an eco-friendly manner is a matter of concern. Soil stabilization using bottom ash is a new technique in geo-environmental engineering. It can be used wherever a soft clayey soil is encountered in foundations or road subgrades, instead of older techniques such as cement-soil mixing. This new technology can be used for road embankments and clayey foundation platforms (shallow or deep foundations) instead of replacing bad soil or using older techniques that are not eco-friendly. Moreover, applying this new technique in geotechnical engineering projects can reduce the bottom ash disposal problem, which is growing day after day. The research consists of mixing clayey soil with different percentages of bottom ash at different values of water content and evaluating the mechanical properties of every mix: the percentages of bottom ash are 10%, 20%, 30%, 40%, and 50%, with values of water content of 25%, 35%, and 45% of the mix’s weight. Before testing the different mixes, the clayey soil’s properties were determined: Atterberg limits, the soil’s cohesion and friction angle, and the particle size distribution. In order to evaluate the mechanical properties and behavior of every mix, different tests are conducted: a direct shear test to determine the cohesion and internal friction angle of every mix, and an unconfined compressive strength test (stress-strain curve) to determine the mix’s elastic modulus and compressive strength. Soil samples are prepared in accordance with the ASTM standards and tested at different times, in order to emphasize the influence of the curing period on the variation of the mix’s mechanical properties and characteristics.
As of today, the results obtained are very promising: the mix’s cohesion and friction angle vary as functions of the bottom ash percentage, water content, and curing period. The cohesion increases enormously before decreasing over a long curing period (values of the mix’s cohesion are larger than the intact soil’s cohesion), while the internal friction angle keeps increasing even at a curing period of 28 days (the tests’ largest curing period), which gives a better soil behavior: fewer cracks and better soil bearing capacity.

Keywords: bottom ash, clayey soil, mechanical properties, tests

Procedia PDF Downloads 172
1010 Investigation of Turbulent Flow in a Bubble Column Photobioreactor and Consequent Effects on Microalgae Cultivation Using Computational Fluid Dynamic Simulation

Authors: Geetanjali Yadav, Arpit Mishra, Parthsarathi Ghosh, Ramkrishna Sen

Abstract:

The world is facing the problems of increasing global CO2 emissions, climate change, and a fuel crisis. Therefore, several renewable and sustainable energy alternatives should be investigated to replace non-renewable fuels in the future. Algae present a versatile feedstock for the production of a variety of fuels (biodiesel, bioethanol, bio-hydrogen, etc.) and high-value compounds for food, fodder, cosmetics, and pharmaceuticals. Microalgae are simple microorganisms that require water, light, CO2, and nutrients for growth by the process of photosynthesis, and can grow in extreme environments, utilizing waste gas (flue gas) and waste waters. Mixing, however, is a crucial parameter within the culture system for the uniform distribution of light, nutrients, and gaseous exchange, in addition to preventing settling/sedimentation and the creation of dark zones. The overarching goal of the present study is to improve photobioreactor (PBR) design for enhancing the dissolution of CO2 from ambient air (0.039%, v/v), pure CO2, and coal-fired flue gas (10 ± 2%) into microalgal PBRs. Computational fluid dynamics (CFD), a state-of-the-art technique, has been used to solve the partial differential equations with turbulence closure which represent the dynamics of the fluid in a photobioreactor. In this paper, the hydrodynamic performance of the PBR has been characterized and compared with that of a conventional bubble column PBR using CFD. Parameters such as flow rate (Q), mean velocity (u), and mean turbulent kinetic energy (TKE) were characterized for each experiment across the different aeration schemes tested. The results showed that the modified PBR design had superior liquid circulation properties and gas-liquid transfer, which resulted in the creation of a uniform environment inside the PBR as compared to the conventional bubble column PBR.
The CFD technique has proven promising for successful design and paves the way for future research aimed at developing PBRs for commercial, scaled-up microalgal production.

Keywords: computational fluid dynamics, microalgae, bubble column photobioreactor, flue gas, simulation

Procedia PDF Downloads 229
1009 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls

Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin

Abstract:

The objective of this paper is to investigate the effects of thickness and axial loading during a fire test on the load-bearing capacity of a fire-damaged normal-strength concrete wall. The two factors affect the temperature distributions in concrete members and are mainly assessed through experiments. Toward this goal, three wall specimens of different thicknesses were heated for 2 h according to the ISO standard heating curve, and the temperature distributions through the thicknesses were measured using thermocouples. In addition, two wall specimens were heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on the wall thickness and the axial load applied during the test. After the fire tests, the specimens were cured for one month, followed by load testing. The heated specimens were compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls showed only a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference became evident with respect to the wall thickness. To validate the experimental results, finite element models were generated in which the material properties obtained from the experiments were subjected to elevated temperatures, and the analytical results showed sound agreement with the experimental results. The analytical method, validated through the experimental results, was then applied to model fire-damaged walls 2,800 mm high, the typical story height of residential buildings in Korea, considering the buckling effect. The models for the structural analyses were generated from the deformed shape after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint of the pinned ends.
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load during the fire is within approximately 5%.

Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis

Procedia PDF Downloads 212
1008 Hansen Solubility Parameters, Quality by Design Tool for Developing Green Nanoemulsion to Eliminate Sulfamethoxazole from Contaminated Water

Authors: Afzal Hussain, Mohammad A. Altamimi, Syed Sarim Imam, Mudassar Shahid, Osamah Abdulrahman Alnemer

Abstract:

The exhaustive use of sulfamethoxazole (SUX) has become a global threat to human health due to water contamination from diverse sources. This study addressed the combined application of Hansen solubility parameters (the HSPiP software) and the Quality by Design tool for developing various green nanoemulsions. The HSPiP program assisted in screening suitable excipients based on Hansen solubility parameters and experimental solubility data. Various green nanoemulsions were prepared and characterized for globular size, size distribution, zeta potential, and removal efficiency. Design Expert (DoE) software further helped to identify the critical factors with a direct impact on percent removal efficiency, size, and viscosity. Morphology was visualized under transmission electron microscopy (TEM). Finally, the treated water was studied to confirm the absence of the tested drug, employing the ICP-OES (inductively coupled plasma optical emission spectroscopy) technique and HPLC (high-performance liquid chromatography). The results showed that HSPiP predicted a biocompatible lipid, a safe surfactant (lecithin), and propylene glycol (PG). The experimental solubility of the drug in the predicted excipients was quite convincing and vindicated the prediction. Various green nanoemulsions were fabricated and evaluated by in vitro studies. Globular size (100-300 nm), PDI (0.1-0.5), zeta potential (~25 mV), and removal efficiency (%RE = 70-98%) were found to be in acceptable ranges for deciding the input factors and their levels in DoE. The experimental design tool helped to identify the most critical variables controlling %RE and to optimize the nanoemulsion content under the set constraints. The dispersion time was varied from 5 to 30 min. Finally, the ICP-OES and HPLC techniques corroborated the absence of SUX in the treated water. Thus, the strategy is simple, economic, selective, and efficient.

Keywords: quality by design, sulfamethoxazole, green nanoemulsion, water treatment, ICP-OES, Hansen program (HSPiP software)

Procedia PDF Downloads 73
1007 Energy Content and Spectral Energy Representation of Wave Propagation in a Granular Chain

Authors: Rohit Shrivastava, Stefan Luding

Abstract:

A mechanical wave is the propagation of vibration with transfer of energy and momentum. Studying the energy as well as the spectral energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral, or gas exploration (seismic prospecting) or at non-destructive testing of the internal structure of solids. The study of the energy content (kinetic, potential, and total energy) of a pulse propagating through an idealized one-dimensional discrete particle system, such as a mass-disordered granular chain, can assist in understanding the energy attenuation due to disorder as a function of propagation distance. The spectral analysis of the energy signal can assist in understanding dispersion as well as attenuation due to scattering at different frequencies (scattering attenuation). The selection of a one-dimensional granular chain also restricts the study to the P-wave attributes of the wave, removing the influence of shear or rotational waves. Granular chains with different mass distributions have been studied by randomly selecting masses from normal, binary, and uniform distributions, with the standard deviation of the distribution taken as the disorder parameter: a higher standard deviation means higher disorder and a lower standard deviation means lower disorder. For obtaining macroscopic/continuum properties, ensemble averaging has been used. Interpreting information from the total energy signal turned out to be much easier than from the displacement, velocity, or acceleration signals of the wave, indicating a better analysis method for wave propagation through granular materials.
Increasing disorder leads to faster attenuation of the signal and decreases the Energy of higher frequency signals transmitted, but at the same time the energy of spatially localized high frequencies also increases. An ordered granular chain exhibits ballistic propagation of energy whereas, a disordered granular chain exhibits diffusive like propagation, which eventually becomes localized at long periods of time.
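The energy bookkeeping described above can be sketched in a few lines. The following is a minimal illustration, not the authors' simulation code: it integrates a linear mass-spring chain (a simplification; quantitative granular models use nonlinear Hertzian contacts) with velocity Verlet, injects a velocity pulse at one end, and tracks the total energy at every step. Masses drawn from a normal distribution with a larger standard deviation give a more disordered chain.

```python
import random

def simulate_chain(masses, k=1.0, dt=0.01, steps=2000):
    """Velocity-Verlet integration of a 1D chain of point masses coupled
    by linear springs of stiffness k, with both ends attached to fixed
    walls.  A velocity pulse is injected at particle 0; returns the
    total (kinetic + potential) energy at every step."""
    n = len(masses)
    u = [0.0] * n            # displacements from equilibrium
    v = [0.0] * n
    v[0] = 1.0               # initial pulse

    def accel(u):
        a = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0       # fixed wall on the left
            right = u[i + 1] if i < n - 1 else 0.0  # fixed wall on the right
            a.append(k * (left - 2.0 * u[i] + right) / masses[i])
        return a

    a = accel(u)
    energies = []
    for _ in range(steps):
        u = [u[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(n)]
        a_new = accel(u)
        v = [v[i] + 0.5 * (a[i] + a_new[i]) * dt for i in range(n)]
        a = a_new
        kinetic = 0.5 * sum(m * vi * vi for m, vi in zip(masses, v))
        potential = 0.5 * k * sum((u[i + 1] - u[i]) ** 2 for i in range(n - 1))
        potential += 0.5 * k * (u[0] ** 2 + u[-1] ** 2)  # wall springs
        energies.append(kinetic + potential)
    return energies

# Disordered chain: masses from a normal distribution; the standard
# deviation (here 0.3) plays the role of the disorder parameter.
random.seed(0)
disordered = [max(0.1, random.gauss(1.0, 0.3)) for _ in range(50)]
energy_history = simulate_chain(disordered)
```

In practice, one would average such energy signals over many mass realizations (ensemble averaging) and Fourier-transform them to obtain the spectral energy discussed in the abstract.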

Keywords: discrete elements, energy attenuation, mass disorder, granular chain, spectral energy, wave propagation

Procedia PDF Downloads 281
1006 Need for Elucidation of Palaeoclimatic Variability in the High Himalayan Mountains: A Multiproxy Approach

Authors: Sheikh Nawaz Ali, Pratima Pandey, P. Morthekai, Jyotsna Dubey, Md. Firoze Quamar

Abstract:

High mountain glaciers are among the most sensitive recorders of climate change because they respond to the combined effect of snowfall and temperature. The Himalayan glaciers have been studied at a good pace during the last decade. However, owing to its large ecological diversity and geographical variability, a major part of the Indian Himalaya remains uninvestigated, and hence the palaeoclimatic patterns, and the chronology of past glaciations in particular, remain controversial for the entire Indian Himalayan transect. Although the Himalayan glaciers are nourished by two important climatic systems, viz. the southwest summer monsoon and the mid-latitude westerlies, the relative influence of these systems is yet to be understood. Nevertheless, the existing chronology (mostly exposure ages) indicates that, irrespective of geographical position, glaciers seem to have grown during phases of enhanced Indian summer monsoon (ISM). The Himalayan mountain glaciers are referred to as the third pole or the water tower of Asia, as they form a huge reservoir of fresh water for the Asian countries. Mountain glaciers are sensitive probes of the local climate and thus present both an opportunity and a challenge: to interpret climates of the past and to predict future changes. The principal objective of all palaeoclimatic studies is to develop predictive models and scenarios. However, it has been found that the glacial chronologies bracket only the major phases of climatic events, and other climatic proxies are sparse in the Himalaya. This is why compilations of data on rapid climatic change during the Holocene show major gaps in this region. Sedimentation in proglacial lakes, conversely, is more continuous and hence can be used to reconstruct a more complete record of past climatic variability, modulated by the changing ice volume of the valley glacier.
The Himalayan region has numerous proglacial lacustrine deposits formed during the late Quaternary period; however, only a few of these deposits have been studied so far. Therefore, it is high time that efforts were made to systematically map the moraines located in different climatic zones, reconstruct the local and regional moraine stratigraphy, and use multiple dating techniques to bracket the events of glaciation. Besides this, emphasis must be placed on multiproxy studies of the lacustrine sediments, which will provide high-resolution palaeoclimatic data for the alpine region of the Himalaya. Although the Himalayan glaciers have fluctuated in accordance with changing climatic conditions (natural forcing), it is too early to arrive at any conclusion. It is crucial to generate multiproxy data sets covering wider geographical and ecological domains, taking into consideration the multiple parameters that directly or indirectly influence glacier mass balance as well as the local climate of a region.

Keywords: glacial chronology, palaeoclimate, multiproxy, Himalaya

Procedia PDF Downloads 259
1005 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation could be studied within the broader framework of constrained language, with simplification as one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and POS-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which traditional indices do not. Another advantage of entropy-based measures is that they are reasonably stable across languages and thus allow reliable comparison among studies on different language pairs. As for the data of the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens; genre (press) and text length (around 2,000 words per text) are comparable across corpora.
More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may indicate which constraints are shared by, and which are peculiar to, each variety.
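As a concrete illustration of the measure (a minimal sketch, not the study's pipeline), word-form entropy can be computed from token frequencies as below; POS-form entropy is computed the same way over part-of-speech tags instead of word forms:

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the token distribution.  Unlike a
    plain type/token ratio, it reflects both how many distinct forms
    occur and how evenly they are used."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four forms used evenly carry 2 bits; a skewed distribution carries less,
# which is the intuition behind reading lower entropy as simplification.
even_h = shannon_entropy(["a", "b", "c", "d"])      # 2.0 bits
skewed_h = shannon_entropy(["a", "a", "a", "b"])    # about 0.81 bits
```

On real corpora, the tokens would be the word forms (or POS tags) of each text, and the per-text entropies would feed the ANOVA comparison across the three corpora.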

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification

Procedia PDF Downloads 87
1004 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and injuries to it can be severe and life threatening. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of physical damage to living tissue is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments, namely the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using AMIRA software to extract the finer grooves of the brain. Maintaining accuracy requires a large number of mesh elements and, consequently, a long computational time; therefore, mesh optimization has been performed using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony-series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and the intracranial pressure distribution due to impact on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissue is discussed in detail. The paper finds that a back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
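The Prony-series material model mentioned above represents the relaxation modulus as a long-term modulus plus a sum of decaying exponentials, G(t) = G_inf + sum of G_i * exp(-t / tau_i). The sketch below uses made-up coefficients for illustration, not the calibrated brain-tissue values used in the study:

```python
import math

def prony_relaxation_modulus(t, g_inf, terms):
    """Relaxation modulus G(t) = g_inf + sum_i G_i * exp(-t / tau_i)
    for a viscoelastic Prony series; terms is a list of (G_i, tau_i)
    pairs (modulus contribution, relaxation time)."""
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in terms)

# Hypothetical two-term series (Pa, s); illustrative only.
terms = [(500.0, 0.01), (250.0, 0.1)]
g_instant = prony_relaxation_modulus(0.0, 100.0, terms)    # G(0) = 850.0 Pa
g_longterm = prony_relaxation_modulus(10.0, 100.0, terms)  # decays to g_inf
```

At t = 0 the modulus is the instantaneous (glassy) value; as t grows, each exponential relaxes away and G(t) approaches the long-term modulus g_inf, which is what makes the model suitable for rate-dependent soft tissue.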

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 154
1003 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis

Authors: Avi Shrivastava

Abstract:

In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging to an airplane's wings in a desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants and refugees from coming to the U.S. These events show the desperation refugees feel when trying to leave their homes in disaster zones. Data, however, paints a grave picture of the current refugee situation and indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the fabric of modern society. The terms are often used interchangeably, but they differ considerably: analysis of information reveals rationale and logic, while analysis of data reveals patterns. Patterns revealed by data can enable us to create the tools needed to combat the huge problems on our hands, and data analysis paints a clear picture so that decision-making becomes simpler. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from realistically stating things as they are; this hesitancy can cost lives. When decisions are based on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence of whether a solution exists, giving a clear answer to an otherwise ambiguous crisis and making the problem easier to tackle.
Data science and A.I. can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, insight drawn from data has helped solve many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or in adoptive countries. This paper looks into the various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal help to find asylum in the country where they want to settle, and can point them to marketplaces where they can find people willing to help. Data science and technology can also help address many of refugees' other problems, including food, shelter, employment, security, and assimilation. The refugee problem is among the most challenging, for social and political reasons alike. Data science and machine learning, drawing on the explosion of data in the last decade, can help prevent refugee crises and solve or alleviate some of the problems refugees face on their journey to a better life.

Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine

Procedia PDF Downloads 66
1002 Research on Structural Changes in Plastic Deformation during Rolling and Crimping of Tubes

Authors: Hein Win Zaw

Abstract:

Today, advanced strategies for aircraft production technology demand higher performance while also requiring manageable processes and reduced production costs. Thus, professionals working in this field are attempting to develop new materials, improve the manufacturability of designs, and create new technological processes, tools, and equipment. This paper discusses research on the structural changes produced by plastic deformation during the rotary expansion and crimping of pipes. Pipelines experience high pressure and pulsating loads; that is why high demands are placed on the mechanical properties of the material and the quality of the external and internal surfaces, and why preserving the cross-sectional shape and the minimum thickness of the pipe wall must be taken into account. In the manufacture of pipes, various operations are used: expansion, crimping, bending, etc. The processes of rotary expansion and crimping are the most widely used for producing various semi-finished products and connecting elements. With the use of high-strength, less plastic materials, these conventional techniques do not allow high-quality parts to be obtained and also have low economic efficiency; research in this field is therefore highly relevant. Rotary expansion and crimping of pipes are accompanied by inhomogeneous plastic deformation, which leads to structural changes in the material and causes deformation hardening, thereby changing the operational reliability of the product. Tube parts obtained by rotary expansion and crimping take a multiplicity of forms and are characterized by different diameters in different sections, formed as a result of inhomogeneous plastic deformation.
The reliability of a coupling obtained by rotary expansion and crimping is determined by the structural arrangement of the material formed during the forming process; there is a maximum value of deformation whose exceedance is unacceptable. The structural state of the material in this condition is determined by the technological mode of forming during rotary expansion and crimping. Considering the above, the objective of the present study is to investigate the structural changes at different levels of plastic deformation accompanying rotary expansion and crimping, and to analyze the stress concentrators at different scale levels that are responsible for the formation of the primary zone of destruction.
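The deformation hardening discussed above is commonly described, although the abstract does not name the authors' constitutive model, by the Hollomon power law sigma = K * eps**n, where flow stress grows with accumulated plastic strain. A small sketch with illustrative (not measured) constants:

```python
def hollomon_flow_stress(strain, k_coeff, n_exp):
    """Hollomon power-law hardening: flow stress sigma = K * eps**n,
    where K (strength coefficient) and n (hardening exponent) are
    material constants.  Values below are illustrative only."""
    return k_coeff * strain ** n_exp

# Hypothetical constants, roughly of the order seen in aluminium alloys.
K, n = 500.0, 0.25                                  # MPa, dimensionless
sigma_at_10pct = hollomon_flow_stress(0.10, K, n)   # ~281 MPa
sigma_at_20pct = hollomon_flow_stress(0.20, K, n)   # higher: the material hardens
```

Inhomogeneous deformation means the strain, and hence the hardened flow stress, varies from section to section of the tube, which is why the attained strain level governs the reliability of the formed coupling.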

Keywords: plastic deformation, rolling of tubes, crimping of tubes, structural changes

Procedia PDF Downloads 325
1001 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry

Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc

Abstract:

Business process outsourcing (BPO) has been one of the fastest-growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the primary outsourced service studied here is performing audits of global clients' logistics. As a service industry, manpower is considered the most important yet most expensive resource in the company; because of this, there is a need to maximize human resources so that people are utilized effectively and efficiently. The main purpose of the study is to optimize the current manpower resources through the effective distribution and assignment of different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix, gathered through a time study that incorporates the learning curve concept. Subsequently, a simulation model was built to replicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated; its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to treat the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to an overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators, which increases the utilization of effective capacity by 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department.
Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when current capacity reaches its limit under the expected increasing demand.
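The kind of cost-minimizing assignment the linear program performs can be illustrated on a toy instance (hypothetical costs, not the study's data; the actual model also handles batch arrivals, demand simulation, and learning-curve-adjusted times). For a small square instance, brute force over permutations suffices:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force the classic assignment problem: cost[i][j] is the
    direct labor cost of routing bill type i to skill level j.  Returns
    (minimum total cost, assignment tuple) over one-to-one assignments.
    A stand-in for the study's LP solver; viable for small n only."""
    n = len(cost)
    best_total, best_perm = None, None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best_total is None or total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# Hypothetical cost matrix: rows are bill types, columns are skill levels.
cost = [[4.0, 2.0, 8.0],
        [4.0, 3.0, 7.0],
        [3.0, 1.0, 6.0]]
total_cost, assignment = min_cost_assignment(cost)
```

For realistic problem sizes, an LP solver or the Hungarian algorithm would replace the exhaustive search, and the cost coefficients would come from the learning-curve-adjusted observed time matrix.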

Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning

Procedia PDF Downloads 511