Search results for: step leaching
576 Enhancing the Quality of Silage Bales Produced by a Commercial Scale Silage Producer in Northern Province, Sri Lanka: A Step Toward Supporting Smallholder Dairy Farmers in the Northern Province, Sri Lanka
Authors: Harithas Aruchchunan
Abstract:
Silage production is an essential aspect of dairy farming, used to provide high-quality feed to ruminants. However, dairy farmers in Northern Province, Sri Lanka, are facing multiple challenges that compromise the quality and quantity of silage produced. To tackle these challenges, promoting silage feeding has become an essential component of sustainable dairy farming practices. In this study, silage bale samples were collected from a newly started silage baling factory in Jaffna, Northern Province, and their quality was analysed at the Veterinary Research Institute laboratory in Kandy in March 2023. The results show the nutritional composition of three Napier grass cultivars: Super Napier, CO6, and Indian Red Napier (BH18). The main parameters analysed were dry matter, pH, lactic acid, soluble carbohydrate, ammonia nitrogen, ash, crude protein, NDF, and ADF. The results indicate that Super Napier and CO6 have higher crude protein content and lower ADF levels, making them suitable for producing high-quality silage. The pH levels of all three cultivars were safe, and the ammonia nitrogen levels were considered appropriate. However, the laboratory results indicate that the quality of the silage bales produced can be further enhanced. Dairy farmers should be encouraged to adopt these cultivars to achieve better yields, as they are high in protein and are better suited to Northern Province's soil and climate. Therefore, it is vital to educate small-scale fodder producers, who supply the raw material to silage factories, on the best practices for cultivating these new cultivars. To improve silage bale production and quality in Northern Province, Sri Lanka, we recommend increasing public awareness about silage feeding, providing education and training to dairy farmers and small-scale fodder producers on modern silage production techniques, and improving the availability of raw materials for silage production. Additionally, Napier grass cultivars need to be promoted among dairy farmers for better production and quality of silage bales. Failing to improve the quality and quantity of silage bale production could not only lead to the decline of dairy farming in Northern Province, Sri Lanka, but also negatively impact the economy.
Keywords: silage bales, dairy farming, economic crisis, Sri Lanka
Procedia PDF Downloads 92
575 Three-Dimensional Fluid-Structure-Thermal Coupling Dynamics Simulation Model of a Gas-Filled Fluid-Resistance Damper and Experimental Verification
Authors: Wenxue Xu
Abstract:
A fluid resistance damper is an important damping element used to attenuate vehicle vibration: it converts vibration energy into thermal energy, which is dissipated through oil throttling, making it a typical fluid-solid-heat coupling problem. A complete three-dimensional fluid-structure-thermal coupling dynamics simulation model of a gas-filled fluid-resistance damper was established. The flow-condition-based interpolation (FCBI) method with a direct coupling calculation, and the FCBI-C fluid numerical analysis method with an iterative coupling calculation, are used to obtain the dynamic response of the piston rod under sinusoidal excitation. The effects of the system parameters (air chamber inflation pressure, spring compression characteristics, constant flow passage cross-sectional area, and oil parameters) and of the excitation parameters (frequency and amplitude) are analyzed and compared in detail with respect to the differential pressure characteristics, velocity characteristics, flow characteristics, dynamic response of the valve opening, floating piston response, and piston rod output force characteristics. Experiments were carried out for some of the simulated conditions. The results show that the node-based FCBI (flow-condition-based interpolation) fluid numerical analysis method with direct coupling better guarantees conservation in the flow field calculation and allows a larger calculation step, but requires more memory. If the chamber inflation pressure is too low, the damper will cavitate; a higher inflation pressure increases the hysteresis of the velocity characteristic and makes the sealing requirements stricter. The spring compression characteristics have a great influence on the damping characteristics of the damper, so obtaining a reasonable damping characteristic requires a properly designed spring compression characteristic. The larger the cross-sectional area of the constant flow channel, the smaller the maximum output force, but the more stable the behavior when the valve plate opens.
Keywords: damper, fluid-structure-thermal coupling, heat generation, heat transfer
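As a rough illustration of the oil-throttling principle described above (and not of the authors' three-dimensional FCBI model), the following Python sketch estimates the orifice pressure drop and piston-rod force of a lumped-parameter damper under sinusoidal excitation; all geometric and fluid values are assumed.

```python
import numpy as np

# Illustrative lumped-parameter sketch: the piston displaces oil through a
# constant flow passage; the turbulent orifice law gives the pressure drop,
# and the output force follows from the pressure acting on the piston area.
rho = 860.0          # oil density, kg/m^3 (assumed)
Cd = 0.7             # discharge coefficient of the flow passage (assumed)
A_orifice = 8e-6     # constant flow passage cross-sectional area, m^2
A_piston = 1.2e-3    # piston area, m^2
freq, amp = 2.0, 0.02   # sinusoidal excitation: 2 Hz, 20 mm amplitude

t = np.linspace(0.0, 1.0 / freq, 500)
v = 2.0 * np.pi * freq * amp * np.cos(2.0 * np.pi * freq * t)  # piston velocity
Q = A_piston * np.abs(v)                                       # throttled flow rate
dP = rho * Q**2 / (2.0 * (Cd * A_orifice) ** 2)                # orifice pressure drop
F = np.sign(v) * dP * A_piston                                 # damping force on the rod

print(f"peak pressure drop: {dP.max()/1e5:.1f} bar, peak force: {abs(F).max():.0f} N")
# A larger A_orifice lowers dP and hence the maximum output force, mirroring
# the trend reported for the constant flow channel in the study.
```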
Procedia PDF Downloads 144
574 Motherhood Constrained: The Minotaur Legend Reimagined Through the Perspective of Marginalized Mothers
Authors: Gevorgianiene Violeta, Sumskiene Egle
Abstract:
Background. Child removal is a profound and life-altering measure that significantly impacts both children and their mothers. Unfortunately, mothers with intellectual disabilities are disproportionately affected by the removal of their children. This action is often taken due to concerns about the mother's perceived inability to care for the child, instances of abuse and neglect, or struggles with addiction. In many cases, the failure to meet society's standards of a "good mother" is seen as a deviation from conventional norms of femininity and motherhood. From an institutional perspective, separating a child from their mother is sometimes viewed as a step toward restoring justice or doing what is considered "right." In another light, this act of child removal can be seen as the removal of a mother from her child, an attempt to shield society from the complexities and fears associated with motherhood for women with disabilities. This separation can be likened to the Greek legend of the Minotaur, a fearsome beast confined within an impenetrable labyrinth. By reimagining this legend, we can see the social fears surrounding 'mothering with intellectual disability' as deeply sealed within an unreachable place. The Aim of this Presentation. Our goal with this presentation is to draw from our research and the metaphors found in the Greek legend to delve into the profound challenges faced by mothers with intellectual disabilities in raising their children. These challenges often become entangled within an insurmountable labyrinth, including navigating complex institutional bureaucracies, enduring persistent doubts cast upon their maternal competencies, battling unfavorable societal narratives, and struggling to retain custody of their children. Coupled with limited social support networks, these challenges frequently lead to situations resulting in maternal failure and, ultimately, child removal. On a broader scale, this separation of a child from their mother symbolizes society’s collective avoidance of confronting the issue of 'mothering with disability,' which can only be effectively addressed through united efforts. Conclusion. Just as in the labyrinth of the Minotaur legend, the struggles faced by mothers with disabilities in their pursuit of retaining their children reveal the need for a metaphorical 'string of Ariadne.' This string symbolizes the support offered by social service providers, communities, and the loved ones these women often dream of but rarely encounter in their lives.Keywords: motherhood, disability, child removal, support.
Procedia PDF Downloads 58
573 Variable Mapping: From Bibliometrics to Implications
Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek
Abstract:
Literature review is indispensable in research. One of the key techniques used in it is bibliometric analysis, where one of the methods is science mapping. The classic approach that dominates today in this area consists of mapping areas, keywords, terms, authors, or citations. This approach is also used in relation to the review of literature in the field of marketing. The development of technology has resulted in the fact that researchers and practitioners use the capabilities of software available on the market for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications involves the implementation of a literature review, and it is useful in areas with a relatively high number of publications. Despite this well-grounded science mapping approach having been applied in the literature reviews, performing them is a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, takes advantage of the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Next, the analysis of key words mapping results in VOSviewer science mapping software was performed and compared with the variable map prepared manually on the same articles. Seven independent expert judges (management scientists on different levels of expertise) assessed the usability of both the stage of formulating, the research problem, and content/thematic analysis. The results show the advantage of variable mapping in the formulation of the research problem and thematic/content analysis. First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between the variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story with an indication of the directions of relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, the development of the software enabling the automation of the variable mapping process on large data sets may be a breakthrough change in the field of conducting literature research.Keywords: bibliometrics, literature review, science mapping, variable mapping
Procedia PDF Downloads 120
572 UKIYO-E: User Knowledge Improvement Based on Youth Oriented Entertainment, Art Appreciation Support by Interacting with Picture
Authors: Haruya Tamaki, Tsugunosuke Sakai, Ryuichi Yoshida, Ryohei Egusa, Shigenori Inagaki, Etsuji Yamaguchi, Fusako Kusunoki, Miki Namatame, Masanori Sugimoto, Hiroshi Mizoguchi
Abstract:
Art appreciation is an important part of children's education; it can enrich sensibility and creativity. To achieve this, children have to learn knowledge about pictures, such as their social and historical backgrounds and the author's intention. A high learning effect can be achieved through active learning, so it is important to encourage children to learn the knowledge about pictures actively, and for that, they need to feel interested. In a typical art museum, information about pictures is conveyed through written commentary. We expect that this method cannot arouse children's interest in pictures because they find it boring; in brief, learning the picture information is difficult. Therefore, we are developing an art-appreciation support system that encourages children to learn the knowledge about pictures actively by making them feel interested. The system uses interaction with pictures to teach knowledge about them: to interact with a picture, children have to speak to it themselves. We expect that interacting with pictures will encourage active learning of the knowledge about them. To make the learning even more active, children can choose whom to talk with, based on information about their location and movement. The system must therefore acquire the location, movement, and voice of the children in real time. We utilize Microsoft's Kinect v2 sensor and its libraries, namely the Kinect for Windows SDK and the Speech Platform SDK v11, for this purpose; using this sensor and these libraries, we can determine the location, movement, and voice of the children. As the first step of this system, we developed a ukiyo-e game that uses ukiyo-e as the appreciation object. Ukiyo-e is a traditional Japanese graphic art that has influenced Western society, so we believe the ukiyo-e game will be appreciated. In this study, we applied talking to pictures to learn information about them because we believe that this is more interesting than commenting on the pictures using texts alone. However, we could not confirm whether talking to the pictures is in fact more interesting than commenting with texts only. Thus, we evaluated through EDA (electrodermal activity) measurement whether users develop an interest in the pictures while talking to them using voice recognition or while commenting on them using texts only. In addition, we quantitatively evaluated whether primary schoolchildren enjoyed the game and whether they learned information about the pictures. In this paper, we summarize these two evaluation results.
Keywords: actively learning, art appreciation, EDA, Kinect V2
Procedia PDF Downloads 285
571 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics
Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich
Abstract:
Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high, in the order of several hours of compute time for a few seconds of real time, thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. This data is stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior in high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than a classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles, observing the mixing and transport of the gas tracers and gaining insights into their convective and diffusive patterns, as a step towards heat and mass transfer modelling. Finally, we run rCFD simulations and calibrate their numerical and physical parameters against conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes
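The recurrence idea can be sketched compactly: a short full simulation supplies a database of snapshots, a recurrence (similarity) matrix is computed, and a long run is generated by replaying the database and jumping to similar states. The Python sketch below illustrates only this extrapolation logic with random stand-in fields; the actual rCFD method also propagates tracers on the recurred flow and jumps at recurrence points throughout the database, not only at its end.

```python
import numpy as np

# Minimal sketch of data-based recurrence extrapolation (not the full rCFD method).
rng = np.random.default_rng(0)
n_cells, n_snapshots = 200, 60
database = rng.random((n_snapshots, n_cells))        # stand-in for stored solid-fraction fields

# Recurrence matrix: pairwise distance between stored snapshots.
diff = database[:, None, :] - database[None, :, :]
recurrence = np.linalg.norm(diff, axis=2)
np.fill_diagonal(recurrence, np.inf)                 # never "jump" to the same frame

def extrapolate(n_steps: int) -> list:
    """Return the sequence of database indices used for the long-term run."""
    path, i = [], 0
    for _ in range(n_steps):
        path.append(i)
        if i + 1 < n_snapshots:
            i += 1                                   # follow the stored trajectory
        else:
            i = int(np.argmin(recurrence[i]))        # jump to the most similar state
    return path

indices = extrapolate(500)
long_run = database[indices]                          # time-extrapolated fields at full resolution
print(long_run.shape)                                 # (500, 200)
```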
Procedia PDF Downloads 75
570 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business
Authors: Kritchakhris Na-Wattanaprasert
Abstract:
The objective of this research is to design and develop a prototype key performance indicator (KPI) system that is suitable for warehouse management in a case study, based on user requirements. The KPI prototype for the warehouse of a furniture business was developed using the following methodology: identify the scope of the research and study related papers, gather the necessary data and user requirements, develop key performance indicators based on the balanced scorecard, design the program and database for the key performance indicators, code the program and set up the database relationships, and finally test and debug each module. The study uses the Balanced Scorecard (BSC) for selecting and grouping the key performance indicators. Microsoft SQL Server 2010 is used to create the system database, and Microsoft Visual C# 2010 is chosen as the visual-programming language for developing the graphical user interface. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search and edit section, and a report section. The system generates five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report; the user selects the report conditions and the time period. After the system was developed and tested, it was found to be one way of judging the extent to which warehouse objectives have been achieved, and it encourages the warehouse functions to proceed more efficiently. The system can be adjusted appropriately to be useful for other industries. To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future, and it should periodically review the key performance indicators themselves and set more suitable indicators to increase competitiveness and take advantage of new opportunities.
Keywords: key performance indicator, warehouse management, warehouse operation, logistics management
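A minimal sketch of the kind of KPI calculation the reports are built on, here an order fill rate per period compared against a target; the column names, data, and the 98% target are assumptions for illustration, not values from the case study (the actual system is implemented in Visual C# with SQL Server).

```python
import pandas as pd

# Hedged sketch of one internal-process KPI (order line fill rate) per period.
orders = pd.DataFrame({
    "period":        ["2024-01", "2024-01", "2024-02", "2024-02", "2024-02"],
    "lines_ordered": [120, 80, 150, 90, 60],
    "lines_shipped": [118, 78, 150, 85, 60],
})

kpi = (orders.groupby("period")[["lines_shipped", "lines_ordered"]].sum()
             .assign(fill_rate=lambda d: d.lines_shipped / d.lines_ordered))
kpi["target"] = 0.98                      # target value, to be reviewed periodically
kpi["achieved"] = kpi.fill_rate >= kpi.target

print(kpi.round(3))                        # one row per period, as in a KPI summary report
```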
Procedia PDF Downloads 431
569 Neural Networks Underlying the Generation of Neural Sequences in the HVC
Authors: Zeina Bou Diab, Arij Daou
Abstract:
The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each has its own cellular, electrophysiological and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time locked to vocalizations, while HVCINT neurons fire tonically at average high frequency throughout song with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutaminergic and gabaergic pharmacology) via different architecture patterning scenarios with the aim to replicate the in vivo firing patterning behaviors. We are able, through these networks, to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, and others. Replication is only a preliminary step that must be followed by model prediction and testing.Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird
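For readers unfamiliar with conductance-based models, the following minimal Hodgkin-Huxley sketch (classical squid-axon parameters, forward Euler integration) illustrates the framework the HVC neuron models are built on; the study's models add the cell-class-specific currents identified in vitro, which are not reproduced here.

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley neuron under constant current drive.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV
dt, T = 0.01, 50.0                              # ms
steps = int(T / dt)

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32
I_ext = 10.0                                    # injected current, uA/cm^2
spikes = 0
for k in range(steps):
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                        # count upward zero crossings as spikes
        spikes += 1
    V = V_new

print(f"{spikes} spikes in {T} ms under {I_ext} uA/cm^2 drive")
```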
Procedia PDF Downloads 71
568 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case
Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza
Abstract:
Reverse engineering processes are widely used in industry with the main goal of determining the materials and manufacturing processes used to produce a component, and many characterization techniques and computational tools are available to obtain this information. A case study of reverse engineering applied to a secondary sandwich-hybrid structure used in a helicopter is presented. The methodology consists of five main steps, which can be applied to any similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material properties characterization, and manufacturing process characterization. Together these provide the traceability of the materials and processes of aeronautical products that ensures their airworthiness, and a detailed explanation of each step is covered. The criticality and functionality of each part, information on the state of the art, and information obtained from interviews with the technical groups of the helicopter's operators were analyzed; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation allowed all the characteristics of the materials used in the manufacture of the component to be obtained. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon, and glass fibers, aluminum honeycomb core, epoxy resin, and epoxy adhesive. The stacking sequence and volumetric fiber fraction are critical for the mechanical behavior; an acid digestion method was used to determine them. This also helped in determining the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests, and the results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing process were correctly determined. Tooling for the manufacture was designed and built according to the geometry and manufacturing process requisites, the part was manufactured, and the required mechanical and environmental tests were performed. Finally, geometric characterization and non-destructive techniques allowed the quality of the part to be verified.
Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype
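A small worked example of the acid digestion calculation mentioned above: the resin is digested away, the remaining fibers are weighed, and the fiber volume fraction follows from the constituent densities. All masses and densities below are assumed for illustration and are not values from the study.

```python
# Fiber volume fraction from an acid digestion test.
def fiber_volume_fraction(m_composite, m_fiber, rho_fiber, rho_composite):
    """Fiber volume fraction V_f = (m_f / rho_f) / (m_c / rho_c)."""
    return (m_fiber / rho_fiber) / (m_composite / rho_composite)

# Example: a 2.00 g carbon/epoxy coupon leaving 1.25 g of fibers after digestion.
vf = fiber_volume_fraction(m_composite=2.00, m_fiber=1.25,
                           rho_fiber=1.78, rho_composite=1.55)   # densities in g/cm^3
print(f"fiber volume fraction = {vf:.2f}")    # about 0.54
```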
Procedia PDF Downloads 418
567 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains
Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron
Abstract:
Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite, at that time, all Gram-negative, fermentative, non-sporulating and peritrichously flagellated plant pathogenic bacteria. Following the work of Waldee (1945), the Approved Lists of Bacterial Names, and the bacteriology manuals of 1980, they were described under either the genus name Erwinia or Pectobacterium. The genus Pectobacterium was formally described in 1998 on the basis of 265 Pectobacterium strains. Currently, there are 21 species of Pectobacterium, including Pectobacterium betavasculorum, a member of the genus since 2003, which causes soft rot on sugar beet tubers. Based on the biochemical experiments carried out on this species, it is known that these bacteria are Gram-negative, catalase-positive, oxidase-negative, facultatively anaerobic, able to utilize gelatin, and cause symptoms of soft rot on potato and sugar beet tubers. The very fact of growing on sugar beet may indicate a metabolism characteristic of this species alone. Metabolomics, broadly defined as the biology of metabolic systems, allows comprehensive measurements of metabolites. Metabolomics and genomics are complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growing conditions. The metabolomic profiles of the biomass and of the culture media were determined. For sample preparation, the following protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v:v:v) was added to 900 µl of biomass taken from the bottom of the tube and to 900 µl of nutrient medium taken from above the bacterial biomass. After centrifugation (13,000 x g, 15 min, 4 °C), 300 µL of the obtained supernatants were concentrated in a rotary vacuum concentrator and evaporated to dryness. Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical analysis using both uni- and multivariate tests and were evaluated using the KEGG database to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of the substrates contained in the growing environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference, and an understanding of the homeostatic abilities of P. betavasculorum.
Keywords: GC-MS chromatography, metabolomics, metabolism, pectobacterium strains, pectobacterium betavasculorum
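As a hedged illustration of the multivariate step, the sketch below log-transforms and autoscales a peak-area table and inspects sample grouping with PCA; the data are random stand-ins, not the P. betavasculorum measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Typical multivariate workflow for a GC-MS peak table (samples x metabolites).
rng = np.random.default_rng(1)
peak_areas = rng.lognormal(mean=8.0, sigma=1.0, size=(12, 40))   # 12 samples, 40 metabolites
groups = ["biomass"] * 6 + ["medium"] * 6

X = StandardScaler().fit_transform(np.log10(peak_areas))          # log-transform and autoscale
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

for g, (pc1, pc2) in zip(groups, scores):
    print(f"{g:8s}  PC1={pc1:6.2f}  PC2={pc2:6.2f}")
print("explained variance:", pca.explained_variance_ratio_.round(2))
```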
Procedia PDF Downloads 79
566 The Situation in Afghanistan as a Step Forward in Putting an End to Impunity
Authors: Jelena Radmanovic
Abstract:
On 5 March 2020, the International Criminal Court has decided to authorize the investigation into the crimes allegedly committed on the territory of Afghanistan after 1 May 2003. The said determination has raised several controversies, including the recently imposed sanctions by the United States, furthering the United States' long-standing rejection of the authority of the International Criminal Court. The purpose of this research is to address the said investigation in light of its importance for the prevention of impunity in the cases where the perpetrators are nationals of Non-Party States to the Rome Statute. Difficulties that the International Criminal Court has been facing, concerning the establishment of its jurisdiction in those instances where an involved state is not a Party to the Rome Statute, have become the most significant stumbling block undermining the importance, integrity, and influence of the Court. The Situation in Afghanistan raises even further concern, bearing in mind that the Prosecutor’s Request for authorization of an investigation pursuant to article 15 from 20 November 2017 has initially been rejected with the ‘interests of justice’ as an applied rationale. The first method used for the present research is the description of the actual events regarding the aforementioned decisions and the following reactions in the international community, while with the second method – the method of conceptual analysis, the research will address the decisions pertaining to the International Criminal Court’s jurisdiction and will attempt to address the mentioned Decision of 5 March 2020 as an example of good practice and a precedent that should be followed in all similar situations. The research will attempt parsing the reasons used by the International Criminal Court, giving rather greater attention to the latter decision that has authorized the investigation and the points raised by the officials of the United States. It is a find of this research that the International Criminal Court, together with other similar judicial instances (Nuremberg and Tokyo Tribunals, The International Criminal Tribunal for the former Yugoslavia, The International Criminal Tribunal for Rwanda), has presented the world with the possibility of non-impunity, attempting to prosecute those responsible for the gravest of crimes known to the humanity and has shown that such persons should not enjoy the benefits of their immunities, with its focus primarily on the victims of such crimes. Whilst it is an issue that will most certainly be addressed further in the future, with the situations that will be brought before the International Criminal Court, the present research will make an attempt at pointing to the significance of the situation in Afghanistan, the International Criminal Court as such and the international criminal justice as a whole, for the purpose of putting an end to impunity.Keywords: Afghanistan, impunity, international criminal court, sanctions, United States
Procedia PDF Downloads 127
565 In vitro Establishment and Characterization of Oral Squamous Cell Carcinoma Derived Cancer Stem-Like Cells
Authors: Varsha Salian, Shama Rao, N. Narendra, B. Mohana Kumar
Abstract:
Evolving evidence proposes the existence of a highly tumorigenic subpopulation of undifferentiated, self-renewing cancer stem cells, responsible for exhibiting resistance to conventional anti-cancer therapy, recurrence, metastasis and heterogeneous tumor formation. Importantly, the mechanisms exploited by cancer stem cells to resist chemotherapy are very less understood. Oral squamous cell carcinoma (OSCC) is one of the most regularly diagnosed cancer types in India and is associated commonly with alcohol and tobacco use. Therefore, the isolation and in vitro characterization of cancer stem-like cells from patients with OSCC is a critical step to advance the understanding of the chemoresistance processes and for designing therapeutic strategies. With this, the present study aimed to establish and characterize cancer stem-like cells in vitro from OSCC. The primary cultures of cancer stem-like cell lines were established from the tissue biopsies of patients with clinical evidence of an ulceroproliferative lesion and histopathological confirmation of OSCC. The viability of cells assessed by trypan blue exclusion assay showed more than 95% at passage 1 (P1), P2 and P3. Replication rate was performed by plating cells in 12-well plate and counting them at various time points of culture. Cells had a more marked proliferative activity and the average doubling time was less than 20 hrs. After being cultured for 10 to 14 days, cancer stem-like cells gradually aggregated and formed sphere-like bodies. More spheroid bodies were observed when cultured in DMEM/F-12 under low serum conditions. Interestingly, cells with higher proliferative activity had a tendency to form more sphere-like bodies. Expression of specific markers, including membrane proteins or cell enzymes, such as CD24, CD29, CD44, CD133, and aldehyde dehydrogenase 1 (ALDH1) is being explored for further characterization of cancer stem-like cells. To summarize the findings, the establishment of OSCC derived cancer stem-like cells may provide scope for better understanding the cause for recurrence and metastasis in oral epithelial malignancies. Particularly, identification and characterization studies on cancer stem-like cells in Indian population seem to be lacking thus provoking the need for such studies in a population where alcohol consumption and tobacco chewing are major risk habits.Keywords: cancer stem-like cells, characterization, in vitro, oral squamous cell carcinoma
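The population doubling time referred to above follows from the exponential growth relation Td = t·ln(2)/ln(Nt/N0); the short sketch below works through an assumed cell count, giving a value under 20 h as reported. The counts are illustrative, not data from this study.

```python
import math

# Population doubling time from two cell counts separated by a culture interval.
def doubling_time(n0: float, nt: float, hours: float) -> float:
    return hours * math.log(2) / math.log(nt / n0)

# e.g. 5.0e4 cells plated, 2.8e5 cells counted after 48 h of culture
td = doubling_time(n0=5.0e4, nt=2.8e5, hours=48.0)
print(f"doubling time = {td:.1f} h")   # about 19.3 h, i.e. under 20 h
```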
Procedia PDF Downloads 221
564 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure
Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik
Abstract:
Lattice Boltzmann Method has been advantageous in simulating complex boundary conditions and solving for fluid flow parameters by streaming and collision processes. This paper includes the study of three different test cases in a confined domain using the method of the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate Lid Driven Cavity flow for different Reynolds Number (100, 400 and 1000) with a domain aspect ratio of 1, i.e., square cavity. A moment-based boundary condition is used for more accurate results. 2. A Thermal Lattice BGK (Bhatnagar-Gross-Krook) Model is developed for the Rayleigh Benard convection for both test cases - Horizontal and Vertical Temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both the test cases (10^3 ≤ Ra ≤ 10^6) keeping the Prandtl number at 0.71. A stability criteria with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy based Lattice Boltzmann Model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with D2Q9 (density) model and D2Q5 (temperature) model are used for two different test cases-the conduction dominated melting and the convection dominated melting. The solidification process is also simulated using the enthalpy based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2 with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing works of literature. The simulated results demonstrate excellent agreement with the existing benchmark solution within an error limit of ± 0.05 implicates the viability of this method for complex fluid flow problems.Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT
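For orientation, the sketch below shows the core D2Q9 single-relaxation-time (BGK) update, equilibrium, collision, and streaming, on a periodic domain; the moment-based boundaries, thermal BGK model, and enthalpy-based phase-change treatment used in the study are omitted.

```python
import numpy as np

# Minimal D2Q9 SRT/BGK step: equilibrium, collision, streaming (periodic domain).
nx, ny, tau = 64, 64, 0.8
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

f = np.ones((9, ny, nx)) * w[:, None, None]          # start from rest, rho = 1

def equilibrium(rho, ux, uy):
    usq = ux**2 + uy**2
    feq = np.empty((9, ny, nx))
    for i in range(9):
        eu = e[i, 0]*ux + e[i, 1]*uy
        feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

for step in range(100):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for i in range(9):                                # streaming (periodic wrap-around)
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=1), e[i, 1], axis=0)

print("mass conserved:", np.isclose(f.sum(), nx * ny))
```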
Procedia PDF Downloads 128
563 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection
Authors: Ali Hamza
Abstract:
Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images is employed, encompassing diverse cases. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network
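The overlap metrics named in the evaluation can be stated compactly; the sketch below computes the Dice coefficient and Jaccard index for two toy binary masks, not for actual ultrasound segmentations.

```python
import numpy as np

# Dice and Jaccard overlap metrics for binary segmentation masks.
def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred  = np.zeros((64, 64), dtype=bool); pred[22:42, 22:42]  = True

print(f"Dice = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}")
```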
Procedia PDF Downloads 84
562 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques
Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh
Abstract:
In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz has dried up leading to saline sediments exposure on the surface lake coastal areas being highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relevance to soil physicochemical properties and also modeling of wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from 0-5 cm depth in 414000 hectares using stratified random sampling method. To measure the WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at the height of 20 cm) in wind tunnel and its relationship with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 76.69-9.98 (g m-2 min-1)/(m s-1) with a mean of 10.21 and coefficient of variation of 94.5% showing a relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and percentage of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Results showed that the mean weight diameter, erodible fraction and percentage of size class 0.1-0.25 mm demonstrated stronger relationship with WE (coefficients of determination were 0.69, 0.67 and 0.68, respectively). This study also compared efficiency of multiple linear regression (MLR), gene expression programming (GEP), artificial neural network (MLP), artificial neural network based on genetic algorithm (MLP-GA) and artificial neural network based on whale optimization algorithm (MLP-WOA) in predicting of soil wind erodibility in Dasht-e-Tabriz. Among 32 measured soil variable, percentages of fine sand, size classes of 1.7-2.0 and 0.1-0.25 mm (secondary particles) and organic carbon were selected as the model inputs by step-wise regression. Findings showed MLP-WOA as the most powerful artificial intelligence techniques (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) to predict soil wind erodibility in the study area; followed by MLP-GA, MLP, GEP and MLR and the difference between these methods were significant according to the MGN test. Based on the above finding MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network
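A brief sketch of the goodness-of-fit statistics used to rank the models (R2, NSE, ME, RMSE); the observed and simulated erodibility values below are invented solely to show the calculations.

```python
import numpy as np

# Goodness-of-fit statistics for comparing observed and simulated erodibility.
def r2(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

def nse(obs, sim):                      # Nash-Sutcliffe efficiency
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def me(obs, sim):                       # mean error (bias)
    return np.mean(sim - obs)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

obs = np.array([12.1, 9.8, 25.4, 40.2, 7.6, 18.9, 33.0, 11.2])
sim = np.array([13.0, 8.9, 23.8, 42.5, 9.1, 17.4, 30.9, 12.0])
for name, fn in [("R2", r2), ("NSE", nse), ("ME", me), ("RMSE", rmse)]:
    print(f"{name:4s} = {fn(obs, sim):6.3f}")
```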
Procedia PDF Downloads 71
561 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food
Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite
Abstract:
The most complicated step of the determination of volatile compounds in complex matrices is the separation of analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk to lose the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters into the headspace together with the analyte. Because of that, the determination sensitivity of the analyte is reduced, a huge solvent peak in the chromatogram can overlap with the peaks of the analyts. The sensitivity is also limited by the fact that the sample can’t be heated at a higher temperature than the solvent boiling point. In 2018 it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy to prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, much lower melting point than that of any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of lipid oxidation degree as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents have been synthesized, and the influence of the temperature and microwaves on their headspace gas chromatographic behaviour has been investigated. Using the most suitable DES, microwave assisted extraction conditions and headspace gas chromatographic conditions have been optimized for the determination of hexanal in potato chips. Under optimized conditions, the quality parameters of the prepared technique have been determined. The suggested technique was applied for the determination of hexanal in potato chips and other fat-rich food.Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction
Procedia PDF Downloads 195
560 Surge in U. S. Citizens Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared with the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many expatriates not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. While there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to survey their reasons, and the U.S. government does not comment on the reasons given during the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in a first step by re-using recent surveys conducted by different researchers within the population of U.S. citizens residing abroad. These surveys question the personal situation in the context of tax, compliance, citizenship, and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous variables) and multiple dependent (endogenous) variables; (3) including unobservable latent variables; and (4) modeling measurement error, i.e., the degree to which observable variables describe the latent variables. Moreover, SEM is appealing because the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of each construct are caused by various latent variables. The given surveys delivered highly correlated items, so it is impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating
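A hedged sketch of how the outer (measurement) and inner (structural) parts of such a model could be specified, assuming the Python semopy package and its lavaan-style syntax; the constructs, indicators, and data below are invented placeholders, not the re-used surveys.

```python
import numpy as np
import pandas as pd
from semopy import Model   # assumes the semopy package (lavaan-style model syntax)

# Random stand-in survey data with two exogenous constructs driving one intention construct.
rng = np.random.default_rng(2)
n = 300
tax = rng.normal(size=n); comp = rng.normal(size=n)
intent = 0.5 * tax + 0.4 * comp + rng.normal(scale=0.7, size=n)
data = pd.DataFrame({
    "tax1": tax + rng.normal(scale=0.5, size=n),
    "tax2": tax + rng.normal(scale=0.5, size=n),
    "comp1": comp + rng.normal(scale=0.5, size=n),
    "comp2": comp + rng.normal(scale=0.5, size=n),
    "int1": intent + rng.normal(scale=0.5, size=n),
    "int2": intent + rng.normal(scale=0.5, size=n),
})

desc = """
TaxBurden      =~ tax1 + tax2
ComplianceLoad =~ comp1 + comp2
ExpatIntention =~ int1 + int2
ExpatIntention ~ TaxBurden + ComplianceLoad
"""

model = Model(desc)
model.fit(data)
print(model.inspect())   # loadings (outer model) and path estimates (inner model)
```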
Procedia PDF Downloads 221
559 Depictions of Human Cannibalism and the Challenge They Pose to the Understanding of Animal Rights
Authors: Desmond F. Bellamy
Abstract:
Discourses about animal rights usually assume an ontological abyss between human and animal. This supposition of non-animality allows us to utilise and exploit non-humans, particularly those with commercial value, with little regard for their rights or interests. We can and do confine them, inflict painful treatments such as castration and branding, and slaughter them at an age determined only by financial considerations. This paper explores the way images and texts depicting human cannibalism reflect this deprivation of rights back onto our species and examines how this offers new perspectives on our granting or withholding of rights to farmed animals. The animals we eat – sheep, pigs, cows, chickens and a small handful of other species – are during processing de-animalised, turned into commodities, and made unrecognisable as formerly living beings. To do the same to a human requires the cannibal to enact another step – humans must first be considered as animals before they can be commodified or de-animalised. Different iterations of cannibalism in a selection of fiction and non-fiction texts will be considered: survivalism (necessitated by catastrophe or dystopian social collapse), the primitive savage of colonial discourses, and the inhuman psychopath. Each type of cannibalism shows alternative ways humans can be animalised and thereby dispossessed of both their human and animal rights. Human rights, summarised in the UN Universal Declaration of Human Rights as ‘life, liberty, and security of person’ are stubbornly denied to many humans, and are refused to virtually all farmed non-humans. How might this paradigm be transformed by seeing the animal victim replaced by an animalised human? People are fascinated as well as repulsed by cannibalism, as demonstrated by the upsurge of films on the subject in the last few decades. Cannibalism is, at its most basic, about envisaging and treating humans as objects: meat. It is on the dinner plate that the abyss between human and ‘animal’ is most challenged. We grasp at a conscious level that we are a species of animal and may become, if in the wrong place (e.g., shark-infested water), ‘just food’. Culturally, however, strong traditions insist that humans are much more than ‘just meat’ and deserve a better fate than torment and death. The billions of animals on death row awaiting human consumption would ask the same if they could. Depictions of cannibalism demonstrate in graphic ways that humans are animals, made of meat and that we can also be butchered and eaten. These depictions of us as having the same fleshiness as non-human animals reminds us that they have the same capacities for pain and pleasure as we do. Depictions of cannibalism, therefore, unconsciously aid in deconstructing the human/animal binary and give a unique glimpse into the often unnoticed repudiation of animal rights.Keywords: animal rights, cannibalism, human/animal binary, objectification
Procedia PDF Downloads 138
558 Gender Equality at Workplace in Iran - Strategies and Successes Against Systematic Bias
Authors: Leila Sadeghi
Abstract:
Gender equality is a critical concern in the workplace, particularly in Iran, where legal and social barriers contribute to significant disparities. This abstract presents a case study of Dahi Bondad Co., a company based in Tehran, Iran that recognized the urgency of addressing the gender gap within its organization. Through a comprehensive investigation, the company identified issues related to biased recruitment, pay disparities, promotion biases, internal barriers, and everyday boundaries. This abstract highlights the strategies implemented by Dahi Bondad Co. to combat these challenges and foster gender equality. The company revised its recruitment policies, eliminated gender-specific language in job advertisements, and implemented blind resume screening to ensure equal opportunities for all applicants. Comprehensive pay equity analyses were conducted, leading to salary adjustments based on qualifications and experience to rectify pay disparities. Clear and transparent promotion criteria were established, and training programs were provided to decision-makers to raise awareness about unconscious biases. Additionally, mentorship and coaching programs were introduced to support female employees in overcoming self-limiting beliefs and imposter syndrome. At the same time, practical workshops and gamification techniques were employed to boost confidence and encourage women to step out of their comfort zones. The company also recognized the importance of dress codes and allowed optional hijab-wearing, respecting local traditions while promoting individual freedom. As a result of these strategies, Dahi Bondad Co. successfully fostered a more equitable and empowering work environment, leading to increased job satisfaction for both male and female employees within a short timeframe. This case study serves as an example of practical approaches that human resource managers can adopt to address gender inequality in the workplace, providing valuable insights for organizations seeking to promote gender equality in similar contexts.Keywords: gender equality, human resource strategies, legal barrier, social barrier, successful result, successful strategies, workplace in Iran
Procedia PDF Downloads 67
557 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, almost unconsidered is the question, which development scopes can preferably be realized with either empirical-adaptive or deterministic-normative approaches. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company’s technology ability, the prototype manufacturability and the potential solution space as well as external factors like the market accuracy, relevance and volatility will be analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, it is necessary to rate each internal and external factor in terms of the importance for the overall development task. Secondly, each requirement has to be evaluated for every single internal and external factor appropriate to their suitability for empirical-adaptive development. Finally, the total sums of internal and external side are composed in the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion, on which the allocation of empirical-adaptive and deterministic-normative development scopes can be made. In a last step, this indicator will be used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm. The FCM-method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements, which are uncertainly carried out by the market, into empirical-adaptive or deterministic-normative development scopes.Keywords: agile, highly iterative development, agile-indicator, product development
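The fuzzy c-means step can be illustrated with a plain NumPy implementation (fuzzifier m = 2) applied to made-up internal/external factor scores of development scopes; the paper's actual Agile-Indicator weighting is not reproduced here.

```python
import numpy as np

# Compact fuzzy c-means clustering (membership update and weighted centroids).
def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)
    return centers, U

# Each row: [internal factor sum, external factor sum] of one development scope (made up).
scores = np.array([[0.2, 0.3], [0.25, 0.35], [0.8, 0.7],
                   [0.75, 0.9], [0.3, 0.25], [0.85, 0.8]])
centers, U = fuzzy_c_means(scores, c=2)
print("cluster centers:\n", centers.round(2))
print("memberships:\n", U.round(2))             # soft assignment of scopes to sub-clusters
```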
Procedia PDF Downloads 246
556 The Impact of External Technology Acquisition and Exploitation on Firms' Process Innovation Performance
Authors: Thammanoon Charmjuree, Yuosre F. Badir, Umar Safdar
Abstract:
There is a consensus among innovation scholars that knowledge is a vital antecedent of a firm's innovation, e.g., process innovation. Recently, there has been an increasing amount of attention to more open approaches to innovation. This open model emphasizes the use of purposive flows of knowledge across organizational boundaries. Firms adopt an open innovation strategy to improve their innovation performance by bringing knowledge into the organization (inbound open innovation) to accelerate internal innovation, or by transferring knowledge outside (outbound open innovation) to expand the markets for external use of innovation. Reviewing open innovation research reveals the following. First, the majority of existing studies have focused on inbound open innovation and less on outbound open innovation. Second, limited research has considered the possible interaction between both and how this interaction may impact the firm's innovation performance. Third, scholars have focused mainly on the impact of open innovation strategy on product innovation and less on process innovation. Therefore, our knowledge of the relationship between firms' inbound and outbound open innovation, and how these two impact process innovation, is still limited. This study focuses on the firm's external technology acquisition (ETA), external technology exploitation (ETE), and process innovation performance. ETA represents inbound openness, in which firms rely on the acquisition and absorption of external technologies to complement their technology portfolios. ETE, on the other hand, refers to commercializing technology assets exclusively or in addition to their internal application. This study hypothesized that both ETA and ETE have a positive relationship with process innovation performance and that ETE fully mediates the relationship between ETA and process innovation performance, i.e., ETA has a positive impact on ETE and, in turn, ETE has a positive impact on process innovation performance. This study empirically explored these hypotheses in software development firms in Thailand. These firms were randomly selected from a list of software firms registered with the Department of Business Development, Ministry of Commerce of Thailand. Questionnaires were sent to 1689 firms; after follow-ups and periodic reminders, we obtained 329 (19.48%) completed usable questionnaires. Structural equation modeling (SEM) was used to analyze the data. The analysis of the 329 firms provides support for our three hypotheses: first, the firm's ETA has a positive impact on its process innovation performance; second, the firm's ETA has a positive impact on its ETE; third, the firm's ETE fully mediates the relationship between the firm's ETA and its process innovation performance. This study fills a gap in the open innovation literature by examining the relationship between inbound (ETA) and outbound (ETE) open innovation and suggests that, in order to benefit from the promises of openness, firms must engage in both. The study goes one step further by explaining the mechanism through which ETA influences process innovation performance.
Keywords: process innovation performance, external technology acquisition, external technology exploitation, open innovation
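As a simplified illustration of the mediation logic (not the SEM actually estimated), the sketch below runs two regressions on simulated ETA/ETE/performance scores; a near-zero direct path alongside a sizeable indirect path is the pattern consistent with full mediation.

```python
import numpy as np
import statsmodels.api as sm

# Regression-based mediation check on simulated construct scores (ETA -> ETE -> PIP).
rng = np.random.default_rng(3)
n = 329
eta = rng.normal(size=n)                              # external technology acquisition
ete = 0.6 * eta + rng.normal(scale=0.8, size=n)       # external technology exploitation
pip = 0.7 * ete + rng.normal(scale=0.8, size=n)       # process innovation performance

path_a = sm.OLS(ete, sm.add_constant(eta)).fit()                            # ETA -> ETE
path_bc = sm.OLS(pip, sm.add_constant(np.column_stack([eta, ete]))).fit()   # ETA + ETE -> PIP

a = path_a.params[1]
c_prime, b = path_bc.params[1], path_bc.params[2]
print(f"a (ETA->ETE)         = {a:.2f}")
print(f"b (ETE->PIP)         = {b:.2f}")
print(f"c' (direct ETA->PIP) = {c_prime:.2f}  (near zero suggests full mediation)")
print(f"indirect effect a*b  = {a*b:.2f}")
```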
Procedia PDF Downloads 203555 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the considerable potential for reducing the necessary quality control effort can be exploited via the data-based prediction of product quality and states. However, the use of machine learning applications in series production is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. Training prediction models, however, requires the highest possible generalisability, which this data situation makes all the more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science projects. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression versus classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies. Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
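One way to frame this business-understanding question is to train a leakage-flow regressor and a direct pass/fail classifier on the same data and compare the resulting inspection decisions. The sketch below uses synthetic data, gradient boosting for both routes, and a hypothetical leakage limit; none of these reflect the Bosch Rexroth data or the exact models of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 6))                                   # stand-in process features
leakage = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)  # low-variance target
limit = np.quantile(leakage, 0.9)                             # hypothetical pass/fail limit
ok = (leakage <= limit).astype(int)                           # inspection decision label

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, leakage, ok, test_size=0.3, random_state=0)

reg = GradientBoostingRegressor().fit(X_tr, y_tr)
clf = GradientBoostingClassifier().fit(X_tr, c_tr)

# Regression route: predict the leakage flow, then threshold it into a decision
dec_from_reg = (reg.predict(X_te) <= limit).astype(int)
print("leakage regression MAE:", round(mean_absolute_error(y_te, reg.predict(X_te)), 3))
print("decision accuracy via regression :", round(accuracy_score(c_te, dec_from_reg), 3))
print("decision accuracy via classification:", round(accuracy_score(c_te, clf.predict(X_te)), 3))
```

Comparing the two accuracies on held-out data is one concrete criterion for deciding, already in the business-understanding phase, whether the prediction task should be posed as regression or classification.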
Procedia PDF Downloads 144554 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji presented in isolation, and b) interpret the meaning of the same 15 emoji when placed in a context-defining posting on Twitter. Their contextual responses were analysed for deviation both from their own responses to the emoji in isolation and from the originally intended meaning ascribed to each emoji. The analysis showed that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to the degree of exposure each has had. For example, the youngest category (aged < 20) was the least accurate at identifying emoji in isolation (~55%), and its proclivity to change responses with respect to context was also the lowest (~31%). An analysis of their individual responses, however, suggests that these first-borns of social media have reached a point where emoji no longer evoke their most literal meanings; the meaning and implication of these emoji have evolved to their context-derived meanings, even when the emoji are presented in isolation. These trends carry forward meaningfully for the other four groups as well. For the oldest category (aged > 35), however, the trends indicated lower accuracy and, consequently, a greater tendency to change responses. Viewed as a continuum, the responses indicate that emoji are slowly and steadily evolving from pictograms into ideograms; that is, they no longer express a one-to-one relation between a single form and a single meaning but communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication runs parallel to, and is contingent on, the simultaneous evolution of the platforms that carry it. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance of language adapting to the demands of the digital world. That it has no spoken component, no ostensible grammar, and lacks standardization of use and meaning may, as some suggest, seem like impediments to qualifying it as the 'language' of the digital world; however, such a verdict remains a function of time, and time alone. Keywords: communication, emoji, language, Twitter
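The two per-group quantities reported here, accuracy when emoji are shown in isolation and the share of responses that change once Twitter context is added, can be tabulated with a few lines of pandas. The coded responses below are placeholders, not the study's data, and the column names are assumptions.

```python
import pandas as pd

# Hypothetical coded survey responses: one row per participant-emoji pair
df = pd.DataFrame({
    "age_group":            ["<20", "<20", "20-25", ">35", ">35", ">35"],
    "correct_in_isolation": [0, 1, 1, 0, 0, 1],   # matched the intended meaning out of context
    "changed_in_context":   [0, 0, 1, 1, 1, 0],   # response differed once the tweet was shown
})

summary = df.groupby("age_group").agg(
    isolation_accuracy=("correct_in_isolation", "mean"),
    change_rate=("changed_in_context", "mean"),
)
print(summary)
```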
Procedia PDF Downloads 95553 Understanding the Relationship between Community and the Preservation of Cultural Landscape - Focusing on Organically Evolved Landscapes
Authors: Adhithy Menon E., Biju C. A.
Abstract:
The concept of preserving heritage monuments was first introduced to the public in the 1960s. In the 1990s, the concept of cultural landscapes gained importance, emphasizing the role of culture and heritage in the context of the landscape. This paper is primarily concerned with the second category of cultural landscapes, organically evolving landscapes, as they represent a complex network of tangible and intangible elements and the environment, together with the connections these share with the communities in which they are situated. The United Nations Educational, Scientific, and Cultural Organization has identified 39 cultural sites as being in danger, including the Iranian city of Bam and the historic city of Zabid in Yemen. To ensure their protection in the future, it is necessary to analyse in detail the factors contributing to this degradation. An analysis of selected cultural landscapes from around the world is therefore conducted to determine which parameters cause their degradation. The paper pursues the following objectives: understanding cultural landscapes and their importance for development; examining the criteria for identifying cultural landscapes, their various classifications, and the agencies that focus on their protection; identifying and analysing the parameters contributing to the deterioration of cultural landscapes on the basis of literature and case studies (the cultural landscapes of Sintra, Rio de Janeiro, and Varanasi); and, as a final step, developing strategies to enhance deteriorating cultural landscapes based on these parameters. The major finding of the study is the impact of community on the parameters derived: integrity (natural factors, natural disasters, demolition of structures, deterioration of materials), authenticity (living elements, sense of place, building techniques, religious context, artistic expression), public participation (revenue, dependence on locale), awareness (demolition of structures, resource management), disaster management, environmental impact, and maintenance of the cultural landscape (linkages with other sites, dependence on locale, revenue, resource management). The parameters of authenticity, public participation, awareness, and maintenance of the cultural landscape are directly related to the community in which the cultural landscape is located. Therefore, by focusing on the community and addressing the parameters identified, the deterioration curve of cultural landscapes can be altered. Keywords: community, cultural landscapes, heritage, organically evolved, public participation
Procedia PDF Downloads 87552 Isolation, Characterization, and Antibacterial Evaluation of Antimicrobial Peptides and Derivatives from Fly Larvae Sarconesiopsis magellanica (Diptera: Calliphoridae)
Authors: A. Díaz-Roa, P. I. Silva Junior, F. J. Bello
Abstract:
Sarconesiopsis magellanica (Diptera: Calliphoridae) is a medically important necrophagous fly used for establishing the post-mortem interval. Dipteran maggots release diverse proteins and peptides in their larval excretion and secretion (ES) products, which play a key role in digestion. The most important mechanism for combating infection in larval therapy depends on larval ES. These larvae are protected against infection by a diverse spectrum of antimicrobial peptides (AMPs), one of which, lucifensin, is already known. Special interest in these peptides has also arisen regarding their role in wound healing, since they degrade necrotic tissue and kill different bacteria during larval therapy. The action of larvae on wounds occurs through three mechanisms: removal of necrotic tissue, stimulation of granulation tissue, and the antibacterial action of larval ES. Components of the ES include calcium, urea, allantoin, and ammonium bicarbonate, and the ES reduces the viability of Gram-positive and Gram-negative bacteria. Lucilia sericata fly larvae have been the most widely used; however, new species need to be evaluated that could potentially be as effective as, or more effective than, this fly. This study was thus aimed at identifying and characterizing, for the first time, S. magellanica AMPs contained in ES products and comparing them with those of the commonly used fly L. sericata. The ES products were obtained from third-instar larvae taken from a previously established colony. In a first step, ES fractions were separated on Sep-Pak C18 disposable columns. The material obtained was then fractionated by RP-HPLC using a Júpiter C18 semi-preparative column. The products were lyophilized, and their antimicrobial activity was characterized by incubation with different bacterial strains. The first chromatographic analysis of ES from L. sericata gave six fractions with antimicrobial activity against the Gram-positive bacterium Micrococcus luteus and three fractions with activity against the Gram-negative bacterium Pseudomonas aeruginosa, while the analysis of ES from S. magellanica gave one fraction active against M. luteus and four against P. aeruginosa. One of these fractions may correspond to the peptide already known from L. sericata. These results provide a first basis for further experiments aimed at validating the use of S. magellanica in larval therapy. Whether new molecules are present still needs to be determined by mass spectrometry and de novo sequencing; further studies are necessary to identify and characterize them and to better understand their function. Keywords: antimicrobial peptides, larval therapy, Lucilia sericata, Sarconesiopsis magellanica
Procedia PDF Downloads 367551 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This report summarizes a simulation-based approach to estimating the current interruption behavior of a fuse element used in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation covers the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, heat transfer between the metallic strip and the adjacent materials leads to melting and evaporation of the filler and housing before the aluminum strip is evaporated and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation concerns the arc itself and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To model this arc precisely, one further step was necessary: the derivation of the transport coefficients of the plasma in ablated urethane. The results indicate that current interruption will not, in this case, be achieved within the first tens of milliseconds. In a further study considering two series elements, the arc was interrupted within a few milliseconds. A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. Since two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will again be carried by a single arc. This process may be repeated several times if the generated voltage is very large. The ultimate result is that the current interruption may be delayed. Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses
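While the paper relies on a full 3-D FCBI fluid-structure-thermal model in COMSOL, the first phase described above, Joule heating of a strip up to melting, can be illustrated with a lumped zero-dimensional estimate of the pre-arcing time. Every material value, geometry, and the current level below are assumptions chosen for illustration, not the parameters of the simulated fuse.

```python
import numpy as np

# Lumped (0-D) pre-arcing estimate for one aluminium fuse strip.
rho20   = 2.65e-8        # resistivity of Al at 20 C [ohm m]
alpha   = 4.3e-3         # temperature coefficient of resistivity [1/K]
density = 2700.0         # [kg/m^3]
cp      = 900.0          # specific heat [J/(kg K)]
T_melt  = 660.0          # melting point of Al [C]
L, w, t = 0.03, 0.004, 0.0002            # assumed strip length, width, thickness [m]
area, vol = w * t, L * w * t
h, A_surf = 50.0, 2 * L * (w + t)        # assumed film coefficient and cooling surface
I = 400.0                                # assumed current through one of five parallel strips [A]

T, time, dt = 20.0, 0.0, 1e-4
while T < T_melt and time < 60.0:
    R = rho20 * (1 + alpha * (T - 20.0)) * L / area          # temperature-dependent resistance
    dT = (I**2 * R - h * A_surf * (T - 20.0)) / (density * vol * cp) * dt
    T += dT
    time += dt
print(f"strip reaches {T:.0f} C after ~{time*1e3:.1f} ms (pre-arcing estimate)")
```

Such a lumped estimate ignores the axial temperature profile, the filler, and the arc phase entirely; it only bounds the time scale on which the coupled 3-D simulation takes over.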
Procedia PDF Downloads 66550 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell
Authors: Deborah Eric, Abbas Ahmad Khan
Abstract:
Lead halide perovskite quantum dots have attracted immense scientific and technological interest for photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx-based quantum dots with a view to their use in intermediate band solar cells (IBSC). Such materials exhibit optical and electrical properties distinct from those of their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyzing the electronic properties of quantum dots. The quantum dot layer optimizes the position and bandwidth of the IB, which lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with spherical, cubic, and conical geometries has been embedded in the CsPbBr3 matrix. The bound-energy wavefunctions give rise to a miniband, which results in the formation of the IB; if there is more than one miniband, more than one IB may form. Optimization of the QD size yields more IBs in the forbidden region. A one-band time-independent Schrödinger equation in the effective mass approximation with a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation to calculate the eigenenergies, and the quasi-bound states are obtained from an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring CsPbI3 barriers. Electronic states are computed from the Schrödinger equation in the effective mass approximation, considering the quantum dot and wetting layer assembly. The results show that varying the quantum dot size affects the energy pinning of the QD: changes in the ground-, first-, and second-state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground-state energy at small radius. Increasing the wetting layer thickness yields energy signatures similar to those of bulk material for each QD size. Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation
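The one-band effective-mass Schrödinger equation with BenDaniel-Duke matching can be solved numerically by finite differences. The sketch below is a simplified 1-D stand-in for the 3-D eigenvalue study described above; the effective masses, band offset, and dot width are rough assumptions, not the fitted CsMAPbI3-xBrx/CsPbBr3 parameters of the paper.

```python
import numpy as np
from scipy.linalg import eigh

# 1-D effective-mass Schrödinger equation with BenDaniel-Duke matching:
#   -(hbar^2/2) d/dz [ (1/m*(z)) dpsi/dz ] + V(z) psi = E psi
hbar = 1.054571817e-34
m0   = 9.1093837015e-31
eV   = 1.602176634e-19

N, Lz = 800, 40e-9
z  = np.linspace(-Lz / 2, Lz / 2, N)
dz = z[1] - z[0]
dot = np.abs(z) < 4e-9                        # assumed 8 nm wide dot region
V  = np.where(dot, 0.0, 0.40 * eV)            # assumed conduction-band offset
m  = np.where(dot, 0.20 * m0, 0.25 * m0)      # assumed effective masses in dot / barrier

# Symmetric tridiagonal Hamiltonian with the mass evaluated at the half-points,
# which enforces continuity of psi and (1/m*) dpsi/dz (BenDaniel-Duke)
m_half = 0.5 * (m[:-1] + m[1:])
off  = -hbar**2 / (2 * m_half * dz**2)
diag = np.empty(N)
diag[1:-1] = -(off[:-1] + off[1:])
diag[0], diag[-1] = -2 * off[0], -2 * off[-1]  # hard-wall boundaries of the box
H = np.diag(diag + V) + np.diag(off, 1) + np.diag(off, -1)

E, psi = eigh(H)
bound = E[E < 0.40 * eV][:3] / eV              # confined levels below the barrier
print("lowest confined levels [eV]:", np.round(bound, 3))
```

The spacing of these confined levels is what controls where a miniband, and hence the intermediate band, would sit once many such dots are coupled.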
Procedia PDF Downloads 165549 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring
Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield
Abstract:
Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds, produced almost exclusively by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. Despite this ecological importance, however, degradation as a part of the fate of phenazines has so far received only very limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in a minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was obtained from soil and designated Rhodanobacter sp. PCA2 based on full-length 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant increase in cell yield from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was also capable of degrading other phenazines, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly, from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. As analyzed by LC-MS/MS, decarboxylation of the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed to aniline and catechol. Genomic and proteomic analyses were also carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene was activated and played an important role. Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2
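The reported degradation and growth figures can be reproduced from a time course with a short calculation. The sampling times and intermediate concentrations below are hypothetical; only the end points (836 µM consumed, 1.92 × 10⁵ to 3.11 × 10⁶ cells mL⁻¹) and the rate of 17.4 µM h⁻¹ come from the abstract, and the 48 h incubation time is inferred rather than stated.

```python
import numpy as np

# Hypothetical HPLC time course for PCA; only the fitted rate is reported in the abstract
t_h = np.array([0, 12, 24, 36, 48])                  # incubation time [h] (assumed)
pca = np.array([836, 632, 418, 209, 5], float)       # residual PCA [uM] (assumed points)

slope, intercept = np.polyfit(t_h, pca, 1)
print(f"degradation rate ~ {-slope:.1f} uM/h")       # close to the reported 17.4 uM/h

# Apparent specific growth rate from the reported cell counts (48 h assumed)
n0, n1, hours = 1.92e5, 3.11e6, 48
mu = np.log(n1 / n0) / hours
print(f"apparent specific growth rate ~ {mu:.3f} 1/h")
```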
Procedia PDF Downloads 223548 Attributable Mortality of Nosocomial Infection: A Nested Case Control Study in Tunisia
Authors: S. Ben Fredj, H. Ghali, M. Ben Rejeb, S. Layouni, S. Khefacha, L. Dhidah, H. Said
Abstract:
Background: The Intensive Care Unit (ICU) provides continuous care and uses a high level of treatment technology. Although developed-country hospitals allocate only 5–10% of beds to critical care areas, approximately 20% of nosocomial infections (NI) occur among patients treated in ICUs, and in developing countries the situation is even less well documented. The aim of our study was to assess mortality rates in ICUs and to determine their predictive factors. Methods: We carried out a nested case-control study in a 630-bed public tertiary care hospital in Eastern Tunisia. We included all patients hospitalized for more than two days in the surgical or medical ICU during the entire surveillance period. Cases were patients who died before ICU discharge, whereas controls were patients who survived to discharge. NIs were diagnosed according to the definitions of the 'Comité Technique des Infections Nosocomiales et les Infections Liées aux Soins' (CTINLIS, France). Data collection was based on the protocol of Rea-RAISIN 2009 of the National Institute for Health Watch (InVS, France). Results: Overall, 301 patients were enrolled from the medical and surgical ICUs. The mean age was 44.8 ± 21.3 years. The crude ICU mortality rate was 20.6% (62/301). It was 35.8% for patients who acquired at least one NI during their ICU stay and 16.2% for those without any NI, yielding an overall crude excess mortality rate of 19.6% (OR = 2.9, 95% CI 1.6 to 5.3). The population-attributable fraction due to ICU-NI among patients who died before ICU discharge was 23.46% (95% CI, 13.43%–29.04%). Overall, 62 case patients were compared with 239 control patients in the final analysis. Case patients and control patients differed by age (p = 0.003), simplified acute physiology score II (p < 10⁻³), NI (p < 10⁻³), nosocomial pneumonia (p = 0.008), infection upon admission (p = 0.002), immunosuppression (p = 0.006), days of intubation (p < 10⁻³), tracheostomy (p = 0.004), days with urinary catheterization (p < 10⁻³), days with a CVC (p = 0.03), and length of stay in the ICU (p = 0.003). Multivariate analysis identified the following factors: age older than 65 years (OR 5.78, 95% CI 2.03–16.05, p = 0.001), duration of intubation 1–10 days (OR 6.82, 95% CI 1.90–24.45, p = 0.003), duration of intubation > 10 days (OR 11.11, 95% CI 2.85–43.28, p = 0.001), duration of CVC 1–7 days (OR 6.85, 95% CI 1.71–27.45, p = 0.007), and duration of CVC > 7 days (OR 5.55, 95% CI 1.70–18.04, p = 0.004). Conclusion: While surveillance provides important baseline data, successful trials with more active intervention protocols adopting a multimodal approach to the prevention of nosocomial infection prompted us to consider the feasibility of a similar trial in our context. The implementation of an efficient infection control strategy is therefore a crucial step towards improving the quality of care. Keywords: intensive care unit, mortality, nosocomial infection, risk factors
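The crude excess mortality, odds ratio, and attributable fraction can be recomputed from a 2×2 table. The cell counts below are reconstructed so that they reproduce the reported 35.8% and 16.2% mortality among 301 patients; they are an assumption rather than the published table, and the attributable-fraction formula used here is an approximation, so its result differs slightly from the reported 23.46%.

```python
import numpy as np

# Reconstructed 2x2 table (assumed counts): rows = NI status, columns = died / survived
died_ni, surv_ni = 24, 43        # patients with at least one nosocomial infection
died_no, surv_no = 38, 196       # patients without nosocomial infection

odds_ratio = (died_ni * surv_no) / (surv_ni * died_no)
se_log_or = np.sqrt(1/died_ni + 1/surv_ni + 1/died_no + 1/surv_no)   # Wald standard error
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

risk_ni = died_ni / (died_ni + surv_ni)
risk_no = died_no / (died_no + surv_no)
excess = risk_ni - risk_no                                            # crude excess mortality
# Fraction of ICU deaths attributable to NI (case-based formula, OR used in place of RR)
paf = (died_ni / (died_ni + died_no)) * (odds_ratio - 1) / odds_ratio

print(f"OR = {odds_ratio:.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
print(f"excess mortality = {excess:.1%}, attributable fraction ~ {paf:.1%}")
```

With these counts the odds ratio and its confidence interval match the reported 2.9 (1.6 to 5.3) and the excess mortality matches the reported 19.6%, which is a useful consistency check on the abstract's figures.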
Procedia PDF Downloads 407547 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By adopting a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies, the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step is data preparation, which includes handling outliers and missing values so that the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared in combination with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy with the fewest errors, the lowest cost, the greatest productivity, or the maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows a marked improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction. Keywords: deep learning model, dengue fever, prediction, optimization
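A compact sketch of the third and fourth phases, comparing Huber, SVR, and GBR models and wrapping a small genetic algorithm around feature selection, is given below. The synthetic data, population size, and GA settings are assumptions; Harmony Search is omitted, and only RMSE of the three reported metrics is shown.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for weekly environmental features and dengue case counts
X, y = make_regression(n_samples=400, n_features=12, n_informative=5, noise=10, random_state=0)

def rmse(model, mask):
    """Cross-validated RMSE of a model restricted to the selected features."""
    scores = cross_val_score(model, X[:, mask], y, cv=5, scoring="neg_root_mean_squared_error")
    return -scores.mean()

models = {"Huber": HuberRegressor(), "SVR": SVR(), "GBR": GradientBoostingRegressor(random_state=0)}
full = np.ones(X.shape[1], dtype=bool)
for name, model in models.items():
    print(f"{name}: RMSE = {rmse(model, full):.2f}")

# Minimal genetic-algorithm feature selection for the Huber model
rng = np.random.default_rng(0)
pop = rng.random((12, X.shape[1])) < 0.5                     # 12 random boolean feature masks
safe = lambda m: m | (~m.any())                              # fall back to all features if mask is empty
for gen in range(10):
    fitness = np.array([rmse(HuberRegressor(), safe(m)) for m in pop])
    parents = pop[np.argsort(fitness)[:6]]                   # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
        child ^= rng.random(X.shape[1]) < 0.1                # mutation: flip ~10% of bits
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([rmse(HuberRegressor(), safe(m)) for m in pop])]
print("selected features:", np.flatnonzero(best), "RMSE =", round(rmse(HuberRegressor(), safe(best)), 2))
```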
Procedia PDF Downloads 65