Search results for: deep layer
369 The Debureaucratization Strategy for the Portuguese Health Service through Effective Communication
Authors: Fernando Araujo, Sandra Cardoso, Fátima Fonseca, Sandra Cavaca
Abstract:
A debureaucratization strategy for the Portuguese Health Service was adopted by the Executive Board of the SNS, in close articulation with the Shared Services of the Ministry of Health. Two of its main dimensions focused on sick leaves (SL), which were turning primary health care (PHC) units into administrative institutions and limiting patients' access. The self-declaration of illness (SDI) project, operated through the National Health Service Contact Centre (SNS24), began on May 1, 2023, and has already resulted in the issuance of more than 300,000 SDIs without the need to allocate resources from the National Health Service (NHS). This political decision allows each citizen, up to twice per year and for up to three days each time, to report their own illness in a dematerialized way and on their own responsibility, thereby justifying their absence from work, although under Portuguese law no salary is paid for these first three days. With this digital approach, it is no longer necessary to visit PHC, and occupy its consultation time, solely to obtain an SL. Through this measure, bureaucracy has been reduced and the system has been refocused on users, improving the lives of citizens and reducing the administrative burden on PHC, which now has more consultation time for the users who need it. The second initiative, which began on March 1, 2024, allows SLs to be issued in the emergency departments (EDs) of public hospitals and in health institutions of the social and private sectors. This project is intended to ensure that a user who has suffered an acute urgent illness and been observed in the ED of a public hospital, or in a private or social entity, no longer needs to go to PHC only to apply for the respective SL. Since March 1, 54,453 SLs have been issued: 242 in private or social sector institutions, 6,918 in public hospitals (of which 134 were in EDs), and 47,292 in PHC.
This approach has proven technically robust, allows immediate resolution of problems, and differentiates the work of doctors. However, it remains important to safeguard the proper functioning of the EDs by preventing non-urgent users from going there only to obtain an SL. Thus, in order to make better use of existing resources, this extension of SL issuance was operationalized in a balanced way, allowing SLs to be issued in hospital EDs only to critically ill patients or to patients referred by INEM, SNS24, or PHC. In both cases, an intense public campaign was implemented to explain how the measures work and their benefits for patients. In satisfaction surveys, more than 95% of patients and doctors were satisfied with the solutions and asked for extensions to other areas. The administrative simplification agenda of the NHS continues its effective development. The key factors for the success of this debureaucratization agenda are effective communication and the ability to reach patients and health professionals in order to increase health literacy and the correct use of the NHS.
Keywords: debureaucratization strategy, self-declaration of illness, sick leaves, SNS24
Procedia PDF Downloads 71
368 The Second Column of Origen’s Hexapla and the Transcription of BGDKPT Consonants: A Confrontation with Transliterated Hebrew Names in Greek Documents
Authors: Isabella Maurizio
Abstract:
This research analyses the pronunciation of the Hebrew consonants 'bgdkpt' in second- to third-century C.E. Palestine through the confrontation of two kinds of data: the fragments of the transliteration of the Old Testament into the Greek alphabet from the second column of Origen's synopsis, called the Hexapla, and Hebrew names transliterated in Greek documents, especially epigraphs. Origen is an important author not only for his theological and exegetic works: the Hexapla, a six-column synopsis intended as a critical edition of the Septuagint, plays a relevant role in attempts to reconstruct the pronunciation of Hebrew before Masoretic punctuation. For this reason, it is important to begin by analyzing the column in order to study its phonetic and linguistic phenomena. Among the most problematic data is the evidence for the bgdkpt consonants, which are always represented by Greek aspirated graphemes. This transcription raises the question of whether their pronunciation was exclusively spirant, and consequently whether the double pronunciation, that is, the stop/spirant contrast, was introduced by the Masoretes. However, the phonetic and linguistic examination of the column alone is not enough to establish the real pronunciation of the language: this paper is significant because it carries out a confrontation between the second column's transliteration and Hebrew names found in Greek documents, mainly epigraphic ones. Palestine in the second and third centuries was a bilingual country: Greek and Aramaic coexisted, the former as the official language, the latter as the principal means of communication between people. For this reason, Hebrew names are often found in Greek documents of the same geographical area: a close examination of the transliteration of bgdkpt can help to clarify the real pronunciation of these consonants, or at least reveal a phonetic tendency.
As a consequence, the research considers both documents contemporary with Origen and earlier ones: the former attest a specific stage of pronunciation, while the latter reflect the evolution of the phonemes. Alexandrian documents are also examined: Origen came from Alexandria, and the influence of the Greek spoken in his native region must be considered. The epigraphs have another implication: because of their popular origin, they are entirely free from the morphological criteria probably applied by Origen in his column. Thus, a confrontation between the hexaplaric transliteration and Hebrew names is indispensable in Hexapla studies: first, it can provide a second clue to a pronunciation already noted in the column; second, because of the specific nature of these documents, it is more likely to be genuine, reflecting the daily use of the language. The examination of the data shows a general tendency to employ the aspirated graphemes for the transliteration of the bgdkpt consonants. This probably means that they were closer to the Greek aspirated consonants than to the plosive ones. The exceptions are linked to the particular status of a name, i.e., its history and origin. In this way, the paper also contributes to onomastic studies: the research may help to verify the diffusion and treatment of Jewish names in the Hellenized world and in the koine language.
Keywords: bgdkpt consonants, Greek epigraphs, Jewish names, Origen's Hexapla
367 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution derived from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image-classification models into a Shark Detector bundle, developed to recognize and classify sharks in videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested how genus and species prediction correctness varied with training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89% and classified located subjects to the species level with 69% accuracy (n = 8 species). It sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate the archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera, and the average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring increasingly dominates methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
Keywords: classification, data mining, Instagram, remote monitoring, sharks
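The two-stage structure described above, an object detector feeding a genus classifier and per-genus species classifiers, can be sketched as plain-Python plumbing with the trained CNNs stubbed out. All names, labels, and the single-shark-per-image assumption below are hypothetical, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SharkDetectorBundle:
    # image -> list of cropped shark subjects
    detect: Callable[[object], List[object]]
    # crop -> genus label
    classify_genus: Callable[[object], str]
    # genus -> (crop -> species label); unseen genera fall back to "unknown"
    species_heads: dict

    def run(self, image) -> List[Tuple[str, str]]:
        results = []
        for crop in self.detect(image):
            genus = self.classify_genus(crop)
            species = self.species_heads.get(genus, lambda c: "unknown")(crop)
            results.append((genus, species))
        return results

# Stub callables standing in for trained CNN models.
detector = SharkDetectorBundle(
    detect=lambda img: [img],                  # pretend one shark per image
    classify_genus=lambda crop: "Carcharodon",
    species_heads={"Carcharodon": lambda crop: "carcharias"},
)
print(detector.run("frame_001.jpg"))
# [('Carcharodon', 'carcharias')]
```

In a real pipeline each callable would wrap a network loaded from disk; the value of the structure is that detection, genus, and species stages can be retrained or swapped independently.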
366 Effects of Stokes Shift and Purcell Enhancement in Fluorescence Assisted Radiative Cooling
Authors: Xue Ma, Yang Fu, Dangyuan Lei
Abstract:
Passive daytime radiative cooling is an emerging technology that has attracted worldwide attention in recent years due to its huge potential for cooling buildings without the use of electricity. Various coating materials with different optical properties have been developed to improve daytime radiative cooling performance. However, commercial cooling coatings comprising functional fillers with optical bandgaps within the solar spectral range suffer from severe intrinsic absorption, limiting their cooling performance. Fortunately, it has recently been demonstrated that introducing fluorescent materials into polymeric coatings can convert the absorbed sunlight into fluorescent emission and hence increase the effective solar reflectance and cooling performance. In this paper, we experimentally investigate the key factors in fluorescence-assisted radiative cooling with TiO2-based white coatings. The surrounding TiO2 nanoparticles, which enable spatial and temporal light confinement through multiple Mie scattering, lead to Purcell enhancement of the phosphors in the coating. The photoluminescence lifetimes of two phosphors (BaMgAl10O17:Eu2+ and (Sr,Ba)SiO4:Eu2+) exhibit significant reductions of ~61% and ~23%, indicating Purcell factors of 2.6 and 1.3, respectively. Moreover, phosphors with smaller Stokes shifts are preferred, to further diminish solar absorption. A field test of fluorescent cooling coatings demonstrated an improvement of ~4% in solar reflectance for the BaMgAl10O17:Eu2+-based fluorescent cooling coating. However, maximizing solar reflectance through multiple Mie scattering by a broad size distribution of fillers produces a white appearance, which is visually monotonous and aesthetically unappealing. Besides, most colored pigments absorb visible light significantly and convert it to non-radiative thermal energy, offsetting the cooling effect. Therefore, current colored cooling coatings face a compromise between color saturation and cooling effect.
To address this problem, we introduced colored fluorescent materials into a white top layer based on SiO2 microspheres, covering a white TiO2-based cooling coating. Compared with colored pigments, fluorescent materials can re-emit the absorbed light, reducing the solar absorption introduced by coloration. Our work investigated the scattering properties of SiO2 dielectric spheres with different diameters and discussed in detail their impact on the PL properties of the phosphors, paving the way for colored fluorescence-assisted cooling coatings toward application and industrialization.
Keywords: solar reflection, infrared emissivity, Mie scattering, photoluminescent emission, radiative cooling
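The quoted Purcell factors follow directly from the measured lifetime reductions if, as a common simplification, the factor is taken as the ratio of the unperturbed lifetime to the shortened lifetime in the coating:

```python
# Purcell enhancement estimated from the fractional photoluminescence
# lifetime reduction: F_p ~ tau_free / tau_coating = 1 / (1 - reduction).
def purcell_factor(lifetime_reduction: float) -> float:
    """lifetime_reduction: fractional drop in PL lifetime (0 < r < 1)."""
    return 1.0 / (1.0 - lifetime_reduction)

for phosphor, reduction in [("BaMgAl10O17:Eu2+", 0.61), ("(Sr,Ba)SiO4:Eu2+", 0.23)]:
    print(phosphor, round(purcell_factor(reduction), 1))
# BaMgAl10O17:Eu2+ 2.6
# (Sr,Ba)SiO4:Eu2+ 1.3
```

The 61% and 23% reductions reproduce the factors of 2.6 and 1.3 cited in the abstract; a full treatment would separate radiative from non-radiative decay channels.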
365 Mechanisms of Antiulcerogenic Activity of Costus speciosus Rhizome Extract in Ethanol-Induced Gastric Mucosal Injury in Rats
Authors: Somayeh Fani, Mahmood Ameen Abdulla
Abstract:
Costus speciosus is an important Malaysian medicinal plant traditionally used in the treatment of many ailments. The present investigation was designed to elucidate the preventive effects of an ethanolic extract of C. speciosus rhizome against absolute-ethanol-induced gastric mucosal injury in Sprague-Dawley rats. Five groups of rats were orally pre-treated with the vehicle, carboxymethylcellulose (CMC), as the normal control group (Group 1) and the ulcer control group (Group 2), with omeprazole 20 mg/kg as the reference group (Group 3), and with 250 and 500 mg/kg of C. speciosus extract as the experimental groups (Groups 4 and 5), respectively. One hour later, CMC was given orally to Group 1 and absolute ethanol to Groups 2-5 to generate gastric mucosal injury. After an additional hour, the rats were sacrificed. Grossly, the ulcer control group exhibited severe gastric mucosal hemorrhagic injury and an increased ulcer area, whereas the groups pre-treated with omeprazole or the rhizome extract exhibited significantly reduced gastric mucosal injury. A significant increase in the pH and mucus content of the gastric contents was observed in rats pre-treated with C. speciosus rhizome. Histologically, ulcer control rats demonstrated remarkable disruption of the gastric mucosa and increased edema and inflammatory cell infiltration of the submucosal layer compared to rats pre-treated with the rhizome extract. With periodic acid-Schiff staining for glycoproteins, rats pre-fed with C. speciosus displayed remarkably intense magenta staining of the glandular gastric mucosa compared with ulcer control rats. In immunostaining of the gastric epithelium, rats pre-treated with the rhizome extract showed up-regulation of HSP70 and down-regulation of Bax proteins compared to ulcer control animals. In gastric tissue homogenate, C. speciosus significantly increased the activities of superoxide dismutase (SOD) and catalase (CAT), increased the level of non-protein sulfhydryls (NP-SH), and decreased the level of lipid peroxidation after ethanol administration. An acute toxicity test did not show any signs of toxicity. The mechanisms underlying the gastroprotective property of C. speciosus thus involve antisecretory activity, an increase in gastric mucus glycoprotein, up-regulation of HSP70 and down-regulation of Bax proteins, reduction of lipid peroxidation, and increases in NP-SH levels and antioxidant enzyme activities in gastric homogenate.
Keywords: antioxidant, Costus speciosus, gastric ulcer, histology, omeprazole
364 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework
Authors: Ayat-Allah Bouramdane
Abstract:
Solar photovoltaic (PV) and concentrated solar power (CSP) systems do not burn fossil fuels and release no greenhouse gases while generating electricity; they could therefore meet the world's needs for low-carbon power generation. The power output of a solar PV module or CSP collector is proportional to the temperature and the amount of solar radiation received by its surface, so determining the most suitable locations for PV and CSP systems is crucial to maximizing their output. This study aims to provide a hands-on, plausible approach to the multi-criteria evaluation of site suitability for PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is used to specify the criteria and sub-criteria; to identify the unsuitable and the low-, moderate-, high-, and very-high-suitability areas for each GRI layer; to build the pairwise comparison matrix at each level of the hierarchy based on experts' knowledge; and to calculate the weights with AHP, producing a final suitability map for solar PV and CSP plants in Morocco, with a particular focus on the city of Dakhla. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude Dakhla, which is classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and we provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the food-energy-water nexus.
The adopted methodology and the resulting suitability map could be used by researchers or engineers to provide helpful information to decision-makers for the selection, design, and planning of future solar plants, especially in areas suffering from energy shortages such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
Keywords: analytic hierarchy process, concentrated solar power, Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability
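As a rough illustration of the AHP weighting step, the sketch below derives criterion weights from a pairwise comparison matrix by the geometric-mean approximation; the criteria and the Saaty-scale judgments are invented for illustration and are not those of the study:

```python
import math

# Hypothetical siting criteria and pairwise judgments (Saaty 1-9 scale):
# pairwise[i][j] states how much more important criterion i is than j.
criteria = ["solar irradiation", "temperature", "slope", "grid distance"]
pairwise = [
    [1,     3,     5,     7],
    [1 / 3, 1,     3,     5],
    [1 / 5, 1 / 3, 1,     3],
    [1 / 7, 1 / 5, 1 / 3, 1],
]

# Geometric mean of each row, normalized so the weights sum to 1.
geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

With these invented judgments, solar irradiation receives the largest weight, mirroring the study's finding that irradiation dominates the decision; a full AHP would also compute the consistency ratio of each matrix.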
363 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failures or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures across deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process for analyzing critical infrastructure system availability requirements against a candidate set of system architectures, producing a metric that assesses those architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined.
A reinforcement-learning agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during periods of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
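A toy version of the proposed evaluation makes the idea concrete: map system functions onto nodes (with replication), remove k random nodes to simulate an attack of a given intensity, and score the fraction of functions still hosted on at least one surviving node. The node names and placement below are invented; a real implementation would also model links, bandwidth, and latency as described:

```python
import random

# Hypothetical candidate architecture: each function is placed on one or
# more nodes across cloud/fog/edge tiers.
placement = {
    "ingest":  {"edge-1", "edge-2"},
    "compute": {"fog-1", "cloud-1"},
    "store":   {"cloud-1", "cloud-2"},
    "control": {"fog-1", "fog-2", "cloud-1"},
}
nodes = set().union(*placement.values())

def availability_after_attack(k: int, trials: int = 2000, seed: int = 0) -> float:
    """Mean fraction of functions surviving the loss of k random nodes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        dead = set(rng.sample(sorted(nodes), k))
        alive_fns = sum(1 for hosts in placement.values() if hosts - dead)
        total += alive_fns / len(placement)
    return total / trials

# Trace the degradation curve across attack intensities.
for k in range(len(nodes) + 1):
    print(f"{k} nodes lost -> availability {availability_after_attack(k):.2f}")
```

Integrating such a curve (or comparing curves point-wise) gives a single resilience score per candidate architecture, which is the role the proposed metric plays when the attacks are chosen adversarially by the learning agent rather than at random.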
362 Ethnobotanical Study, Phytochemical Screening, and Biological Activity of Culinary Spices Commonly Used in Omdurman, Sudan
Authors: Randa M. T. Mohamed
Abstract:
Spices have long been used as traditional kitchen ingredients for their seasoning, coloring, aromatic, and food-preservative properties. Spices are equally used for therapeutic purposes. The objective of this study was to survey and document the medicinal properties of spices commonly used in the Sudanese kitchen for different food preparations. Extracts from the reported spices were also screened for the presence of secondary metabolites and for antioxidant and beta-lactamase-inhibitory properties. The study was conducted in the Rekabbya quarter of Omdurman, Khartoum State, Sudan. Information was collected through semi-structured interviews; all 30 informants were women. Spices were purchased from the Attareen shop in Omdurman. Essential oils were extracted from the spices by hydrodistillation and ethanolic extracts by maceration. Phytochemical screening was performed by thin-layer chromatography (TLC), and the antioxidant capacity of the essential oils and ethanolic extracts was investigated by TLC bioautography. Beta-lactamase-inhibitory activity was assessed by the acidimetric test. The ethnobotanical survey showed that a total of 16 spices were used to treat 36 ailments belonging to 10 categories. The most frequently claimed medicinal uses were for diseases of the digestive system, treated with 14 spices, and of the respiratory system, treated with 8 spices. Gynecological problems were treated with 4 spices, dermatological diseases with 5 spices, and infections caused by tapeworms and other microbes causing dysentery with 3 spices. Four spices were used to treat bad breath, bleeding gums, and toothache. Headache, eye infections, cardiac stimulation, and epilepsy were each treated with one spice. Other health problems, namely fatigue, loss of appetite, and low breast-milk production, were treated with 1, 3, and 2 spices, respectively.
The majority of the spices (69%, 11/16) were imported from countries such as India, China, Indonesia, Ethiopia, Egypt, and Nigeria, while 31% (5/16) were cultivated in Sudan. The essential oils of all spices were rich in terpenes, while the ethanolic extracts contained variable classes of secondary metabolites. Both the essential oils and the ethanolic extracts of all spices exerted considerable antioxidant activity. Only one extract, that of Syzygium aromaticum, possessed beta-lactamase-inhibitory activity. In conclusion, this study could help to conserve information on the traditional medicinal uses of spices in Sudan. The results also demonstrate the potential of some of these spices to exert beneficial antimicrobial and antioxidant effects. Detailed phytochemical and biological assays of these spices are recommended.
Keywords: spices, ethnobotany, antioxidant, beta-lactamase inhibition
361 Empowering Youth Through Pesh Poultry: A Transformative Approach to Addressing Unemployment and Fostering Sustainable Livelihoods in Busia District, Uganda
Authors: Bisemiire Anthony
Abstract:
PESH Poultry is a business project proposed specifically to address unemployment and income-related problems affecting youths in Busia district. The project is intended to transform the lives of the youth economically, socially, and behaviorally, and to improve the domestic well-being of the community at large. PESH Poultry is a start-up poultry farm that will keep poultry birds, broilers and layers, for the production of quality, affordable poultry meat and eggs, respectively, and other poultry derivatives, targeting consumers in eastern Uganda, for example, hotels, restaurants, households, and bakeries. We intend to use a semi-intensive system of farming, in which water and some food are provided in a separate nighttime shelter for the birds; our location will be in Lumino, Busia district. The poultry project will be established and owned by Bisemiire Anthony, Nandera Patience, Naula Justine, Bwire Benjamin, and other investors. The farm will be managed and directed by Nandera Patience, who has five years of work experience and knowledge of business administration. We will sell poultry products, including eggs, chicken meat, feathers, and poultry manure, and we also offer consultancy services for poultry farming. Our eggs and chicken meat are hygienic, rich in protein, and of high quality. We produce, process, and package to meet Ugandan national standards and international standards. The business project shall comprise five (5) workers on the key management team, who will share roles and responsibilities across the identified business functions, such as marketing, finance, and other related poultry-farming activities. PESH Poultry seeks 30 million Ugandan shillings in long-term financing to cover start-up costs, equipment, building expenses, and working capital. Funding for the launch of the business will be provided primarily by equity from the investors.
The business will reach positive cash flow in its first year of operation, allowing for the expected repayment of its loan obligations. Revenue will top UGX 11,750,000, and net income will reach about UGX 115,950,000 in the first year of operation. The payback period for our project is 2 years and 3 months. The farm plans to start with 1,000 layer birds, 1,000 broiler birds, and 20 workers in the first year of operation.
Keywords: chicken, pullets, turkey, ducks
360 Reduced Tillage and Bio-stimulant Application Can Improve Soil Microbial Enzyme Activity in a Dryland Cropping System
Authors: Flackson Tshuma, James Bennett, Pieter Andreas Swanepoel, Johan Labuschagne, Stephan van der Westhuizen, Francis Rayns
Abstract:
Tillage and synthetic agrochemicals can be effective methods of seedbed preparation and pest control. Nonetheless, frequent and intensive tillage and excessive application of synthetic agrochemicals, such as herbicides and insecticides, can reduce soil microbial enzyme activity, and a decline in soil microbial enzyme activity can negatively affect nutrient cycling and crop productivity. In this study, the effects on soil microbial enzyme activity of four tillage treatments (continuous mouldboard plough; shallow tine-tillage to a depth of about 75 mm; no-tillage; and a tillage rotation involving shallow tine-tillage once every four years in rotation with three years of no-tillage) and two agrochemical regimes (standard: regular application of synthetic agrochemicals; reduced: fewer synthetic agrochemicals in combination with bio-stimulants) were investigated between 2018 and 2020 in a typical Mediterranean climate zone in South Africa. The four bio-stimulants applied contained Trichoderma asperellum, fulvic acid, silicic acid, and Nereocystis luetkeana extracts, respectively. The study was laid out as a complete randomised block design with four replicated blocks; each block had 14 plots, and each plot measured 50 m x 6 m. The study aimed to assess the combined impact of tillage practices and reduced synthetic agrochemical application on soil microbial enzyme activity in a dryland cropping system. It was hypothesised that applying bio-stimulants in combination with minimum soil disturbance would increase microbial enzyme activity more than applying either in isolation. Six soil cores were randomly and aseptically collected from the 0-150 mm layer of each plot for microbial enzyme activity analysis, from a field trial under a dryland crop rotation system in the Swartland region.
The activities of four microbial enzymes, β-glucosidase, acid phosphatase, alkaline phosphatase, and urease, were assessed. These enzymes are essential for the cycling of glucose, phosphorus, and nitrogen, respectively. Microbial enzyme activity generally increased as both tillage intensity and synthetic agrochemical application were reduced. The mouldboard plough led to the lowest (P<0.05) microbial enzyme activity relative to the reduced-tillage treatments, whereas the system with bio-stimulants (reduced synthetic agrochemicals) led to the highest (P<0.05) microbial enzyme activity relative to the standard systems. The application of bio-stimulants in combination with reduced tillage, particularly no-tillage, could therefore benefit enzyme activity in a dryland farming system.
Keywords: bio-stimulants, soil microbial enzymes, synthetic agrochemicals, tillage
359 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations
Authors: Till Gramberg
Abstract:
In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. 
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.
Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering
Procedia PDF Downloads 82
358 Association between G2677T/A MDR1 Polymorphism with the Clinical Response to Disease Modifying Anti-Rheumatic Drugs in Rheumatoid Arthritis
Authors: Alan Ruiz-Padilla, Brando Villalobos-Villalobos, Yeniley Ruiz-Noa, Claudia Mendoza-Macías, Claudia Palafox-Sánchez, Miguel Marín-Rosales, Álvaro Cruz, Rubén Rangel-Salazar
Abstract:
Introduction: In patients with rheumatoid arthritis (RA), resistance or poor response to disease-modifying antirheumatic drugs (DMARD) may reflect increased expression of P-glycoprotein (P-gp). P-gp may be important in mediating the efflux of DMARD from the cell. In addition, P-gp is involved in the transport of the cytokines IL-1, IL-2 and IL-4 from activated normal lymphocytes to the surrounding extracellular matrix, thus influencing the activity of RA. The involvement of P-gp in transmembrane cytokine transport can therefore modulate the efficacy of DMARD. The number of lymphocytes with P-gp activity has been shown to be increased in patients with RA; therefore, P-gp expression could be related to RA activity and could be a predictor of poor response to therapy. Objective: To evaluate whether the G2677T/A MDR1 polymorphism is associated with differences in the rate of therapeutic response to disease-modifying antirheumatic agents in patients with RA. Material and Methods: A prospective cohort study was conducted. Fifty-seven patients with RA were included. All had active disease according to DAS-28 (score >3.2). We excluded patients receiving biological agents. All patients were followed for 6 months in order to identify the rate of therapeutic response according to the American College of Rheumatology (ACR) criteria. At baseline, peripheral blood samples were taken to identify the G2677T/A MDR1 polymorphism by allele-specific PCR. The fragment was identified by electrophoresis in polyacrylamide gels stained with ethidium bromide. For statistical analysis, the genotypic and allelic frequencies of the MDR1 gene polymorphism between responders and non-responders were determined. Chi-square tests, as well as relative risks with 95% confidence intervals (95% CI), were computed to identify differences in the risk of achieving therapeutic response.
Results: RA patients had a mean age of 47.33 ± 12.52 years, 87.7% were women, and the mean DAS-28 score was 6.45 ± 1.12. At 6 months, the rate of therapeutic response was 68.7%. The observed genotype frequencies were: G/G 40%, T/T 32%, A/A 19%, G/T 7% and A/A 2%. Patients with the G allele developed, at 6 months of treatment, a higher rate of therapeutic response assessed by ACR20 compared to patients with other alleles (p=0.039). Conclusions: Patients with the G allele of the G2677T/A MDR1 polymorphism had a higher rate of therapeutic response at 6 months with DMARD. These preliminary data support the need for a deeper evaluation of these and other genotypes as factors that may influence the therapeutic response in RA.
Keywords: pharmacogenetics, MDR1, P-glycoprotein, therapeutic response, rheumatoid arthritis
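The statistics reported above (a chi-square test plus a relative risk with 95% CI for responders vs. non-responders by allele) can be sketched in a few lines. The 2×2 counts below are invented for illustration and are not the study's data:

```python
import math

# Hypothetical 2x2 table (illustrative counts, not the study's data):
# rows: carriers of the G allele vs non-carriers
# cols: ACR20 responders vs non-responders at 6 months
a, b = 22, 5   # G-allele carriers: responders, non-responders
c, d = 17, 13  # non-carriers:      responders, non-responders

# Relative risk of response for G-allele carriers
risk_g = a / (a + b)
risk_other = c / (c + d)
rr = risk_g / risk_other

# 95% CI on the log scale (Katz method)
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

# Pearson chi-square with 1 degree of freedom
n = a + b + c + d
chi2 = n * (a*d - b*c)**2 / ((a + b) * (c + d) * (a + c) * (b + d))
p_value = math.erfc(math.sqrt(chi2 / 2))  # two-sided p for 1 df

print(f"RR = {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

With these illustrative counts the carrier group responds more often and the chi-square crosses the conventional 0.05 threshold, mirroring the direction of the reported result.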
357 Boiler Ash as a Reducer of Formaldehyde Emission in Medium-Density Fiberboard
Authors: Alexsandro Bayestorff da Cunha, Débora Caline de Mello, Camila Alves Corrêa
Abstract:
In the production of fiberboards, an adhesive based on urea-formaldehyde resin is used, which has the advantages of low cost, homogeneity of distribution, solubility in water, high reactivity in an acid medium, and high adhesion to wood. Its disadvantages are low resistance to humidity and the release of formaldehyde. The objective of the study was to determine the viability of adding industrial boiler ash to the urea-formaldehyde-based adhesive for the production of medium-density fiberboard. The raw material was composed of Pinus spp fibers, urea-formaldehyde resin, paraffin emulsion, ammonium sulfate, and boiler ash. The experimental plan, consisting of 8 treatments, was completely randomized with a factorial arrangement: 0%, 1%, 3%, and 5% ash added to the adhesive, with and without the application of a catalyst. In each treatment, 4 panels were produced with a density of 750 kg·m⁻³, dimensions of 40 × 40 × 1.5 cm, 12% urea-formaldehyde resin, 1% paraffin emulsion, and hot pressing at 180 ºC and 40 kgf/cm² for 10 minutes. The different compositions of the adhesive were characterized in terms of viscosity, pH, gel time and solids, and the panels by their physical and mechanical properties, in addition to evaluation with the IMAL DPX300 X-ray densitometer and formaldehyde emission by the perforator method. The results showed a significant reduction of all adhesive properties with the use of the catalyst, regardless of the treatment, while the increasing ash percentage increased the average values of viscosity, gel time, and solids and reduced the pH for the panels with a catalyst; for panels without catalyst, the behavior was the opposite, with the exception of solids.
For the physical properties, the results for density, compaction ratio, and thickness were equivalent and in accordance with the standard, while the moisture content was significantly reduced by the use of the catalyst but not influenced by the percentage of ash. The density profile for all treatments was characteristic of medium-density fiberboard, with surfaces more compacted and denser than the central layer. Thickness swelling was not influenced by the catalyst or the use of ash, presenting average values within the normalized parameters. For the mechanical properties, the ash negatively affected the modulus of rupture from 1% onward and the traction test from 3% onward; however, only the latter property, at 3% and 5%, fell below the minimum limit of the standard. The use of catalyst and ash at 3% and 5% reduced the formaldehyde emission of the panels; however, only the panels whose adhesive contained the catalyst presented emissions below 8 mg of formaldehyde/100 g of panel. It can therefore be said that boiler ash can be added to the adhesive with a catalyst, at up to 1%, without impairing the technological properties.
Keywords: reconstituted wood panels, formaldehyde emission, technological properties of panels, perforator
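The completely randomized factorial plan described above (4 ash levels × 2 catalyst conditions = 8 treatments, 4 panels each) can be enumerated mechanically; this is a minimal sketch, with treatment labels of my own choosing:

```python
from itertools import product

# Factor levels taken from the abstract; the dict layout is illustrative.
ash_levels = [0, 1, 3, 5]   # % boiler ash added to the adhesive
catalyst = [True, False]    # with / without catalyst

treatments = [
    {"ash_pct": ash, "catalyst": cat}
    for ash, cat in product(ash_levels, catalyst)
]

panels_per_treatment = 4
total_panels = len(treatments) * panels_per_treatment

print(f"{len(treatments)} treatments x {panels_per_treatment} panels = {total_panels} panels")
```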
356 European Hinterland and Foreland: Impact of Accessibility, Connectivity, Inter-Port Competition on Containerization
Authors: Dial Tassadit Rania, Figueiredo De Oliveira Gabriel
Abstract:
In this paper, we investigate the relationship between ports and their hinterland and foreland environments, and the competitive relationship between the ports themselves. These two environments are changing and evolving, introducing new challenges for commercial and economic development at the regional, national and international levels. Because of the rise of containerization, shipping and port handling costs have decreased considerably due to economies of scale; the volume of maritime trade has increased substantially, and the markets served by ports have expanded. On this basis, overlapping hinterlands can give rise to competition between ports. Our main contribution, compared to the existing literature on this issue, is to build a set of hinterland, foreland and competition indicators. Using these indicators, we investigate the effect of hinterland accessibility, foreland connectivity and inter-port competition on the containerized traffic of European ports, using a 10-year panel database covering 2004 to 2014. Our hinterland indicators are two accessibility measures describing the market potential of a port, calculated from information on population and wealth (GDP): we compute population and wealth for different neighborhoods within a distance from the port ranging from 100 to 1000 km. For the foreland, we produce two indicators: port connectivity and the number of partners of each port. Finally, we compute two indicators of inter-port competition and a market concentration indicator (Herfindahl-Hirschman) for different neighborhood distances around the port. We then apply a fixed-effects model to test the relationship above and, again with a fixed-effects model, perform a sensitivity analysis for each of these indicators to support the results obtained.
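The Herfindahl-Hirschman concentration indicator mentioned above has a simple closed form: the sum of squared traffic shares of the ports within a given neighborhood distance. A minimal sketch follows; the traffic figures are invented for illustration:

```python
# Herfindahl-Hirschman index over the container-traffic shares of the
# ports located within a chosen neighborhood distance of a given port.

def hhi(traffic):
    """Sum of squared market shares, on a 0-1 scale."""
    total = sum(traffic)
    return sum((t / total) ** 2 for t in traffic)

# Container traffic (TEU) of ports within a 300 km neighborhood (illustrative)
neighborhood_traffic = [11_000_000, 7_300_000, 2_200_000, 900_000]

concentration = hhi(neighborhood_traffic)
print(f"HHI = {concentration:.3f}")  # near 1 => concentrated, near 1/n => competitive
```

Computing the index for each port and each neighborhood radius (100 to 1000 km) yields the panel of concentration indicators used in the regression.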
The econometric results of the general model, obtained by regressing the accessibility indicators, the LSCI for port i, and the inter-port competition indicator on the containerized traffic of European ports, show a positive and significant effect of accessibility to wealth but not to population. The results are also positive and significant for the two indicators of connectivity and competition. One of the main results of this research is that port development, measured here by the growth of containerized traffic, is strongly related to the development of the port's hinterland and foreland environment. In addition, it is the market potential given by the wealth of the hinterland that affects a port's containerized traffic; accessibility to a large population pool is not important for understanding the dynamics of containerized port traffic. Furthermore, in order to continue to develop, a port must penetrate its hinterland deeply, beyond 100 km around the port, and seek markets beyond this perimeter. Port authorities could focus their marketing efforts on the immediate hinterland, which, as the results show, may not be captive, and thus engage new approaches to port governance to make it more attractive.
Keywords: accessibility, connectivity, European containerization, European hinterland and foreland, inter-port competition
355 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery
Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald
Abstract:
Background: Caverno-venous leakage (CVL) is a devastating, although barely known, disease: the first cause of major physical impairment in men under 25, and responsible for 50% of resistance to phosphodiesterase-5 inhibitors (PDE5-I), which affects 30 to 40% of users of this medication class. In this condition, too-early blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Score: EHS) and hemodynamic evaluation by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status. Data were prospectively recorded. Results: Clinical signs suggesting CVL were ED since an age lower than 25, loss of erection when changing position, and penile rigidity varying with position. The main complications were minor pulmonary embolism in 2 patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one decrease in glans sensitivity lasting more than one year. Mean pre-operative pharmacologic EHS was 2.37 ± 0.64; mean post-operative pharmacologic EHS was 3.21 ± 0.60, p<0.0001 (paired t-test). The mean EHS variation was 0.87 ± 0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing for intercourse with penetration.
Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing for sexual intercourse with penetration, one-third of them without any medication. Five patients received a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.
Keywords: erectile dysfunction, cavernovenous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography
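The paired t-test reported above compares the same patients' EHS before and after surgery. A minimal sketch of the computation follows; the score lists are invented for illustration and are not the study's raw data:

```python
import math

# EHS graded 1-4 before and after surgery in the same (hypothetical) patients.
pre  = [2, 2, 3, 2, 3, 2, 2, 3, 2, 2]
post = [3, 3, 4, 3, 3, 3, 2, 4, 3, 3]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_diff = sum(diffs) / n
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
t_stat = mean_diff / (sd_diff / math.sqrt(n))  # paired t statistic, df = n - 1

print(f"mean EHS gain = {mean_diff:.2f}, t({n - 1}) = {t_stat:.2f}")
```

The test is paired precisely because each post-operative score is matched to that patient's own pre-operative score, which is why the reported mean variation (0.87) need not equal the simple difference of group means.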
354 Ethnobotanical Study, Phytochemical Screening and Biological Activity of Culinary Spices Commonly Used in Omdurman, Sudan
Authors: Randa M. T. Mohamed
Abstract:
Spices have long been used as traditional ingredients in the kitchen for their seasoning, coloring, aromatic and food-preservative properties. Spices are equally used for therapeutic purposes. The objective of this study was to survey and document the medicinal properties of spices commonly used in the Sudanese kitchen for different food preparations, and to screen extracts of the reported spices for the presence of secondary metabolites as well as for antioxidant and beta-lactamase inhibitory properties. The study was conducted in the Rekabbya Quartier in Omdurman, Khartoum State, Sudan. Information was collected through semi-structured interviews; all informants (30) were women. Spices were purchased from the Attareen shop in Omdurman. Essential oils were extracted from the spices by hydrodistillation and ethanolic extracts by maceration. Phytochemical screening was performed by thin-layer chromatography (TLC). The antioxidant capacity of the essential oils and ethanolic extracts was investigated by TLC bioautography, and beta-lactamase inhibitory activity by the acidimetric test. The ethnobotanical survey showed that a total of 16 spices were used to treat 36 ailments belonging to 10 categories. The most frequently claimed medicinal uses were for digestive system diseases, treated by 14 spices, and respiratory system diseases, treated by 8 spices. Gynaecological problems were treated by 4 spices. Dermatological diseases were treated by 5 spices, while infections caused by tapeworms and other microbes causing dysentery were treated by 3 spices. Four spices were used to treat bad breath, bleeding gums and toothache. Headache, eye infections, cardiac stimulation and epilepsy were treated by one spice each. Other health problems, such as fatigue, loss of appetite and low breast milk production, were treated by 1, 3 and 2 spices, respectively.
The majority (69%, 11/16) of the spices were imported from countries such as India, China, Indonesia, Ethiopia, Egypt and Nigeria, while 31% (5/16) were cultivated in Sudan. The essential oils of all spices were rich in terpenes, while the ethanolic extracts contained variable classes of secondary metabolites. Both the essential oils and the ethanolic extracts of all spices exerted considerable antioxidant activity. Only one extract, Syzygium aromaticum, possessed beta-lactamase inhibitory activity. In conclusion, this study could contribute to conserving information on the traditional medicinal uses of spices in Sudan. The results also demonstrated the potential of some of these spices to exert beneficial antimicrobial and antioxidant effects. Detailed phytochemical and biological assays of these spices are recommended.
Keywords: spices, ethnobotany, phytoconstituents, antioxidant, beta lactamase inhibition
353 Detection of Glyphosate Using Disposable Sensors for Fast, Inexpensive and Reliable Measurements by Electrochemical Technique
Authors: Jafar S. Noori, Jan Romano-deGea, Maria Dimaki, John Mortensen, Winnie E. Svendsen
Abstract:
Pesticides have been used intensively in agriculture to control weeds, insects, fungi, and pests. One of the most commonly used pesticides is glyphosate. Glyphosate attaches to soil colloids and is degraded by soil microorganisms. As glyphosate led to the appearance of resistant species, the pesticide was used even more intensively. As a consequence of this heavy use, residues of the compound are increasingly observed in food and water. Recent studies reported a direct link between glyphosate and chronic effects such as teratogenic, tumorigenic and hepatorenal effects, even though exposure was below the lowest regulatory limit. Today, pesticides are detected in water by complicated and costly manual procedures conducted by highly skilled personnel, and it can take up to several days to obtain an answer regarding the pesticide content of a water sample. An alternative to this demanding procedure is offered by electrochemical measuring techniques. Electrochemistry is an emerging technology with the potential to identify and quantify several compounds within a few minutes. It is currently not possible to detect glyphosate directly in water samples, and intensive research is underway to enable its direct, selective, and quantitative detection. This study focuses on developing and modifying a sensor chip that can selectively measure glyphosate while minimizing signal interference from other compounds. The sensor is a silicon-based chip fabricated in a cleanroom facility, with dimensions of 10 × 20 mm and a three-electrode configuration. The deposited electrodes consist of a 20 nm chromium layer and 200 nm of gold; the working electrode is 4 mm in diameter. The working electrodes are modified by creating molecularly imprinted polymers (MIP) using an electrodeposition technique that allows the chip to selectively measure glyphosate at low concentrations.
The modification used gold nanoparticles with a diameter of 10 nm functionalized with 4-aminothiophenol. This configuration allows the nanoparticles to bind to the working electrode surface and create the template for the glyphosate. The chip was modified using an electrodeposition technique. An initial potential for the identification of glyphosate was estimated at around -0.2 V. The developed sensor was tested at 6 different concentrations and was able to detect glyphosate down to 0.5 mg L⁻¹. This value is below the accepted pesticide limit of 0.7 mg L⁻¹ set by US regulation. The current focus is on optimizing the functionalization procedure in order to achieve glyphosate detection at the EU regulatory limit of 0.1 µg L⁻¹. To the best of our knowledge, this is the first attempt to modify miniaturized sensor electrodes with functionalized nanoparticles for glyphosate detection.
Keywords: pesticides, glyphosate, rapid, detection, modified, sensor
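Quantification with such a sensor typically rests on a linear calibration over the standard concentrations, inverted to read unknown samples. A minimal least-squares sketch follows; all concentration and current values are invented for illustration and are not the study's measurements:

```python
# Linear calibration of (hypothetical) peak current vs. glyphosate concentration
concs = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]     # standards, mg/L (6 levels as in the study)
currents = [0.9, 1.6, 3.1, 7.4, 14.8, 29.5]  # measured response, uA (illustrative)

n = len(concs)
mean_x = sum(concs) / n
mean_y = sum(currents) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, currents)) \
        / sum((x - mean_x) ** 2 for x in concs)
intercept = mean_y - slope * mean_x

def concentration(current):
    """Invert the calibration line to estimate an unknown sample's concentration."""
    return (current - intercept) / slope

print(f"slope = {slope:.3f} uA per mg/L, intercept = {intercept:.3f} uA")
```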
352 Use of Cellulosic Fibres in Double Layer Porous Asphalt
Authors: Márcia Afonso, Marisa Dinis-Almeida, Cristina Fael
Abstract:
Climate change, namely the alteration of precipitation patterns, has led to extreme conditions such as floods and droughts. In turn, excessive construction has led to the sealing of the soil, increasing surface runoff and decreasing the groundwater recharge capacity. Permeable pavements used in areas with low traffic decrease the probability of flood peaks and reduce sediment and pollutant transport, improving rainwater quality. This study aims to evaluate the performance of porous asphalt, developed in the laboratory, with the addition of cellulosic fibres. One of the main objectives of using cellulosic fibres is to stop binder drainage, preventing its loss during storage and transport. Compared to conventional porous asphalt, the addition of cellulosic fibres improved performance: the fibres allowed the bitumen content to be increased, enabling binder retention and better aggregate coating and, consequently, greater mixture durability. With this solution, it is intended to develop better practices of resilience and adaptation to extreme climate change and to respond to current sustainability demands through the use of eco-friendly materials. The mix design was performed for different aggregate sizes (with fine aggregates, PA1, and with coarse aggregates, PA2), and the influence of the fibre percentage was studied. It was observed that, overall, binder drainage decreases as the cellulosic fibre percentage increases. The PA2 mixture showed more binder drainage than the PA1 mixture, irrespective of the fibre percentage used. Subsequently, performance was evaluated through laboratory tests of indirect tensile stiffness modulus, water sensitivity, permeability and permanent deformation. The stiffness modulus for the two mixture groups (with and without cellulosic fibres) presented very similar values.
In the water sensitivity test, porous asphalt containing more fine aggregates proved more susceptible to the presence of water than mixtures with coarse aggregates. Porous asphalt with coarse aggregates has more air voids, which allow water to pass easily, leading to higher ITSR values. In the permeability test, porous asphalt without cellulosic fibres presented lower permeability than porous asphalt with cellulosic fibres. The permanent deformation results indicate better behaviour of porous asphalt with cellulosic fibres, with a bigger rut depth observed in porous asphalt without cellulosic fibres. In this study, it was observed that porous asphalt with higher bitumen percentages performs better against permanent deformation; this was only possible due to the retention of the bitumen by the cellulosic fibres.
Keywords: binder drainage, cellulosic fibres, permanent deformation, porous asphalt
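The water-sensitivity indicator referred to above, the ITSR, is simply the ratio of wet-conditioned to dry indirect tensile strength, in percent. A minimal sketch follows; the strength values are illustrative, not the study's results:

```python
# Indirect tensile strength ratio (ITSR), as used in EN 12697-12-style
# water-sensitivity evaluation of bituminous mixtures.

def itsr(its_wet_kpa, its_dry_kpa):
    """Wet-conditioned strength over dry strength, in percent."""
    return 100.0 * its_wet_kpa / its_dry_kpa

its_dry = 620.0   # kPa, unconditioned specimens (illustrative)
its_wet = 540.0   # kPa, water-conditioned specimens (illustrative)

print(f"ITSR = {itsr(its_wet, its_dry):.1f}%")  # higher => less water-sensitive
```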
351 Application of Geosynthetics for the Recovery of Located Road on Geological Failure
Authors: Rideci Farias, Haroldo Paranhos
Abstract:
The present work deals with the use of a drainage geocomposite as a deep drainage element and a geogrid to reinforce the base of the embankment supporting the road pavement over geological faults in a stretch of the TO-342 Highway, between the cities of Miracema and Miranorte, in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas, also in Tocantins. For this application, geotechnical and geological studies were carried out by means of SPT percussion drilling and rotary drilling to understand the problem, identifying the type of faults, the filling material and the position of the water table. According to these studies, the route passes through a fault zone longitudinal to the roadway, with strong breaking/fracturing, presence of voids, intense alteration with advanced argilization of the rock, and partial filling of the faults by organic and compressible soils leached from other horizons. This geology presents, as a geotechnical aggravating factor, a medium with high hydraulic load and very low penetration resistance. For more than 20 years, the region presented constant excessive deformations in the upper layers of the pavement which, after routine services of regularization, reconformation, re-compaction of the layers and application of the asphalt coating, quickly propagated back to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) close to 40 cm, causing numerous accidents and discomfort to drivers, since the alignment was on a horizontal curve. Several projects were presented to the region's highway department to solve the problem.
Due to the need for only partial closure of the roadway and the short execution time, the use of geosynthetics was proposed; the solution adopted took into account the movement of the existing geological faults and the position of the water level in relation to the several pavement layers and the faults. In order to avoid any flow of water into the body of the embankment and the filling material of the faults, a drainage curtain was executed at 4.0 meters depth with a drainage geocomposite and, as a reinforcement element and inhibitor of possible movements, a geogrid with a tensile strength of 200 kN/m was inserted at the base of the reconstituted embankment. Recent evaluations, 13 years after application of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area.
Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure
350 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was determined in advance by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It must be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recurrent neural network methods and deep learning, but this approach can be described more easily by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithms, is expected to regain popularity. The research method focuses on the patentability (under the world's most significant patent law regimes, namely China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the state of the art with the associated obviousness of the solution arise in the current patenting processes.
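The 'genetic breeding' (evolutionary) models discussed above can be sketched as a minimal selection-crossover-mutation loop. This is a toy illustration, with an invented fitness function (maximize the number of 1-bits), not any particular patented system:

```python
import random

random.seed(42)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)  # toy objective: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Random initial population of bit-string "individuals"
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]  # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

The legally interesting point is that the "solution" emerges from this automated breeding loop rather than from a human drafting it step by step.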
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or the AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only the European patent law regimes but also the Chinese and Singaporean approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
349 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales has been a challenging task. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
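The core building block of any FNO variant, including the IU-FNO described above, is a spectral convolution: transform the field to Fourier space, multiply the lowest modes by learned weights, and transform back. A one-dimensional hedged sketch follows; the real model operates on 3D fields with learned complex weight tensors, and the weights here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(u, weights, modes):
    """Multiply the lowest `modes` Fourier coefficients of u by weights; zero the rest."""
    u_hat = np.fft.rfft(u)                  # to Fourier space
    out_hat = np.zeros_like(u_hat)          # complex, same length
    out_hat[:modes] = u_hat[:modes] * weights[:modes]
    return np.fft.irfft(out_hat, n=len(u))  # back to physical space

n, modes = 64, 12
u = np.sin(2 * np.pi * np.arange(n) / n) + 0.1 * rng.standard_normal(n)
weights = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)  # stand-in for learned weights

v = spectral_conv_1d(u, weights, modes)
print(v.shape)  # output lives on the same grid as the input
```

Truncating to the low modes is what makes the operator resolution-independent and cheap; the "implicit recurrent" part of IU-FNO repeatedly applies such a layer with shared weights.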
348 The Significance of Urban Space in Death Trilogy of Alejandro González Iñárritu
Authors: Marta Kaprzyk
Abstract:
The cinema of Alejandro González Iñárritu has not yet been subjected to much detailed analysis, which makes it exceptionally interesting research material. The purpose of this presentation is to discuss the significance of urban space in the three films of this Mexican director that form the Death Trilogy: ‘Amores Perros’ (2000), ‘21 Grams’ (2003) and ‘Babel’ (2006). The fact that in these films the urban space itself becomes an additional protagonist, with its own identity, psychology and the ability to transform and affect the other characters, in itself warrants independent research and analysis. Independently, this mode of presenting urban space has another function: it enables the director to complement the rest of the characters. The basis for the methodology of this description of cinematographic space is to treat its visual layer as a point of departure for detailed analysis. At the same time, the analysis is supported by recognised academic theories concerning spatial issues, which are transformed here into essential tools for describing the world (mise-en-scène) created by González Iñárritu. In ‘Amores perros’, Mexico City serves as the scenery: a place full of contradictions, depicted as a modern conglomerate and an urban jungle as well as a labyrinth of poverty and violence. In this work, stylistic tropes can be found in an intertextual dialogue of the director with the photographs of Nan Goldin and Mary Ellen Mark. The story recounted in ‘21 Grams’, the most tragic piece of the trilogy, is characterised by an almost hyperrealistic sadism. It takes place in Memphis, which on the screen turns into an impersonal formation full of heterotopias, as described by Michel Foucault, and non-places, as defined by Marc Augé in his essay.
By contrast, the main urban space in ‘Babel’ is Tokyo, which seems to correspond perfectly with the image of places discussed by Juhani Pallasmaa in his works on the reception of architecture through ‘pathological senses’ in the modern (or, more adequately, postmodern) world. It is portrayed as a city full of buildings that look so surreal that they seem completely unsuitable for humans to move between them. Ultimately, the aim of this paper is to demonstrate the coherence of the manner in which González Iñárritu designs urban spaces in his Death Trilogy. In particular, the author attempts to examine the imperative role of the cities that form the three specific microcosms in which the protagonists of the Mexican director live their overwhelming tragedies.
Keywords: cinematographic space, Death Trilogy, film studies, González Iñárritu Alejandro, urban space
Procedia PDF Downloads 333
347 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and the resulting tilting of the supported structure is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution for such a problem: uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast, and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa through soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution with a different point of view: micro-tunneling is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce only limited ground deformations. Thus, the researchers propose to apply the technique to form small unsupported holes in the ground to produce the target deformations. This is done in four phases:
• Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
• For each individual tunnel, the lining is pulled out from both sides (from the jacking and receiving shafts) at a slow rate.
• If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side.
• Finally, strengthening soil grouting is applied for stabilization after adjustment.
A finite element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
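A classical first estimate of the tunnelling-induced surface settlement that the procedure above exploits is Peck's empirical Gaussian trough. The sketch below is a hand-calculation aid with illustrative numbers, not the authors' finite element model; the maximum settlement of 20 mm and trough width of 5 m are assumed values for demonstration.

```python
import numpy as np

def peck_settlement(x, s_max, i):
    """Peck's empirical Gaussian trough: surface settlement at transverse
    offset x (m) from the tunnel axis, for maximum settlement s_max and
    trough width parameter i (m)."""
    return s_max * np.exp(-x**2 / (2 * i**2))

# illustrative numbers (not from the paper): 20 mm max settlement,
# trough width 5 m, evaluated across a 30 m wide strip
x = np.linspace(-15.0, 15.0, 61)
s = peck_settlement(x, s_max=20.0, i=5.0)
print(round(s.max(), 1))                           # 20.0 mm, above the axis
print(round(peck_settlement(5.0, 20.0, 5.0), 2))   # settlement at x = i
```

Placing such a trough under the raised side, and superposing troughs for a tunnel group, gives a quick feel for the corrective differential settlement before running the full FE phases.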
Procedia PDF Downloads 208
346 Effects of Learner-Content Interaction Activities on the Context of Verbal Learning Outcomes in Interactive Courses
Authors: Alper Tolga Kumtepe, Erdem Erdogdu, M. Recep Okur, Eda Kaypak, Ozlem Kaya, Serap Ugur, Deniz Dincer, Hakan Yildirim
Abstract:
Interaction is one of the most important components of open and distance learning. According to Moore, who proposed one of the keystone frameworks on interaction, there are three basic types of interaction: learner-teacher, learner-content, and learner-learner. Of these, learner-content interaction can without doubt be identified as the most fundamental, the one on which all education is based. The efficacy, efficiency, and attractiveness of open and distance learning systems can be achieved through effective learner-content interaction. With the development of new technologies, interactive e-learning materials have become a common resource in open and distance learning, alongside printed books. The intellectual engagement of learners with the content, that is, the course materials, may also affect their overall satisfaction with open and distance learning practices. Learner satisfaction holds an important place in open and distance learning, since it eventually contributes to the achievement of learning outcomes. Through learner-content interaction activities in course materials, the Open Education system of Anadolu University tries to involve learners in deep and meaningful learning practices. During the design and production of e-learning materials in particular, identifying appropriate learner-content interaction activities within the context of learning outcomes is of great importance. Considering the lack of studies adopting this approach, as well as its being a study on the use of e-learning materials in the Open Education system, this research holds considerable value in the open and distance learning literature.
In this respect, the present study aimed to investigate a) which learner-content interaction activities included in interactive courses are the most effective in learners’ achievement of verbal information learning outcomes and b) to what extent distance learners are satisfied with these learner-content interaction activities. A quasi-experimental research design was adopted. The 120 participants were Anadolu University Open Education Faculty students living in Eskişehir. The students were divided randomly into 6 groups. While 5 of these groups received different learner-content interaction activities as part of the experiment, the other group served as the control group. The data were collected mainly through two instruments: a pre-test and a post-test. In addition to those tests, learners’ perceived learning was assessed with an item at the end of the program. The data collected from the pre-test and post-test were analyzed by ANOVA, and in light of the findings of this approximately 24-month study, suggestions for the further design of e-learning materials within the context of learner-content interaction activities will be provided at the conference. The current study is planned to be an antecedent for subsequent studies that will examine the effects of such activities on other learning domains.
Keywords: interaction, distance education, interactivity, online courses
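The one-way ANOVA used on the pre-test/post-test data compares between-group variance to within-group variance. A minimal sketch of the F statistic on hypothetical score gains (invented numbers, not the study's data) looks like this:

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical post-test score gains for three treatment groups
g1 = np.array([12.0, 15.0, 11.0, 14.0])
g2 = np.array([18.0, 20.0, 17.0, 21.0])
g3 = np.array([10.0, 9.0, 12.0, 11.0])
f = one_way_anova_f([g1, g2, g3])
print(round(f, 2))  # large F suggests group means differ
```

In practice the F value is compared against the F distribution with (k-1, n-k) degrees of freedom to obtain a p-value.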
Procedia PDF Downloads 194
345 The Language of Science in Higher Education: Related Topics and Discussions
Authors: Gurjeet Singh, Harinder Singh
Abstract:
In this paper, we present 'The Language of Science in Higher Education: Related Topics and Discussions'. Linguists have written about and researched in depth the role of language in science. On this basis, it is clear that language is not just a medium or vehicle for communicating knowledge and ideas, nor merely a set of signs for encoding knowledge and converting ideas into code. In the process of reading and writing, everyone thinks deeply and struggles to understand concepts and make sense of them, and language plays an important role in grasping concepts. In the context of such linguistic diversity, there is no straightforward and simple answer to the question of which language should be the language of advanced science and technology. Many important topics related to this issue are as follows: involvement in practical or deep theoretical issues; languages for the study of science and other subjects; whether the language issues of science should be considered separately from the development of science, capitalism, colonial history, and the worldview of the common person; the democratization of science and technology education in India, which is possible only by providing maximum reading and resource material in regional languages; increasing the chances of understanding the subject through scientific research; and multilingualism instead of monolingualism. As far as deepening the understanding of the subject is concerned, we can shed light on it on the basis of two or three experiences. An attempt was made almost three decades ago to bring out the famous sociological journal Economic and Political Weekly in Hindi. There were many obstacles in this work: original articles written in Hindi could not be found, so papers and articles from the English journal were translated into Hindi, and a journal called Sancha was brought out. Equally important are the democratization of knowledge and the deepening of understanding of the subject.
However, the question remains that if higher education in science is conducted in Hindi or other languages, it may be a problem to get a job. In fact, since independence, English has been dominant in almost every field except literature. There are historical reasons for this, which cannot be reversed. As mentioned above, due to colonial rule, English was established even before independence as the language of communication, the language of power and status, the language of higher education, the language of administration, and the language of scholarly discourse. After independence, attempts to make Hindi or Hindustani the national language of India were unsuccessful. Given this history and the current reality, higher education should be multilingual or at least bilingual. The scope of translation should also be increased for those who choose the material for translation. Writing on science in regional languages and making knowledge from various international languages available in Indian languages are equally important, alongside opportunities for all to learn English.
Keywords: language, linguistics, literature, culture, ethnography, Punjabi, Gurmukhi, higher education
Procedia PDF Downloads 91
344 Human Interaction Skills and Employability in Courses with Internships: Report of a Decade of Success in Information Technology
Authors: Filomena Lopes, Miguel Magalhaes, Carla Santos Pereira, Natercia Durao, Cristina Costa-Lobo
Abstract:
Implementing curricular internships with undergraduate students is a pedagogical option with good results as perceived by academic staff, employers, and graduates in general, and in IT (Information Technology) in particular. This type of exercise has never been so relevant, as one tries to give meaning to the future in a landscape of rapid and deep change. One example is the potentially disruptive impact on jobs of advances in robotics, artificial intelligence, and 3-D printing, which is the focus of fierce debate. It is in this context that more and more students and employers engage in the pursuit of responses that promote careers and business development when making their investment decisions about training and hiring. Three decades of experience and research in the computer science degree and the information systems technologies degree at Portucalense University, a Portuguese private university, have provided strong evidence of its advantages. The development of Human Interaction Skills, as well as the attractiveness of such experiences for students, are core topics in the conception and management of the activities implemented in these study cycles. The objective of this paper is to gather evidence of the Human Interaction Skills explained and valued within the curricular internship experiences and their contribution to the employability of IT students. Data collection was based on the application of a questionnaire to internship counselors and to students who have completed internships in these undergraduate courses in the last decade.
The trainee supervisor, responsible for monitoring the performance of IT students over the course of the traineeship activities, evaluates the following Human Interaction Skills: motivation and interest in the activities developed, interpersonal relationships, cooperation in company activities, assiduity, ease of knowledge apprehension, compliance with norms, insertion in the work environment, productivity, initiative, ability to take responsibility, creativity in proposing solutions, and self-confidence. The results show that these undergraduate courses promote the development of Human Interaction Skills and that these students, once they finish their degree, are able to take up remunerated work, mainly by invitation of the institutions in which they performed their curricular internships. The findings of the present study help widen the analysis of the effectiveness of internships, in terms of both future research and actions regarding the transition from Higher Education pathways to the labour market.
Keywords: human interaction skills, employability, internships, information technology, higher education
Procedia PDF Downloads 289
343 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai has traditionally been the epicenter of India's trade and commerce, and its existing major ports, Mumbai and Jawaharlal Nehru Port (JN), situated in the Thane estuary, are also developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water in view of the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair, so artificial intelligence was applied to predict water levels by training a network on the measured tide data for one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for a lunar tidal cycle (2013) were used to train, validate, and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give reasonably accurate estimates of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data and the study reveal the following. The measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum tide amplification of about 10-20 cm, with a phase lag of 10-20 minutes, with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, useful for planning pumping operations at Pir-Pau and improving the ship schedule at Ulwe.
Keywords: artificial neural network, back-propagation, tide data, training algorithm
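A minimal sketch of the kind of network described, a two-layer feed-forward net trained by plain gradient descent with back-propagation, is shown below. It fits a synthetic sinusoid standing in for a tidal record; the architecture (1-8-1 with a tanh hidden layer), learning rate, and data are all assumptions for illustration, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "tide": one sinusoidal cycle on a normalised time axis
# (a stand-in for measured water levels, not the study's data)
t = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * t)

# two-layer feed-forward net: 1 input -> 8 tanh hidden units -> 1 output
W1 = 0.5 * rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

mse0 = float(np.mean((forward(t)[1] - y) ** 2))  # error before training
lr = 0.2
for _ in range(5000):                   # plain gradient descent (GD)
    h, pred = forward(t)
    err = pred - y
    gW2 = h.T @ err / len(t); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
    gW1 = t.T @ dh / len(t); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((forward(t)[1] - y) ** 2))
print(mse < mse0)  # training reduces the fit error
```

Levenberg-Marquardt replaces the plain gradient step with a damped Gauss-Newton update, which is why it converges in far fewer iterations on problems like this.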
Procedia PDF Downloads 484
342 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform
Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy
Abstract:
A bio-sensing method based on the plasmonic properties of gold nano-islands has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to the concentration. Exosomes transport cargoes of molecules and genetic material to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not practical to implement in a clinical setting. Thus, a versatile, cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. Instead of antibodies, the new sensing protocol makes use of a specially synthesized polypeptide (Vn96) to capture and quantify the exosomes from different media by binding the heat shock proteins of the exosomes. The protocol was established and optimized on a glass substrate in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
The microfluidic device designed for the sensing of exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes, so exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes of different origins.
Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing
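Quantification from a shift-versus-concentration relationship reduces to a calibration-curve fit and its inversion. The sketch below uses invented calibration points (shift roughly linear in the logarithm of concentration, a common assumption for such sensors), not the authors' measured data:

```python
import numpy as np

# hypothetical calibration: LSPR peak red-shift (nm) at known exosome
# concentrations (particles/mL, log-spaced) -- illustrative numbers only
conc = np.array([1e7, 1e8, 1e9, 1e10])
shift = np.array([1.2, 3.1, 5.0, 6.9])

# fit shift = m * log10(concentration) + b
m, b = np.polyfit(np.log10(conc), shift, 1)

def concentration_from_shift(s):
    """Invert the calibration line to estimate an unknown concentration."""
    return 10 ** ((s - b) / m)

print(round(m, 2))  # red-shift in nm per decade of concentration
```

An unknown sample's measured shift is then passed through `concentration_from_shift` to read off its estimated concentration.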
Procedia PDF Downloads 172
341 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating product and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be analysed using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants’ mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises the evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: The search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and which ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
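Frontal alpha asymmetry is typically computed as the difference of log alpha-band (8-13 Hz) power between right and left frontal channels. The sketch below demonstrates the computation on synthetic F3/F4 signals (invented data, periodogram band power via FFT rather than a full pipeline with artifact rejection and Welch averaging):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

fs, dur = 250, 4                       # 250 Hz sampling, 4 s epoch
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(0)
# synthetic channels: right frontal (F4) carries more 10 Hz alpha than left (F3)
f3 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))
f4 = 4.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))

# FAA = ln(right alpha) - ln(left alpha); alpha is inversely related to
# cortical activity, so FAA > 0 indexes relatively greater LEFT activation
faa = np.log(band_power(f4, fs, 8, 13)) - np.log(band_power(f3, fs, 8, 13))
print(faa > 0)
```

Under the review's interpretation, a positive FAA (greater left frontal activation) would index an approach-related, positive response to the marketing stimulus.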
Procedia PDF Downloads 112
340 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. In recent years, however, air pollution has further increased through pollutants such as nitrogen oxides and acids. To address this problem, carbon and nitrogen oxides need to be reduced through lean burning, modified combustors, and fuel dilution. A numerical investigation has been carried out into the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, neat or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing the results with experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends, and the effects on flame temperature, shape, velocity, and the concentrations of radicals and emissions were observed.
The reduced mechanisms were found to provide results within an acceptable range. Variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (equivalence ratio 0.5-0.9). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow, with the equivalence ratio varied alongside the hydrogen addition. The production of NO is reduced because combustion takes place in a leaner state, which helps in solving environmental problems.
Keywords: combustor, equivalence ratio, hydrogenation, premixed flames
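The equivalence ratio that organises these results is the actual fuel/air ratio divided by the stoichiometric one. A minimal sketch for a methane/hydrogen blend (standard stoichiometry, with air taken as 4.76 mol per mol of O2; the 90/10 blend split is an assumed example, not the paper's composition):

```python
def equivalence_ratio(fuel_moles, air_moles, o2_stoich_per_mole_fuel):
    """phi = (F/A)_actual / (F/A)_stoich, treating air as 4.76 mol/mol O2."""
    fa_actual = fuel_moles / air_moles
    fa_stoich = 1.0 / (o2_stoich_per_mole_fuel * 4.76)
    return fa_actual / fa_stoich

# stoichiometric O2 demand: CH4 + 2 O2 -> CO2 + 2 H2O; H2 + 0.5 O2 -> H2O
# a 90/10 CH4/H2 blend needs 0.9*2 + 0.1*0.5 = 1.85 mol O2 per mol fuel
o2_need = 0.9 * 2.0 + 0.1 * 0.5
phi = equivalence_ratio(fuel_moles=1.0,
                        air_moles=o2_need * 4.76,
                        o2_stoich_per_mole_fuel=o2_need)
print(round(phi, 3))  # exactly stoichiometric air supply gives phi = 1.0
```

Supplying twice the stoichiometric air halves phi, landing in the lean range (phi < 1) where the abstract reports the highest temperatures.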
Procedia PDF Downloads 114