Search results for: optimal search
471 Melancholia, Nostalgia: Bernardo Soares after Fernando Pessoa
Authors: Maria de Fátima Lambert
Abstract:
Bernardo Soares is one of Fernando Pessoa's several heteronyms (and "semi-heteronyms"), perhaps the one that gathered the greatest share of Pessoa's own qualities and traits of self-identity within his famous inner diversity of personae. The Book of Disquiet, attributed to Bernardo Soares, was first published in 1982 and consists of ontological remarks driven by an obsessive inquiry into self-existence. The book became highly valuable material for examining the philosophical grounds of Pessoa's aesthetics. To be sure, we cannot speak of a single aesthetics, since each heteronym developed his own, following different principles and convictions. In Bernardo Soares' case, his thought arises from sequences of self-clues expressing peculiar existential doubts presented as certainties, and vice versa. His written self-search images are reported in a way that molds the painful awareness of existence through a discredited tolerance of any conclusive dialogue with others. Given the nature of Soares' [maybe] unfinished writings, it is clear that he rarely strayed from his self-insurance capsule: the office, the bedroom, or the walkscapes through Lisbon. The idea of travel, or the journey, is among the most relevant for recognizing his profound, although undercover, anguish as melancholy and nostalgia. In Bernardo Soares, aesthetics is taken agonizingly, grounded in discreet poetic phraseology and terms. His poetical awareness produced compulsive titles such as "Aesthetics of Indifference" and "Aesthetics of Discouragement". Soares' aesthetics emerges directly from the self, understanding art as inner acts and lived experience; art is not freed from the intellectual expression of one's own emotions. The Book of Disquiet is an existential nightmare nourished by everyday life, single written thoughts balanced by melancholia, nostalgia, and distress. One might wonder whether it was dreams that guided his fictional literary persona, or the narrow facts of life itself.
Along with his endless disquiet writing, Pessoa's semi-heteronym travelled without physically going anywhere. The complexity of inner existence is fulfilled by lonely mental walks and travels, as in the two texts titled The Never Accomplished Journey. Although other fragments can also be considered, these are the deepest reflections on travelling. Let us recall that Fernando Pessoa's orthonymous writings, poems and essays alike, also addressed this issue from a philosophical perspective. We believe that this theme is one of the meaningful concepts for characterizing the main principles of his aesthetics. As we know, Fernando Pessoa did not travel abroad (or even within Portugal), except for the childhood journey, with his family, from Lisbon to South Africa and, some years later, the return to Lisbon. One may wonder why the poet never undertook other journeys: perhaps a disbelief in moving away from his comfort zone, or a fear of becoming addicted to endless travel and losing his convenient self-closeness. In The Book of Disquiet, the poet shared his internal visions of the outer world, but mainly visualized his deepest enigmas and experiences, so strongly incorporated into both reality and fiction.
Keywords: aesthetic principles, Bernardo Soares, Fernando Pessoa, melancholia, nostalgia, non-accomplished travel, The Book of Disquiet
Procedia PDF Downloads: 130
470 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil
Authors: Thiago R. Pereira
Abstract:
Philosophical hermeneutics can change the Brazilian judiciary because of what it understands about the characteristics of the human being. It is impossible for a human invested with the function of judge to make absolutely neutral decisions, but philosophical hermeneutics can help the judge make impartial decisions grounded in the Federal Constitution. Normative legal positivism imagined a neutral judge, one able to adjudicate without any preconceived ideas and without allowing his or her background to exert influence. When a judge decides based on clear legal rules, the problem is smaller; but when there are no clear rules and the judge must decide based on principles, the risk is that the decision rests on what the judge personally believes, and, taken solipsistically, this issue gains a huge dimension. Today the Brazilian judiciary is independent, but it needs a greater knowledge of philosophy and the philosophy of law, partly because the bigger problem is the unpredictability of its decisions. At present, when a lawsuit is filed, the result of the judgment is almost unpredictable; it is nearly a gamble. There must be at least minimal legal certainty and predictability of judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, has believed in the possibility of multiple answers. From the Greeks in the sixth century before Christ, through the Germans in the eighteenth century, and even today, the constitution has been established as the supreme law, the Grundnorm; thus the relativism of life can be greatly reduced when the hermeneut uses the Constitution as an interpretive North, with all interpretation passing through the constitutional hermeneutic filter. A current philosophy of law holds that, inside a legal system with a Federal Constitution, there is a single correct answer to a specific case. The challenge is how to find this right answer.
The only answer to this question is that we should use the constitutional principles. But in many cases a collision between principles will take place, and to resolve it the judge or hermeneut may choose a solipsistic path, using what he or she personally believes to be right. For obvious reasons, that conduct is not safe. Thus a theory of decision is necessary to seek justice, and hermeneutic philosophy and the linguistic turn are needed to find the right answer, that is, the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer individuals agree with, but we must put aside our preferences and defend the answer the Constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individual judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology is qualitative, academic and theoretical: philosophical hermeneutics with the mission of proposing a new way of thinking about the science of law. The research sought to demonstrate the difficulty Brazilian courts have in departing from the secular influence of legal positivism, and the need to think the science of law within a contemporary perspective, where the linguistic turn and philosophical hermeneutics are the surest way to conduct the science of law in the present century.
Keywords: hermeneutics, right answer, solipsism, Brazilian judiciary
Procedia PDF Downloads: 350
469 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays
Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir
Abstract:
Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced MN performance, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, this work investigates the optimized design of dissolvable MNAs as well as their manufacturing. For this purpose, a mechanical model of a single MN with an obelisk geometry is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to achieve painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring in the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly for its geometric capability, production speed, production cost, and the variety of materials that can be used. To remove any chip residue, the master molds are cleaned ultrasonically. The fabricated master molds can then be used repeatedly to produce polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo.
To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis
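The selection step at the heart of NSGA-II is the extraction of the non-dominated (Pareto) front from a set of candidate designs. A minimal sketch of that step; the (insertion force, peak stress) objective pairs are invented for illustration and are not values from the study:

```python
def dominates(a, b):
    """True if design a is no worse than b in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated set: the first front NSGA-II's sorting extracts."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical MN designs scored as (insertion force, peak stress):
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The surviving designs are the trade-off candidates among which a final geometry (needle width, apex angle, fillet radius) would be chosen.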
Procedia PDF Downloads: 113
468 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation
Authors: Onur Ozveri, Korkut Karabag, Cagri Keles
Abstract:
Production downtime and repair costs that occur when machines fail are an important issue in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machine should have priority for repair, how to determine the optimal repair time to allot to each machine, and how to plan the resources needed for the repairs. In recent years, the Business Process Management (BPM) technique has brought effective solutions to different problems in business. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair work is carried out as a series of processes. Evaluating the maintenance main process and the repair sub-process with the process management technique was therefore thought to offer a solution. For this reason, the issue was taken up in an international manufacturing company, and a solution proposal was developed. The purpose of this study is the implementation of maintenance and repair work integrated with the process management technique and, at the end of the implementation, the analysis of maintenance-related parameters such as quality, cost, time, safety and spare parts. The international firm that carried out the application operates in a free trade zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, safety and thermal systems for the world's leading light and heavy vehicle manufacturers. First, a project team was established in the firm. The team examined the current maintenance process and revised it using process management techniques. The repair process, a sub-process of the maintenance process, was likewise reconsidered.
In the improved processes, the ABC equipment classification technique was used to decide which machine or machines would be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the obtained data were compared with the previous year's data. In conclusion, breakdown maintenance was found to be completed in a shorter time, at lower cost and with a lower spare parts inventory.
Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance
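The ABC prioritization described above is, at bottom, a cumulative-share (Pareto) classification. A minimal sketch; the impact scores and the common 80%/95% cut-offs are illustrative assumptions, since the paper does not report its exact weighting:

```python
def abc_classify(impact, a_cut=0.80, b_cut=0.95):
    """Classify equipment by cumulative share of total failure impact:
    class A up to a_cut of the total, B up to b_cut, C for the rest.
    The 80%/95% cut-offs are common Pareto defaults, assumed here."""
    total = sum(impact.values())
    classes, cum = {}, 0
    for machine, score in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
        cum += score
        if cum <= a_cut * total:
            classes[machine] = "A"
        elif cum <= b_cut * total:
            classes[machine] = "B"
        else:
            classes[machine] = "C"
    return classes

# Invented combined impact scores (production effect, quality, cost, safety):
machines = {"press": 45, "lathe": 30, "mill": 15, "fan": 10}
classes = abc_classify(machines)
```

Class A machines would then be repaired first when several fail simultaneously.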
Procedia PDF Downloads: 194
467 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation
Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode
Abstract:
The significance of high-consequence workplace failures within construction continues to resonate, with a combined average of 12 fatal incidents occurring daily throughout Australia, the United Kingdom, and the United States. Within the Australian construction domain alone, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, together with the continued occurrence of fatal and serious occupational injury incidents globally, suggest that existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors required to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information about the construction operating and project delivery system; for example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, a control structure model of the construction industry, to the authors' best knowledge the first of its kind, is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions from the Coronial investigation into the event are compared with the findings stemming from the CAST analysis.
The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting a potential benefit in applying a systems theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems theory-based methods to the analysis of construction incidents. Specifically, this study shows the utility of the construction control structure and its potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction performance.
Keywords: construction project management, construction performance, incident analysis, systems thinking
Procedia PDF Downloads: 128
466 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done
Authors: Geir Kirkebøen
Abstract:
Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence-based) 'best practice' for how to do this. The two questions explored in this study are: how do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario part and a survey part. In the scenario part, the doctors were asked to imagine that a patient, recently diagnosed with a serious cancer, had asked them: 'How long can I expect to live with such a diagnosis? I want an honest answer from you!' The doctors were to assume that the diagnosis was certain and that, from an extensive recent study, they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long patients with this diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical reply was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later, and so on. The survey part of the study clearly indicates that the main reason the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part confirmed this finding: the majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason not to give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis, and that they should be given hope.
The question, then, is how to communicate the uncertainty of the prognosis in a realistic and optimistic, hopeful, way. Based on psychological research, our hypothesis is that the best way to do this is to explicitly describe the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and to emphasize to the patient the (small) possibility of being a 'lucky outlier'. We tested this hypothesis in two scenario studies with laypeople as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve's right-skewness (e.g., concrete examples of 'positive outliers'), and that communicating expected survival time this way not only provides people with hope but also gives them a more realistic understanding than the typical way expected survival time is communicated. Our data indicate that it is not the existence of uncertainty in the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.
Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis
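A right-skewed survival curve can be made concrete with a log-normal model: the median is what "expected survival time" usually refers to, yet a substantial fraction of patients outlive it by a wide margin. The sketch below uses an invented 12-month median and spread, not data from the study:

```python
import math

def lognormal_survival(t, mu, sigma):
    """P(T > t) for a log-normal survival time (a right-skewed curve),
    computed from the standard normal CDF via erfc."""
    z = (math.log(t) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

mu, sigma = math.log(12.0), 0.8        # hypothetical: median survival of 12 months
median = math.exp(mu)                  # 12.0; by definition half of patients live longer
p_outlier = lognormal_survival(2.0 * median, mu, sigma)  # chance of doubling the median
```

With these illustrative parameters, roughly one patient in five outlives twice the median, which is the kind of concrete 'positive outlier' statement the study found that people prefer to receive.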
Procedia PDF Downloads: 329
465 Molecular Characterization of Two Thermoplastic Biopolymer-Degrading Fungi Utilizing rRNA-Based Technology
Authors: Nuha Mansour Alhazmi, Magda Mohamed Aly, Fardus M. Bokhari, Ahmed Bahieldin, Sherif Edris
Abstract:
Out of 30 fungal isolates, 2 new isolates were shown to degrade poly-β-hydroxybutyrate (PHB). Enzyme assays for these isolates indicated the optimal environmental conditions required for the depolymerase enzyme to induce the highest level of biopolymer degradation. The two isolates were initially characterized at the morphological level as Trichoderma asperellum (isolate S1) and Aspergillus fumigatus (isolate S2) using standard approaches. The aim of the present study was to characterize these two isolates at the molecular level based on the highly diverged rRNA gene(s). Within this gene, two domains of the ribosomal large subunit (LSU), namely the internal transcribed spacer (ITS) and 26S, were utilized in the analysis. The first domain comprises the ITS1/5.8S/ITS2 regions (>500 bp), while the second domain comprises the D1/D2/D3 regions (>1200 bp). Sanger sequencing was conducted at Macrogen Inc. for the two isolates using primers ITS1/ITS4 for the first domain and primers LROR/LR7 for the second domain. Sizes of the first domain ranged between 594-602 bp for isolate S1 and 581-594 bp for isolate S2, while those of the second domain ranged between 1228-1238 bp for isolate S1 and 1156-1291 bp for isolate S2. BLAST analysis indicated 99% identity of the first domain of isolate S1 with T. asperellum isolates XP22 (ID: KX664456.1), CTCCSJ-G-HB40564 (ID: KY750349.1), CTCCSJ-F-ZY40590 (ID: KY750362.1) and TV (ID: KU341015.1). BLAST of the first domain of isolate S2 indicated 100% identity with A. fumigatus isolate YNCA0338 (ID: KP068684.1) and strain MEF-Cr-6 (ID: KU597198.1), and 99% identity with A. fumigatus isolate CCA101 (ID: KT877346.1) and strain CD1621 (ID: JX092088.1). Large numbers of other T. asperellum and A. fumigatus isolates and strains showed high levels of identity with isolates S1 and S2, respectively, based on the diversity of the first domain. BLAST of the second domain of isolate S1 indicated 99 and 100% identity with only two strains of T.
asperellum, namely TR 3 (ID: HM466685.1) and G (ID: KF723005.1), respectively. However, other Trichoderma species (e.g., T. atroviride, T. hamatum, T. deliquescens, T. harzianum) also showed high levels of identity. BLAST of the second domain of isolate S2 indicated 100% identity with A. fumigatus isolate YNCA0338 (ID: KP068684.1) and strain MEF-Cr-6 (ID: KU597198.1), and 99% identity with A. fumigatus isolate CCA101 (ID: KT877346.1) and strain CD1621 (ID: JX092088.1). Large numbers of other A. fumigatus isolates and strains showed high levels of identity with isolate S2. Overall, the results of molecular characterization based on rRNA diversity for the two isolates of T. asperellum and A. fumigatus matched those obtained by morphological characterization. In addition, the ITS domain proved to be more sensitive than the 26S domain for diversity profiling of fungi at the species level.
Keywords: Aspergillus fumigatus, Trichoderma asperellum, PHB, degradation, BLAST, ITS, 26S, rRNA
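The percent-identity figures quoted above are, in essence, the fraction of matching positions over an alignment. A toy version of that statistic, ignoring gaps and BLAST's local-alignment machinery; the sequences are invented:

```python
def percent_identity(a, b):
    """Fraction of matching positions between two aligned, equal-length
    sequences, as a percentage (gaps and local alignment are ignored)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Invented 8-bp fragments differing at the last position:
ident = percent_identity("ACGTACGT", "ACGTACGA")  # 7 of 8 positions match
```

In a real BLAST report the alignment length, gaps, and E-value also matter; this sketch only conveys what "99% identity" counts.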
Procedia PDF Downloads: 159
464 The Emergence of Memory at the Nanoscale
Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann
Abstract:
Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response, pointing to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction, and general common trends can thus be listed that become the rule rather than the exception, with contrasting signatures according to symmetry constraints, either built in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out optimal driving configurations.
The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of complex natural impedances of systems out of equilibrium and may help pave the way for new trends in the area of memory formation as well as in its technological applications.
Keywords: memories, memdevices, memristors, nonequilibrium states
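The claim that a finite relaxation time alone can yield a memristive response can be illustrated with a minimal rate equation: the conductance relaxes toward a drive-dependent equilibrium, so it lags a periodic voltage and the I-V curve opens into a history-dependent loop. The rate form and all parameter values below are illustrative assumptions, not the authors' published model:

```python
import math

def simulate_conductance(tau=0.5, g0=1.0, beta=0.4, f=1.0, steps=2000):
    """Euler-integrate dg/dt = -(g - g0 - beta*|V|) / tau under V(t) = sin(2*pi*f*t).
    The finite relaxation time tau makes the conductance g lag the drive,
    so the returned (V, I) pairs trace a history-dependent (memristive) loop."""
    dt = 1.0 / (f * steps)
    g, loop = g0, []
    for n in range(2 * steps):                 # two periods of the drive
        v = math.sin(2.0 * math.pi * f * n * dt)
        g += dt * (-(g - g0 - beta * abs(v)) / tau)
        loop.append((v, g * v))                # current I = g * V
    return loop

loop = simulate_conductance()
```

At equal voltages on the rising and falling sweeps the currents differ, which is the fingerprint of memory; shrinking tau toward zero closes the loop and recovers an ordinary resistor.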
Procedia PDF Downloads: 97
463 Leadership Lessons from Female Executives in the South African Oil Industry
Authors: Anthea Carol Nefdt
Abstract:
In this article, observations are drawn from a number of interviews conducted with female executives in the South African oil industry in 2017. Globally, the oil industry represents one of the most male-dominated organisational structures and cultures in the business world. Some of the remarkable women who hold upper management positions have not only emerged from the science and finance spheres (equally gendered fields) but have also navigated their way through an aggressive, patriarchal atmosphere of rivalry and competition. We examine various myths associated with the industry, such as the cowboy myth, the frontier ideology, and the queen bee syndrome directed at female executives. One of the themes to emerge from the interviews was an almost unanimous rejection of the 'glass ceiling' metaphor favoured by some feminists. The women of the oil industry instead described their rise to leadership positions as a strategic labyrinth of challenges and obstacles, in terms of both gender and race. This article aims to share the insights of women leaders in a complex industry through their reflections and a theoretical feminist lens. The study is located within the South African context and, given the country's historical legacy, an intersectional approach was optimal, allowing issues of race, gender, ethnicity, and language to emerge. A qualitative research methodology was employed, with a thematic interpretative analysis used to analyse and interpret the data. This methodology was chosen precisely because it acknowledges the experiences women have and places these experiences at the centre of the research. Multiple methods were used to recruit the research participants: the initial method was snowball sampling, and the second was purposive sampling.
In addition, semi-structured interviews gave the participants an opportunity to ask questions, add information, and discuss issues or aspects of the research area that interested them. One of the key objectives of the study was to investigate whether there is a difference in the leadership styles of men and women. Findings show that, despite the wealth of literature on the topic, some women do not perceive a significant difference between men's and women's leadership styles. Other respondents, however, felt that there were important differences in the experiences of male and female superiors, although they hesitated to generalise from these experiences. Further findings suggest that although the oil industry presents unique challenges to women as a gendered organisation, it also incorporates various progressive initiatives for their advancement.
Keywords: petroleum industry, gender, feminism, leadership
Procedia PDF Downloads: 157
462 How Whatsappization of the Chatbot Affects User Satisfaction, Trust, and Acceptance in a Drive-Sharing Task
Authors: Nirit Gavish, Rotem Halutz, Liad Neta
Abstract:
Nowadays, chatbots are gaining more and more attention due to the advent of large language models. An important consideration in chatbot design is how to create an interface that achieves high user satisfaction, trust, and acceptance. Since WhatsApp conversations sometimes substitute for face-to-face communication, we studied whether WhatsAppization of the chatbot (making the conversation resemble a WhatsApp conversation more closely) would improve user satisfaction, trust, and acceptance, or whether the opposite would occur due to the Uncanny Valley (UV) effect. The task was a drive-sharing task in which participants communicated with a textual chatbot via WhatsApp and could decide whether to join a ride to college with a driver suggested by the chatbot. WhatsAppization of the chatbot was done in two ways: through a dialog-style conversation (Dialog versus No Dialog), and by adding WhatsApp indicators, namely 'Last Seen', 'Connected', 'Read Receipts', and 'Typing…' (Indicators versus No Indicators). Our 120 participants were randomly assigned to one of the four groups of the 2-by-2 design, with 30 participants in each. They interacted with the WhatsApp chatbot and then filled out a questionnaire. As expected from the manipulation, the interaction with the chatbot was longer in the Dialog condition than in the No Dialog condition. This extra interaction, however, did not lead to higher acceptance; quite the opposite, since participants in the Dialog condition were less willing to implement the decision made at the end of the conversation with the chatbot and to continue the interaction with the driver they chose. The results are even more striking for the Indicators condition: on both the satisfaction and the trust measures, participants' ratings were lower in the Indicators condition than in the No Indicators condition.
Participants in the Indicators condition felt that the ride search process was harder to operate and slower (even though the actual interaction time was similar). They were less convinced that the chatbot suggested real trips, and they placed less trust in the person offering the ride and referred to by the chatbot. These effects were more evident for participants who preferred to share rides using WhatsApp than for participants who preferred chatbots for that purpose. Considering our findings, we can say that the WhatsAppization of the chatbot was detrimental. This holds for both WhatsAppization methods: making the conversation more of a dialog and adding WhatsApp indicators. For the chosen drive-sharing task, the results were, in addition to lower satisfaction, less trust in the chatbot's suggestion and even in the driver suggested by the chatbot, and lower willingness to actually undertake the suggested ride. The most problematic WhatsAppization method appears to have been the use of WhatsApp indicators during the interaction with the chatbot. The current study suggests that a conversation with an artificial agent should not imitate a WhatsApp conversation too closely. With the proliferation of WhatsApp use, the emotional and social aspects of face-to-face communication are moving to WhatsApp communication. Based on the current findings, it is possible that the UV effect also occurs in the WhatsAppization, and not only in the humanization, of a chatbot, with a similar feeling of eeriness, and that it is more pronounced for people who prefer WhatsApp over chatbots. The current research can serve as a starting point for studying the very interesting and important topic of chatbot WhatsAppization. More methods of WhatsAppization and other tasks could be the focus of further studies.
Keywords: chatbot, WhatsApp, humanization, Uncanny Valley, drive sharing
Procedia PDF Downloads: 48
461 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka
Authors: Sakshi Dhumale, Madhushree C., Amba Shetty
Abstract:
The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. 
These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability
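As a small illustration of the interpolation step, Ordinary Kriging can be sketched in a few lines of Python with NumPy. The exponential variogram and its parameters below are hypothetical placeholders; the study's fitted variogram model and parameters are not reported in the abstract.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, vrange=500.0):
    """Ordinary Kriging prediction at xy0 from observations (xy, z),
    using a hypothetical exponential variogram g(h) = sill*(1 - exp(-h/vrange))."""
    def gamma(h):
        return sill * (1.0 - np.exp(-h / vrange))

    n = len(z)
    # Pairwise variogram matrix, bordered with the Lagrange-multiplier row/column
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    # Right-hand side: variogram between each station and the target point
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]  # kriging weights (they sum to 1)
    return float(w @ z)
```

Repeating such predictions while subsampling the well network (say, 12 of the 121 wells versus all of them) and comparing errors at held-out wells mirrors the station-density experiment at a toy scale.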
Procedia PDF Downloads 57
460 Assessing the Effectiveness of Warehousing Facility Management: The Case of Mantrac Ghana Limited
Authors: Kuhorfah Emmanuel Mawuli
Abstract:
Generally, for firms to enhance the operational efficiency of their logistics, it is imperative to assess the logistics function. The cost of logistics conventionally represents a key consideration in the pricing decisions of firms, which suggests that cost efficiency in logistics can go a long way towards improving margins. Warehousing, a key part of logistics operations, has the prospect of influencing operational efficiency in logistics management as well as customer value, but this potential has often not been recognized. There is a paucity of research that evaluates the efficiency of warehouses; indeed, limited research has been conducted to examine potential barriers to effective warehousing management. Because of this, there is limited knowledge of how to address the obstacles associated with warehousing management. For warehousing management to become profitable, there is a need to integrate, balance, and manage the economic inputs and outputs of the entire warehouse operation, something that many firms tend to ignore. Management of warehousing is not solely related to storage functions. Instead, effective warehousing management requires such practices as the maximum possible mechanization and automation of operations, optimal use of the space and capacity of storage facilities, organization through a "continuous flow" of goods, a planned system of storage operations, and the safety of goods. For example, utilization of the warehouse floor space is a good way to evaluate the storing operation and the number of items picked per hour. In the setting of Mantrac Ghana, little knowledge regarding the management of the warehouses exists, and the researcher has personally observed many gaps in the management of the warehouse facilities in the case organization, Mantrac Ghana.
It is important, therefore, to assess the warehouse facility management of the case company with the objective of identifying weaknesses for improvement. The study employs an in-depth qualitative research approach using interviews as the mode of data collection. Respondents in the study mainly comprised warehouse facility managers in the studied company; a total of 10 participants were selected using a purposive sampling strategy. Results emanating from the study demonstrate limited warehousing effectiveness in the case company. Findings further reveal that the major barriers to effective warehousing facility management comprise poor layout, poor picking optimization, labour costs, and inaccurate orders. Policy implications of the study findings are finally outlined.
Keywords: assessing, warehousing, facility, management
Procedia PDF Downloads 65
459 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis
Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin
Abstract:
Molecular docking approaches are widely used for the design of new antibiotics and the modeling of the antibacterial activities of numerous ligands, which bind specifically to the active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have already been identified, and it is getting more and more difficult to design innovative antibacterial compounds to combat drug resistance. A promising way to overcome the drug resistance problem is the induction of a reversion of drug resistance by supplementary medicines to improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, modeling of drug resistance reversion is still in its infancy. In this work, we proposed an approach to the identification of compensatory genetic variants reducing the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach was based on an analysis of the population genetics of Mycobacterium tuberculosis and on the results of experimental modeling of the drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter is an iodine-containing nanomolecular complex that passed clinical trials and was approved as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized for antibiotic resistance, and their genomes were sequenced with the paired-end Illumina HiSeq 2000 technology.
A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in the series of isolates treated with FS-1 was registered, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate for the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of this fitness cost and the removal of the drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the Mtb population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed the identification of target proteins that could be influenced by supplementary drugs to increase the fitness cost of the drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for the computer-based design of new drugs to combat drug-resistant infections.
Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis
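The linkage disequilibrium screen mentioned above ultimately rests on pairwise association statistics. A minimal sketch of the classical r² measure for binary variant calls across a panel of isolates might look as follows (illustrative only; the study's genome-wide pipeline is not described in the abstract):

```python
import numpy as np

def ld_r2(a, b):
    """Squared-correlation (r^2) linkage-disequilibrium statistic between two
    binary variant vectors, where a[i] = 1 if isolate i carries the variant."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    pa, pb = a.mean(), b.mean()
    D = (a * b).mean() - pa * pb           # disequilibrium coefficient
    denom = pa * (1 - pa) * pb * (1 - pb)
    return float(D * D / denom) if denom > 0 else 0.0
```

Scanning r² between each canonical resistance mutation and every other segregating variant, then ranking the top-associated loci, is the general shape of such a search.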
Procedia PDF Downloads 263
458 Revisiting Historical Illustrations in the Age of Digital Anatomy Education
Authors: Julia Wimmers-Klick
Abstract:
In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches such as Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum: specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions is explored. These resources are viewed in their historical context, including their lack of diversity and the ethical concerns they raise, such as the use of illustrations from unethical sources like Pernkopf's atlas.
In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.
Keywords: human anatomy, historical illustrations, historical context, medical education
Procedia PDF Downloads 21
457 Common Caper (Capparis spinosa L.) From Oblivion and Neglect to the Interface of Medicinal Plants
Authors: Ahmad Alsheikh Kaddour
Abstract:
Herbal medicine has been a long-standing tradition in Arab countries since ancient times, owing to the region's breadth and moderate climate; it therefore possesses a vast natural and economic wealth of medicinal and aromatic herbs, which prompted the ancient Egyptians and Arabs to discover and exploit them. The economic importance of the plant does not stem from medicinal uses alone; it is a plant of high economic value for its various uses, especially in the food, cosmetic, and aromatic industries, and it also serves as an ornamental plant and for soil stabilization. The main objective of this research is to study the chemical changes that occur in the plant during the growth period, as well as the production of buds from plants that were previously considered unwanted. The research was carried out in 2021-2022 in the valley of Al-Shaflah (common caper), located in Qumhana village, 7 km north of Hama Governorate, Syria. The results showed a change in the percentage of chemical components in the plant parts. The protein content and the percentage of fatty substances in the fruits, and the oil content of the seeds, increased up to the harvesting of these plant parts, but the percentage of essential oils decreased as plant growth progressed, while the glycoside content improved with plant age. The buds produced are small, about 0.5×0.5 cm, the size preferred by commercial markets; they were harvested every 2-3 days in quantities ranging from 0.4 to 0.5 kg per cut per shrub for three-year-old shrubs, on average, for the years 2021-2022. The monthly production of a shrub is between 4 and 5 kg, and the productive period is approximately 4 months.
This means that the seasonal production of one plant is 16-20 kg, corresponding to 16-20 tons per year at a plant density of 1,000 shrubs per hectare, which is the optimum planting rate per unit area. Given that a kilogram of these buds sells for the equivalent of 1 US$, the annual output value of a locally produced hectare ranges from 16,000 US$ to 20,000 US$ for farmers. The results showed that it is possible to transform the cultivation of this plant from traditional random planting to typical field cultivation, with a plant density of 1,000-1,100 plants per hectare depending on the type of soil, to obtain a production of medicinal and nutritious buds. They also highlight the need to pay attention to this national wealth and invest it in the optimal manner, which can earn hard currency through export and support the national income.
Keywords: common caper, medicinal plants, propagation, medical, economic importance
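The yield and revenue figures quoted above follow from simple arithmetic, which can be checked directly (all input values are taken from the abstract; the 1 US$/kg price is the abstract's own assumption):

```python
# Back-of-envelope check of the abstract's caper-bud yield and revenue figures
monthly_yield_kg = (4, 5)     # kg of buds per shrub per month
season_months = 4             # productive period of roughly 4 months
shrubs_per_ha = 1000          # planting density per hectare
price_usd_per_kg = 1.0        # quoted market price of the buds

season_low = monthly_yield_kg[0] * season_months    # 16 kg per shrub
season_high = monthly_yield_kg[1] * season_months   # 20 kg per shrub
# 16-20 kg/shrub * 1,000 shrubs/ha = 16-20 t/ha, i.e. 16,000-20,000 US$/ha
revenue_low = season_low * shrubs_per_ha * price_usd_per_kg
revenue_high = season_high * shrubs_per_ha * price_usd_per_kg
```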
Procedia PDF Downloads 72
456 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob
Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez
Abstract:
The world economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of total energy supplies in 2020 should be derived from renewable resources. Biogas, a product of the anaerobic degradation of organic substrates, represents an attractive green alternative for meeting partial energy needs. Nowadays, the trend towards a circular economy model involves the efficient use of residues through their transformation from waste into a new resource. In this sense, the characteristics of agricultural residues (plentiful, renewable, and eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product of maize processing, representing 18% of total maize mass; its importance lies in the high production of this cereal (more than 1 × 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose, and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and subsequent biogas production. For optimal lignocellulose utilization and to enhance gas production in anaerobic digestion, materials are usually submitted to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound, and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. Enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corncob. Hydrolyses were carried out in 50 mM sodium-phosphate buffer, pH 6.0, with a solid:liquid proportion of 1:10 (w/v), at 150 rpm and 40 ºC, in darkness for 3 hours. Ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium-phosphate buffer, pH 6.0, with a solid:liquid proportion of 1:10 (w/v), to a power of 750 W for 1 minute.
In order to observe the effect of the combination of both pretreatments, some samples were initially sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of the corn cob pretreated by enzymatic hydrolysis was positive, achieving 290 L CH4 kg MV-1 (compared with 267 L CH4 kg MV-1 obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH4 kg MV-1 after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH4 kg MV-1). Consequently, the combination of both pretreatments improved biogas production from corn cob.
Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound
Procedia PDF Downloads 267
455 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory
Authors: Fouzia Brihmat
Abstract:
Economic planning models, which are used to build microgrids and distributed energy resources (DER), are the current norm for expressing such confidence. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES) based on Total Net Present Cost (TNPC) in an Algerian Saharan area, which has a high potential for solar irradiation and a production capacity of 1 GW/h. Lead-acid batteries have been around much longer and are easier to understand but have limited storage capacity; lithium-ion batteries last longer and are lighter but are generally more expensive. By combining the advantages of each chemistry, we produce cost-effective high-capacity battery banks that operate solely on AC coupling. The financial analysis in this research accounts for the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link, and the system is simulated at each time step over the course of the project's 20 years. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation; the trade-off is that, while the model is more accurate, the computation takes longer. We initially utilized the Optimizer to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario comprises a 760 kW Danvest generator, 200 kWh of lead-acid storage, and a somewhat lower COE of $0.309/kWh.
Different scenarios that account for fluctuations in the gasified biomass generator's production of electricity have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a separate recent study.
Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost
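For readers unfamiliar with the cost metrics, the Total Net Present Cost and the cost of energy (COE) can be sketched as follows. This is a simplified formulation in the spirit of HOMER's definitions; the software's actual cost model also handles salvage values, component replacements, and multi-year escalation, all omitted here.

```python
def total_npc(capital, annual_cost, rate, years):
    """Total net present cost: upfront capital plus recurring annual costs
    discounted at the given real discount rate (simplified; no salvage value)."""
    discount_sum = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    return capital + annual_cost * discount_sum

def cost_of_energy(tnpc, annual_energy_kwh, rate, years):
    """Levelized COE: TNPC annualized with the capital recovery factor (CRF),
    divided by the energy served per year."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return tnpc * crf / annual_energy_kwh
```

With such definitions, any increase in replacement or corrosion-related costs raises the TNPC and feeds directly into a higher COE.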
Procedia PDF Downloads 88
454 Kinetic Modelling of Fermented Probiotic Beverage from Enzymatically Extracted Annona Muricata Fruit
Authors: Calister Wingang Makebe, Wilson Ambindei Agwanande, Emmanuel Jong Nso, P. Nisha
Abstract:
Traditional liquid-state fermentation processes of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step towards future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (the Monod, Logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time, determined from cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing-sugar consumption, and lactic acid production. The values of the determination coefficient, R2, were 0.9946, 0.9913 and 0.9946, while the residual sum of squares, SSE, was 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, which was 0.2876 h-1, 0.1738 h-1 and 0.1589 h-1, and the substrate saturation constant, Ks, which was 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively.
With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic beverage.
Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics
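The batch kinetics described above (Monod growth with maintenance consumption, plus Luedeking-Piret product formation) can be sketched with a simple forward-Euler integration. The kinetic parameters below are the LC values reported in the abstract, while the initial conditions, simulation horizon, and step size are illustrative assumptions:

```python
def simulate_batch(mu_m, Ks, Yxs, ms, alpha, beta,
                   X0=0.1, S0=20.0, P0=0.0, t_end=24.0, dt=0.01):
    """Forward-Euler integration of batch fermentation:
    dX/dt = mu(S)*X with Monod mu(S) = mu_m*S/(Ks+S),
    dS/dt = -(1/Yxs)*dX/dt - ms*X   (growth + maintenance consumption),
    dP/dt = alpha*dX/dt + beta*X    (Luedeking-Piret)."""
    X, S, P = X0, S0, P0
    for _ in range(int(t_end / dt)):
        mu = mu_m * S / (Ks + S) if S > 0 else 0.0
        dX = mu * X
        dS = -dX / Yxs - ms * X
        dP = alpha * dX + beta * X
        X += dX * dt
        S = max(S + dS * dt, 0.0)   # substrate cannot go negative
        P += dP * dt
    return X, S, P

# LC parameters from the abstract (initial conditions are assumptions)
X, S, P = simulate_batch(mu_m=0.2876, Ks=9.0680, Yxs=50.7932,
                         ms=0.0128, alpha=1.0028, beta=0.0109)
```

Fitting such a model to measured time courses (as done in MATLAB in the study) amounts to choosing the parameters that minimize the SSE between simulated and observed X, S, and P.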
Procedia PDF Downloads 80
453 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition
Authors: Karolina Mazur, Stanislaw Kuciel
Abstract:
Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in strength properties and microstructure that take place in biodegradable polymer composites during long-term storage in a highly cooled environment (i.e. a freezer at -24 ºC) and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information on the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, when these materials are used to make products such as casings of hair dryers, laptops or mobile phones, it is safe to assume that without storage in optimal conditions their degradation time might last even several years. SEM imaging and an assessment of strength properties (tensile, bending, and impact testing) were carried out, and the density and water sorption of two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain – Rehofix MK100, Rettenmaier & Sohne) at 10 and 20% by mass were determined. The biocomposites had been stored at a temperature of -24 ºC for 4 years. To establish the changes in strength properties and microstructure after such long storage, the results were compared with the results of the same tests carried out 4 years earlier. The results show a significant change in the manner of fracture: from ductile, with a developed fracture surface, for the PHA composite with corncob grain when tensile testing was performed directly after injection moulding, to a more brittle state after 4 years of storage. This is confirmed by the strength tests, where a decrease in deformation at the point of fracture is observed.
The research showed that there is a way of storing medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is maintained. The decrease in mechanical properties found for PLA in tensile and bending tests was less than 10% of the tensile strength, while the modulus of elasticity and the deformation at fracture rose slightly, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease in deformation at fracture is significant, reaching as much as 40%, which suggests that its degradation rate is higher than that of PLA. In both cases, the addition of natural particles only slightly increases biodegradation.
Keywords: biocomposites, PLA, PHA, storage
Procedia PDF Downloads 265
452 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor in many economic time series. Some variables may contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. It is therefore very important to eliminate seasonality in seasonal macroeconomic data. There are several methods to eliminate the impacts of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which are modelled using seasonal dummies; but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and it is not suitable to use them for modeling such seasonally non-stationary series. Instead, it is necessary to take the seasonal difference if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, and this leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
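For orientation, the auxiliary regression at the core of the quarterly HEGY test can be sketched as follows. This is a bare-bones version with no deterministic terms and no augmentation lags, and it does not include the paper's Bridge-estimator extension:

```python
import numpy as np

def hegy_regressors(y):
    """Auxiliary variables for the quarterly HEGY regression
    dy4_t = pi1*y1_{t-1} + pi2*y2_{t-1} + pi3*y3_{t-2} + pi4*y3_{t-1} + e_t
    (no deterministic terms or augmentation lags in this sketch)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(4, len(y))
    y1 = y[t - 1] + y[t - 2] + y[t - 3] + y[t - 4]      # zero frequency
    y2 = -(y[t - 1] - y[t - 2] + y[t - 3] - y[t - 4])   # semiannual (Nyquist)
    y3_l1 = -(y[t - 1] - y[t - 3])                      # annual frequency, lag 1
    y3_l2 = -(y[t - 2] - y[t - 4])                      # annual frequency, lag 2
    dy4 = y[t] - y[t - 4]                               # seasonal difference
    return dy4, np.column_stack([y1, y2, y3_l2, y3_l1])

# Demo: a seasonal random walk y_t = y_{t-4} + e_t should leave all pi's near 0
rng = np.random.default_rng(1)
y = np.zeros(120)
for t in range(4, 120):
    y[t] = y[t - 4] + rng.standard_normal()
dy4, X = hegy_regressors(y)
pi_hat = np.linalg.lstsq(X, dy4, rcond=None)[0]   # OLS estimates of pi1..pi4
```

The unit root tests then amount to t- and F-type statistics on the estimated pi coefficients; the paper's contribution is to select the augmentation lags and the stationary/nonstationary specification jointly via a Bridge penalty instead of the usual two-step procedure.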
Procedia PDF Downloads 340
451 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles
Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan
Abstract:
Electric vehicles (EVs) are becoming increasingly popular as a sustainable alternative to traditional combustion engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows for real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings of this research offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that the integration of model predictive control and energy consumption modeling with neural networks had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives.
The neural network-based energy consumption modeling provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential for green wave control to revolutionize urban transportation by enhancing the performance of electric vehicles and contributing to a more sustainable and efficient mobility ecosystem.
Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks
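In its very simplest form, a green wave strategy reduces to choosing a cruise speed whose arrival times at successive signals all fall within green phases. The sketch below uses an assumed common signal cycle, a crude v² energy proxy, and a brute-force search over candidate speeds; the paper's MPC formulation with neural-network energy prediction is considerably richer:

```python
def pick_green_wave_speed(signal_distances_m, cycle_s=60.0, green_s=30.0,
                          speeds_mps=range(8, 19)):
    """Return the candidate cruise speed (m/s) that passes every signal on
    green while minimizing a crude v^2 energy proxy, or None if none fits.
    Assumes every signal is green whenever (arrival time mod cycle) < green_s."""
    best = None
    for v in speeds_mps:
        on_green = all((d / v) % cycle_s < green_s for d in signal_distances_m)
        if on_green:
            energy = v ** 2  # aerodynamic-drag-like proxy per unit distance
            if best is None or energy < best[1]:
                best = (v, energy)
    return None if best is None else best[0]
```

An MPC controller would re-solve a problem of this flavour at every control step over a receding horizon, with a learned energy model in place of the v² proxy and constraints on acceleration and travel time.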
Procedia PDF Downloads 54
450 Just a Heads Up: Approach to Head Shape Abnormalities
Authors: Noreen Pulte
Abstract:
Prior to the 'Back to Sleep' Campaign in 1992, 1 of every 300 infants seen by Advanced Practice Providers had plagiocephaly. Insufficient attention is given to plagiocephaly and brachycephaly diagnoses in practice and in pediatric education. In this talk, Nurse Practitioners and Pediatric Providers will be able to: (1) identify red flags associated with head shape abnormalities, (2) learn techniques they can teach parents to prevent head shape abnormalities, and (3) differentiate between plagiocephaly, brachycephaly, and craniosynostosis. The presenter is a Primary Care Pediatric Nurse Practitioner at Ann & Robert H. Lurie Children's Hospital of Chicago and the primary provider for its head shape abnormality clinics. She will help participants translate key information obtained from the birth history, review of systems, and developmental history to understand the risk factors for head shape abnormalities and the progression of deformities. Synostotic and non-synostotic head shapes will be explained to help participants differentiate plagiocephaly and brachycephaly from synostotic head shapes. This knowledge is critical for the prompt referral of infants with craniosynostosis for surgical evaluation and correction; rapid referral can direct the patient to a minimally invasive surgical procedure rather than a craniectomy. For plagiocephaly and brachycephaly, a timely referral can also lead to a physical therapy referral where needed, which treats torticollis and helps improve head shape. A well-timed referral to a head shape clinic can eliminate the need for a helmet and/or minimize the time spent in one. Practitioners will learn the importance of obtaining head measurements with calipers. The presenter will explain the head calculations and how they are interpreted to determine the severity of head shape abnormalities; severity defines the treatment plan.
Participants will learn when to refer patients to a head shape abnormality clinic and techniques they should teach parents to perform while waiting for the referral appointment. The purpose, mechanics, and logistics of helmet therapy, including the optimal time to initiate helmet therapy, recommended wear-time, and tips for compliance, will be described. Case scenarios will be incorporated into the presentation to support learning, and the salient points of the case studies will be explained and discussed. Practitioners will be able to immediately translate the knowledge and skills gained in this presentation into their clinical practice.
Keywords: plagiocephaly, brachycephaly, craniosynostosis, red flags
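The caliper-based "head calculations" mentioned above are commonly expressed as two indices, the cephalic index and the cranial vault asymmetry index (CVAI). The sketch below shows how such indices are computed; the formulas follow their widely published definitions, and the example measurements are invented for illustration, not the presenter's clinical data or severity cutoffs.

```python
# Illustrative sketch of two common head-shape indices computed from caliper
# measurements. Formulas follow their standard published definitions; the
# example values are hypothetical, not clinical guidance.

def cephalic_index(width_mm: float, length_mm: float) -> float:
    """Cephalic index: head width / head length * 100 (brachycephaly screen)."""
    return width_mm / length_mm * 100

def cvai(diag_long_mm: float, diag_short_mm: float) -> float:
    """Cranial vault asymmetry index: diagonal difference as a percentage
    of the longer diagonal (plagiocephaly screen)."""
    return (diag_long_mm - diag_short_mm) / diag_long_mm * 100

print(round(cephalic_index(140, 150), 1))  # 93.3
print(round(cvai(150, 140), 1))            # 6.7
```

In practice, the resulting index is compared against clinic-specific severity bands, which in turn define the treatment plan, as the talk describes.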
Procedia PDF Downloads 95
449 Probing Scientific Literature Metadata in Search for Climate Services in African Cities
Authors: Zohra Mhedhbi, Meheret Gaston, Sinda Haoues-Jouve, Julia Hidalgo, Pierre Mazzega
Abstract:
In the current context of climate change, supporting national and local stakeholders to make climate-smart decisions is necessary but still underdeveloped in many countries. To overcome this problem, the Global Framework for Climate Services (GFCS), implemented under the aegis of the United Nations in 2012, has initiated many programs in different countries. The GFCS contributes to the development of Climate Services, an instrument based on the production and transfer of scientific climate knowledge for specific users such as citizens, urban planning actors, or agricultural professionals. As cities concentrate economic, social, and environmental issues that make them more vulnerable to climate change, the New Urban Agenda (NUA), adopted at Habitat III in October 2016, highlights the importance of paying particular attention to disaster risk management, climate and environmental sustainability, and urban resilience. In order to support the implementation of the NUA, the World Meteorological Organization (WMO) has identified the urban dimension as one of its priorities and has proposed a new tool, the Integrated Urban Services (IUS), for more sustainable and resilient cities. In countries of the global South, there is a lack of development of climate services, which can be partially explained by problems related to their economic financing. In addition, it is often difficult to make climate change a priority in urban planning, given the more traditional urban challenges these countries face, such as massive poverty and high population growth. Climate services and Integrated Urban Services, particularly in African cities, are expected to contribute to the sustainable development of cities.
These tools will help promote the acquisition of meteorological and socio-ecological data on urban transformations, encourage coordination between national or local institutions providing various sectoral urban services, and should contribute to the achievement of the objectives defined by the United Nations Framework Convention on Climate Change (UNFCCC), the Paris Agreement, and the Sustainable Development Goals. To assess the state of the art on these various points, the Web of Science metadatabase is queried. A query combining the keywords "climate*" and "urban*" identifies more than 24,000 articles, the source of more than 40,000 distinct keywords (including synonyms and acronyms) that finely mesh the conceptual field of research. Requiring the occurrence of one or more names of African countries or of the 514 African cities of more than 100,000 inhabitants reduces this base to a smaller corpus of about 1,410 articles (2,990 keywords); 41 countries and 136 African cities are cited. The lexicometric analysis of the article metadata, together with the analysis of structural indicators (various centralities) of the networks induced by the co-occurrence of expressions related more specifically to climate services, shows the development potential of these services, identifies the gaps that remain to be filled for their implementation, and allows comparison of the diversity of national and regional situations with regard to these services.
Keywords: African cities, climate change, climate services, integrated urban services, lexicometry, networks, urban planning, web of science
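The co-occurrence networks and centrality indicators described above can be sketched with a few lines of standard-library Python. The keyword lists below are invented stand-ins for the Web of Science metadata the study actually queried; the real analysis operates on thousands of articles and uses richer centrality measures.

```python
# Sketch of keyword co-occurrence counting and a simple degree centrality,
# using made-up article keyword lists (the study queried Web of Science).
from collections import Counter
from itertools import combinations

articles = [
    ["climate services", "urban planning", "africa"],
    ["climate services", "resilience", "urban planning"],
    ["resilience", "africa", "urban heat island"],
]

# Count keyword pairs that appear together in an article's keyword list.
cooc = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

# Degree centrality here: number of distinct co-occurring partners per keyword.
degree = Counter()
for a, b in cooc:
    degree[a] += 1
    degree[b] += 1

print(cooc[("climate services", "urban planning")])  # 2
print(degree["africa"])                              # 4
```

On the full corpus, such pair counts define a weighted network on which the "various centralities" mentioned in the abstract can be computed.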
Procedia PDF Downloads 195
448 Profiling of the Cell-Cycle Related Genes in Response to Efavirenz, a Non-Nucleoside Reverse Transcriptase Inhibitor in Human Lung Cancer
Authors: Rahaba Marima, Clement Penny
Abstract:
Health-related quality of life (HRQoL) for HIV-positive patients has improved since the introduction of highly active antiretroviral treatment (HAART). However, in the present HAART era, HIV co-morbidities such as lung cancer, a non-AIDS-defining (NAIDS) cancer, have been documented to be on the rise. Under normal physiological conditions, cells grow, repair, and proliferate through the cell-cycle, as cellular homeostasis is important in the maintenance and proper regulation of tissues and organs. Conversely, deregulation of the cell-cycle is a hallmark of cancer, including lung cancer. The association between lung cancer and the use of HAART components such as Efavirenz (EFV) is poorly understood. This study aimed at elucidating the effects of EFV on cell-cycle gene expression in lung cancer. For this purpose, a human cell-cycle gene array composed of 84 genes was evaluated on both normal lung fibroblast (MRC-5) cells and lung adenocarcinoma (A549) cells, in response to 13 µM EFV or 0.01% vehicle. A ±2-fold change (up or down) was used as the basis of target selection, with p < 0.05. Additionally, RT-qPCR was done to validate the gene array results. Next, the in silico bioinformatics tools Search Tool for the Retrieval of Interacting Genes/Proteins (STRING), Reactome, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database, and Ingenuity Pathway Analysis (IPA) were used for gene-gene interaction studies as well as to map the molecular and biological pathways influenced by the identified targets. Interestingly, DNA damage response (DDR) pathway genes such as p53, Ataxia telangiectasia mutated and Rad3 related (ATR), Growth arrest and DNA damage inducible alpha (GADD45A), HUS1 checkpoint homolog (HUS1), and RAD genes were shown to be upregulated following EFV treatment, as revealed by STRING analysis.
Additionally, functional enrichment analysis with the KEGG pathway database revealed that most of the differentially expressed gene targets function at cell-cycle checkpoints, such as p21, Aurora kinase B (AURKB), and Mitotic Arrest Deficient-Like 2 (MAD2L2). Core analysis by IPA revealed that p53 downstream targets such as survivin, Bcl2, and cyclin/cyclin-dependent kinase (CDK) complexes are down-regulated following exposure to EFV. Furthermore, Reactome analysis showed a significant increase in cellular stress response genes, DNA repair genes, and apoptosis genes, as observed in both normal and cancerous cells. These findings implicate genotoxic effects of EFV on lung cells, provoking the DDR pathway. Notably, constitutive expression of this pathway (DDR) often leads to uncontrolled cell proliferation and eventually tumourigenesis, which could be attributable to the effect of HAART components (such as EFV) on human cancers. Targeting the cell-cycle and its regulation offers a promising therapeutic avenue for potential HAART-associated carcinogenesis, particularly lung cancer.
Keywords: cell-cycle, DNA damage response, Efavirenz, lung cancer
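The target-selection rule stated above (fold change of at least ±2 with p < 0.05) can be sketched as a simple filter. The gene names and values below are invented for illustration; they are not the study's array results.

```python
# Minimal sketch of the target-selection rule: keep genes with |fold change| >= 2
# and p < 0.05. All values here are hypothetical, not the study's data.

def select_targets(results, fc_cut=2.0, p_cut=0.05):
    """Keep genes whose fold change is >= fc_cut up or down and p < p_cut."""
    return [g for g, (fold, p) in results.items()
            if (fold >= fc_cut or fold <= -fc_cut) and p < p_cut]

array_results = {          # gene: (fold change, p-value) -- hypothetical
    "GADD45A": (3.1, 0.01),
    "ATR":     (2.4, 0.03),
    "CCNB1":   (-2.2, 0.04),
    "ACTB":    (1.1, 0.60),   # below the fold-change cut
    "MAD2L2":  (2.8, 0.20),   # fails the p-value cut
}
print(sorted(select_targets(array_results)))  # ['ATR', 'CCNB1', 'GADD45A']
```

Targets passing such a filter are then the input for the STRING, KEGG, Reactome, and IPA analyses the abstract describes.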
Procedia PDF Downloads 156
447 Nutritional Status of Middle School Students and Their Selected Eating Behaviours
Authors: K. Larysz, E. Grochowska-Niedworok, M. Kardas, K. Brukalo, B. Calyniuk, R. Polaniak
Abstract:
Eating behaviours and habits are among the main factors affecting health. Abnormal nutritional status is a growing problem related to nutritional errors, and the number of adolescents presenting excess body weight is also rising. The body's demand for all nutrients increases during the period of intensive development, i.e., puberty. A varied, well-balanced diet and the elimination of unhealthy habits are two of the key factors that contribute to the proper development of a young body. The aim of the study was to assess the nutritional status and selected eating behaviours/habits of adolescents attending middle school. An original questionnaire comprising 24 questions was administered, and a total of 401 correctly completed questionnaires qualified for the assessment. Body mass index (BMI) was calculated. Furthermore, the frequency of breakfast consumption, the number of meals per day, the types of snacks and sweetened beverages, and the frequency of consuming fruit and vegetables, dairy products, and fast food were assessed. The obtained results were analysed statistically. The study showed that malnutrition was more of a problem than overweight or obesity among middle school students. More than 71% of middle school students have breakfast, whereas almost 30% skip this meal. Up to 57.6% of respondents most often consume sweets at school, and 37% of adolescents consume sweetened beverages daily or almost every day. Most respondents consume an optimal number of meals daily, but only 24.7% consume fruit and vegetables more than once daily. The majority of respondents (49.4%) declared that they consumed fast food several times a month. A satisfactory frequency of dairy product consumption was reported by 32.7% of middle school students. Conclusions of our study: 1. Malnutrition is more of a problem than overweight or obesity among middle school students, who consume excessive amounts of sweets, sweetened beverages, and fast food. 2. The consumption of fruit and vegetables was too low in the study group, and the intake of dairy products was also low in some cases. 3. A statistically significant correlation was found between the frequency of fast food consumption and the intake of sweetened beverages, and a low correlation was found between nutritional status and the number of meals per day: the number of meals consumed decreased with increasing nutritional status.
Keywords: adolescent, malnutrition, nutrition, nutritional status, obesity
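The BMI screening mentioned above follows the standard weight/height² formula. A minimal sketch is below; note that for adolescents clinical practice uses age- and sex-specific BMI percentiles, so the adult-style category cutoffs here are placeholders for illustration, not the study's criteria.

```python
# Sketch of the BMI calculation used for nutritional-status screening.
# Category cutoffs below are adult-style placeholders; adolescent screening
# uses age- and sex-specific percentile charts.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b: float) -> str:  # placeholder cutoffs, not the study's criteria
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(45.0, 1.60)              # hypothetical student: 45 kg, 1.60 m
print(round(b, 2), category(b))  # 17.58 underweight
```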
Procedia PDF Downloads 135
446 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches, and computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, and external metabolite fluxes are represented by a dynamic system with Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming, and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity-versus-yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
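The two-stage idea above (grow first, then divert flux to product) can be illustrated with a deliberately tiny toy model. Everything in this sketch is invented for illustration: the Monod/production kinetics, yields, and the single "switch time" control stand in for the genome-scale flux balance model and dynamic-programming optimization the paper actually uses.

```python
# Toy two-stage batch fermentation: Euler integration of invented kinetics,
# scanning the stage-switch time to maximize volumetric productivity.
# This is a didactic stand-in, not the paper's dynamic FBA method.

def simulate(t_switch: float, t_end: float = 40.0, dt: float = 0.01) -> float:
    X, S, P = 0.05, 10.0, 0.0   # biomass, substrate, product (g/L)
    t = 0.0
    while t < t_end and S > 1e-6:
        if t < t_switch:        # stage 1: direct flux to biomass growth
            mu = 0.4 * S / (S + 0.5)       # Monod-type growth rate (1/h)
            X += mu * X * dt
            S -= (mu / 0.5) * X * dt       # assumed biomass yield 0.5 g/g
        else:                   # stage 2: divert flux to product formation
            q = 0.6 * S / (S + 0.5)        # specific production rate (1/h)
            P += q * X * dt
            S -= q * X * dt                # assumed product yield ~1 g/g
        t += dt
    return P / t                # volumetric productivity (g/L/h)

# Scan a handful of switch times for the best productivity.
best_prod, best_ts = max((simulate(ts), ts) for ts in [2, 4, 6, 8, 10, 12])
print(f"best switch time: {best_ts} h, productivity {best_prod:.3f} g/L/h")
```

Even this toy shows the qualitative point of the abstract: too early a switch leaves too little biomass, too late a switch wastes substrate on growth, so an intermediate switch time maximizes productivity.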
Procedia PDF Downloads 215
445 Exploring Mothers' Knowledge and Experiences of Attachment in the First 1000 Days of Their Child's Life
Authors: Athena Pedro, Zandile Batweni, Laura Bradfield, Michael Dare, Ashley Nyman
Abstract:
The rapid growth and development of an infant in the first 1000 days of life means that this period provides the greatest opportunity for a positive developmental impact on a child’s life socially, emotionally, cognitively, and physically. Current research focuses on children in the first 1000 days, but there is a lack of research on, and understanding of, mothers and their experiences during this crucial period. Thus, it is imperative that more research is done to better understand the experiences of mothers during the first 1000 days of their child’s life, as well as to gain more insight into mothers’ knowledge regarding this period. The first 1000 days of life, from conception to two years, is a critical period, and the child’s attachment to his or her mother or primary caregiver during this period is crucial for a multitude of future outcomes. The aim of this study was to explore mothers’ understanding and experience of the first 1000 days of their child’s life, specifically looking at attachment in the context of Bowlby and Ainsworth's attachment theory. Using a qualitative methodological framework, data were collected through semi-structured individual interviews with 12 first-time mothers from low-income communities in Cape Town. Thematic analysis of the data revealed that mothers articulated the importance of attachment within the first 1000 days of life and shared experiences of how they bond and form attachment with their babies. Furthermore, these mothers expressed their belief in the long-term effects of early attachment and responsive, positive parenting, as well as the lasting effects of poor attachment and non-responsive parenting. This study has implications for new mothers and healthcare staff working with mothers of newborn babies, as well as for future contextual research.
By gaining insight into mothers’ experiences, policies and intervention efforts can be formulated to assist mothers during this time, ultimately promoting the healthy development of the nation’s children and the future adult generation. If researchers can also understand the extent of mothers’ general knowledge regarding the first 1000 days and attachment, there will be a better understanding of where gaps in knowledge may lie, and recommendations for effective and relevant intervention efforts can be provided. These interventions may increase the knowledge and awareness of new mothers and of health care workers at clinics and other service providers, creating a strong impact on positive outcomes. Improving the developmental trajectory of many young babies in this way allows them the opportunity to pursue optimal development and reach their full potential.
Keywords: attachment, experience, first 1000 days, knowledge, mothers
Procedia PDF Downloads 178
444 The Algerian Experience in Developing Higher Education in the Country in Light of Modern Technology: Challenges and Prospects
Authors: Mohammed Messaoudi
Abstract:
The higher education sector in Algeria has witnessed a remarkable transformation in recent years, integrating its institutions into the modern technological environment and harnessing every appropriate mechanism to raise the level of education and training. Observers and interested parties agree that it is necessary for the Algerian university to enter this field, especially given the efforts to employ modern technology in the sector and to encourage investment in it, in addition to the state’s keenness to build a path for benefiting from modern technology and to encourage talent, amid a reality that carries many aspirations and challenges: achieving openness to the new digital environment and keeping pace with international universities. Higher education is one of the engines of development for societies, as it is a vital field for the transfer of knowledge and scientific expertise, and the university sits at the top of the comprehensive educational system across disciplines. A sound educational process rests on the integration of three basic axes (teaching, research, and relevant, efficient outputs), according to a clear strategy that monitors the advancement of academic work and develops its future directions to achieve progress in this field. The Algerian university is a service institution that seeks the optimal mechanisms for keeping pace with the changes of the times; it has become necessary for the university to enter the technological space and thereby ensure the quality of its education, dedicating a structure that matches the challenges the sector faces, amid unremitting efforts to develop its capabilities.
The sector has sought to harness communication and information technology and to achieve a transformation at the level of higher education, in what is called higher education technology. The conceptual framework of information and communication technology in Algerian higher education institutions is determined by organizational factors, institutional factors, characteristics of professors, characteristics of students, and the outcomes of the educational process, with a relentless pursuit of positive interaction between these axes, as they are the basic components on which the success of higher education and the achievement of its goals rest.
Keywords: information and communication technology, Algerian university, scientific and cognitive development, challenges
Procedia PDF Downloads 83
443 Improving School Design through Diverse Stakeholder Participation in the Programming Phase
Authors: Doris C. C. K. Kowaltowski, Marcella S. Deliberador
Abstract:
The architectural design process, in general, is becoming more complex as new technical, social, environmental, and economic requirements are imposed. For school buildings, this scenario also holds. The quality of a school building depends on known design criteria and professional knowledge, as well as feedback from building performance assessments. To attain high-performance school buildings, a design process should include a multidisciplinary team, through an integrated process, to ensure that the various specialists contribute to design solutions at an early stage. The participation of stakeholders is of special importance at the programming phase, when the search for the most appropriate design solutions is underway. A multidisciplinary team should comprise specialists in education, design professionals, and consultants in fields such as environmental comfort and psychology, sustainability, and safety and security, as well as administrators, public officials, and neighbourhood representatives. Users, or potential users (teachers, parents, students, school officials, and staff), should be involved. User expectations must be guided, however, toward a proper understanding of how design responds to needs, to avoid disappointment. In this context, appropriate tools should be introduced to organize such diverse participants and ensure a rich, focused response to needs and a productive outcome of programming sessions. In this paper, the different stakeholders in a school design process are discussed in relation to their specific contributions, and a tool in the form of a card game is described to structure design debates and ensure a comprehensive decision-making process. The game is based on design patterns for school architecture as found in the literature and is adapted to a specific reality: state-run public schools in São Paulo, Brazil.
In this state, school buildings are managed by a foundation, the Fundação para o Desenvolvimento da Educação (FDE), which supervises new designs and is responsible for the maintenance of about 5,000 schools. The design process in this context was characterised, and a recommendation was made to improve the programming phase. Card games can create a common environment to which all participants can relate and can therefore contribute to briefing debates on an equal footing. The cards of the game described here represent essential school design themes found in the literature. The tool was tested with stakeholder groups and with architecture students; in both situations, the game proved to be an efficient tool for stimulating school design discussions and aiding the elaboration of a rich, focused, and thoughtful architectural program for a given demand. The game organizes the debates, and all participants spontaneously contribute, each in their own field of expertise, to the decision-making process. Although the game was based on a local school design process, it shows potential for other contexts, because its content rests on known facts, needs, and concepts of school design, which are global. A structured briefing phase with diverse stakeholder participation can enrich the design process and consequently improve the quality of school buildings.
Keywords: architectural program, design process, school building design, stakeholder
Procedia PDF Downloads 405
442 Intensity Modulated Radiotherapy of Nasopharyngeal Carcinomas: Patterns of Loco Regional Relapse
Authors: Omar Nouri, Wafa Mnejja, Nejla Fourati, Fatma Dhouib, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Jamel Daoud
Abstract:
Background and objective: Induction chemotherapy (IC) followed by concomitant chemoradiotherapy with the intensity modulated radiation (IMRT) technique is currently the recommended treatment modality for locally advanced nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the prognostic factors predicting loco regional relapse with this new treatment protocol. Patients and methods: A retrospective study of 52 patients with NPC treated between June 2016 and July 2019 was conducted. All patients received IC according to the protocol of the Head and Neck Radiotherapy Oncology Group (Gortec) NPC 2006 (3 TPF courses), followed by concomitant chemoradiotherapy with weekly cisplatin (40 mg/m²). Patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. Median age was 49 years (19-69), with a sex ratio of 3.3. Forty-five tumors (86.5%) were classified as stage III-IV according to the 2017 UICC TNM classification. Loco regional relapse (LRR) was defined as a local and/or regional progression occurring at least 6 months after the end of treatment. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare anatomical, clinical, and therapeutic factors that may influence loco regional relapse-free survival (LRFS). Results: After a median follow-up of 42 months, 6 patients (11.5%) experienced LRR. A metastatic relapse was also noted for 3 of these patients (50%). Target volume coverage was optimal for all patients with LRR. Four relapses (66.6%) were in the high-risk target volume, and two (33.3%) were borderline. Three-year LRFS was 85.9%.
Four factors predicted loco regional relapse: a histologic type other than undifferentiated carcinoma (UCNT) (p=0.027), a macroscopic pre-chemotherapy tumor volume exceeding 100 cm³ (p=0.005), a reduction in IC doses exceeding 20% (p=0.016), and a total cumulative cisplatin dose of less than 380 mg/m² (p=0.034). TNM classification and response to IC did not impact loco regional relapse. Conclusion: For nasopharyngeal carcinoma, tumors with a high initial volume and/or a histologic type other than UCNT have a higher risk of loco regional relapse. They therefore require a more aggressive therapeutic approach and a suitable monitoring protocol.
Keywords: loco regional relapse, intensity modulated radiotherapy, nasopharyngeal carcinoma, prognostic factors
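The Kaplan-Meier survival analysis named above can be sketched in a few lines of pure Python. The estimator below follows the standard product-limit definition; the follow-up times and event flags are invented for illustration, not the study's patient data.

```python
# Minimal Kaplan-Meier (product-limit) estimator sketch; the data below are
# hypothetical follow-up times (months) with 1 = relapse, 0 = censored.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time, handling censoring."""
    data = sorted(zip(times, events))   # censored ties sort after events
    s, curve = 1.0, []
    at_risk = len(data)
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t and e == 1)  # events at t
        m = sum(1 for tt, _ in data if tt == t)             # leaving risk set
        if d > 0:
            s *= (at_risk - d) / at_risk
            curve.append((t, s))
        at_risk -= m
        i += m
    return curve

times  = [6, 10, 10, 14, 20, 30, 36, 42]
events = [1,  1,  0,  1,  0,  1,  0,  0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

The log-rank comparison of prognostic factors then tests whether two such curves, one per factor level, differ more than chance would allow.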
Procedia PDF Downloads 128