Search results for: urban harvest approach
1488 Ionic Liquids-Polymer Nanoparticle Systems as Breakthrough Tools to Improve Leprosy Treatment
Authors: A. Julio, R. Caparica, S. Costa Lima, S. Reis, J. G. Costa, P. Fonte, T. Santos De Almeida
Abstract:
Mycobacterium leprae causes a chronic and infectious disease called leprosy, whose most common symptoms are peripheral neuropathy and deformation of several parts of the body. The pharmacological treatment of leprosy is a combined therapy with three different drugs: rifampicin, clofazimine, and dapsone. However, clofazimine and dapsone have poor solubility in water and low bioavailability, so it is crucial to develop strategies to overcome such drawbacks. The use of ionic liquids (ILs) may be a strategy to overcome the low solubility, since they have been used as solubility promoters. ILs are salts, liquid below 100 ºC or even at room temperature, that may be placed in water, oils, or hydroalcoholic solutions. Another approach may be the encapsulation of drugs into polymeric nanoparticles, which improves their bioavailability. In this study, two different classes of ILs, imidazole- and choline-based, were used as solubility enhancers of the poorly soluble antileprotic drugs. After the solubility studies, IL-PLGA hybrid nanoparticle systems were developed to deliver such drugs. First, the solubility of clofazimine and dapsone was studied in water and in water:IL mixtures, at IL concentrations where cell viability is maintained, at room temperature for 72 hours. For both drugs, an improvement in solubility was observed, and [Cho][Phe] proved to be the best solubility enhancer, especially for clofazimine, for which a 10-fold improvement was observed. Nanoparticles were then produced, with a polymeric matrix of poly(lactic-co-glycolic acid) (PLGA) 75:25, by a modified solvent-evaporation W/O/W double emulsion technique in the presence of [Cho][Phe]. The inner phase was an aqueous solution of 0.2% (v/v) of the above IL with each drug at the maximum solubility determined in the previous study. After production, the hybrid nanosystem was physicochemically characterized. The produced nanoparticles had diameters of around 580 nm and 640 nm for clofazimine and dapsone, respectively. The polydispersity index was in agreement with the value recommended for drug delivery systems (around 0.3). The association efficiency (AE) of the developed hybrid nanosystems was promising for both drugs given their low solubility (64.0 ± 4.0% for clofazimine and 58.6 ± 10.0% for dapsone), which suggests the capacity of these delivery systems to enhance the bioavailability and loading of clofazimine and dapsone. Overall, the study's achievements may improve patients' quality of life, since they may allow a change in the therapeutic scheme, no longer requiring such high drug doses to obtain a therapeutic effect. The authors would like to thank Fundação para a Ciência e a Tecnologia, Portugal (FCT/MCTES (PIDDAC), UID/DTP/04567/2016-CBIOS/PRUID/BI2/2018).
Keywords: ionic liquids, ionic liquids-PLGA nanoparticles hybrid systems, leprosy treatment, solubility
Procedia PDF Downloads 153
1487 Neuromyelitis Optica Area Postrema Syndrome (NMOSD-APS) in a Fifteen-Year-Old Girl: A Case Report
Authors: Merilin Ivanova Ivanova, Kalin Dimitrov Atanasov, Stefan Petrov Enchev
Abstract:
Background: Neuromyelitis optica spectrum disorder (NMOSD), also known as Devic's disease, is a relapsing demyelinating autoimmune inflammatory disorder of the central nervous system associated with anti-aquaporin 4 (AQP4) antibodies that can manifest with devastating secondary neurological deficits. Most commonly affected are the optic nerves and the spinal cord; clinically, this often presents as optic neuritis (loss of vision), transverse myelitis (weakness or paralysis of the extremities), lack of bladder and bowel control, and numbness. APS is a core clinical entity of NMOSD and adds the following symptoms to the clinical presentation: intractable nausea, vomiting, and hiccups. It usually occurs in isolation at onset and can lead to a significant delay in diagnosis. The condition may have features similar to multiple sclerosis (MS), but the episodes are worse in NMO, and it is treated differently. It can be relapsing or monophasic. Possible complications are visual field defects and motor impairment, with potential blindness and irreversible motor deficits; in severe cases, myogenic respiratory failure ensues. The incidence of reported cases is approximately 0.3–4.4 per 100,000. Paediatric cases of NMOSD are rare but have been reported occasionally, comprising less than 5% of reported cases. Objective: The case serves to show the difficulty of the diagnostic process for a rare autoimmune disease with non-specific symptoms, which took a long interval of time to reveal the complete clinical manifestation of the aforementioned syndrome, as well as the necessity of a multidisciplinary approach in the setting of a general paediatric department in an emergency hospital. Methods: The patient's history, clinical presentation, and information from the diagnostic tools used (contrast-enhanced MRI of the central nervous system) led us to the diagnosis. This was later confirmed by positive results from the anti-aquaporin 4 (AQP4) antibody serology test. Conclusion: APS is a common symptom of NMOSD and is considered a challenge in a differential-diagnostic plan. Gaining an increased awareness of this disease/syndrome, obtaining a detailed patient history, and performing thorough physical examinations are essential if we are to reduce and avoid misdiagnosis.
Keywords: neuromyelitis, Devic's disease, hiccup, autoimmune, MRI
Procedia PDF Downloads 40
1486 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing
Authors: Maria Ahmad, Michael Page, Danë Goodsman
Abstract:
‘Educational’ debriefing, a construct distinct from clinical debriefing, is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using London's Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, the effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and, paradoxically, benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants' reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participant compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including the mismatch of participant and facilitator agendas, performance pressure, and time. Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, the display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.
Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course
Procedia PDF Downloads 218
1485 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms
Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga
Abstract:
Today's websites contain very interesting applications, but there are only a few methodologies for analysing user navigation through a website and determining whether the website is put to correct use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a lot of interesting information about the users of a system. Analysing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy due to the size and distribution of the logs and the importance of minor details in each entry. Web logs thus hold very important data about the user and the site which have not been put to good use. Retrieving interesting information from logs gives an idea of what the users need, allows grouping users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the web site grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parametric values into the next run. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module; if it outputs confidence and support of value 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
Keywords: anomaly detection, clustering, pattern recognition, web sessions
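As a concrete illustration of the clustering step, the sketch below encodes sessions from the Web Sessions and Indexed URLs files as bag-of-URL count vectors and lets DBSCAN self-evaluate over a small eps grid using the silhouette coefficient; the vectorization, sample data, and parameter grid are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: cluster web sessions with DBSCAN, choosing eps by
# silhouette score. Sessions are lists of indices into the Indexed URLs file.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

sessions = [
    [0, 1, 2, 1], [0, 1, 2], [3, 4, 5], [3, 4, 4, 5],
    [0, 2, 1], [3, 5, 4], [6, 7], [6, 7, 7],
]
N_URLS = 8  # size of the Indexed URLs file (hypothetical)

def to_vector(session):
    """Encode one web session as a count vector over the indexed URLs."""
    v = np.zeros(N_URLS)
    for idx in session:
        v[idx] += 1
    return v

X = np.array([to_vector(s) for s in sessions])

best_labels, best_score = None, -1.0
for eps in (0.5, 1.0, 1.5, 2.0):            # candidate radii to self-evaluate
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(X)
    mask = labels != -1                      # silhouette ignores noise points
    if len(set(labels[mask])) < 2:
        continue                             # silhouette needs >= 2 clusters
    score = silhouette_score(X[mask], labels[mask])
    if score > best_score:
        best_labels, best_score = labels, score

print("silhouette:", round(best_score, 2))
print("cluster labels:", best_labels)
```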
Procedia PDF Downloads 288
1484 Evaluation of Occupational Doses in Interventional Radiology
Authors: Fernando Antonio Bacchim Neto, Allan Felipe Fattori Alves, Maria Eugênia Dela Rosa, Regina Moura, Diana Rodrigues De Pina
Abstract:
Interventional radiology is the radiology modality that delivers the highest dose values to medical staff. Recent research shows that personal dosimeters may underestimate dose values in interventional physicians, especially in the extremities (hands and feet) and the eye lens. The aim of this work was to study the radiation exposure levels of medical staff in different interventional radiology procedures and to estimate the annual maximum number of procedures (AMN) that each physician could perform without exceeding the annual dose limits established by the standards. For this purpose, LiF:Mg,Ti (TLD-100) dosimeters were positioned on different body regions of the interventional physician (eye lens, thyroid, chest, gonads, hand, and foot), over the radiological protection garments such as the lead apron and thyroid shield. Attenuation values for the lead protection garments were based on international guidelines: 90% attenuation was assumed for the lead vests and 60% for the protective glasses. Twenty-five procedures were evaluated: 10 diagnostic, 10 angioplasty, and 5 aneurysm treatments. The AMN of diagnostic procedures was 641 for the primary interventional radiologist and 930 for the assisting interventional radiologist. For the angioplasty procedures, the AMN was 445 for the primary interventional radiologist and 1202 for the assisting interventional radiologist. For the aneurysm treatment procedures, the AMN was 113 for the primary interventional radiologist and 215 for the assisting interventional radiologist. All AMN values were limited by the eye lens doses, already considering the use of protective glasses. In all categories evaluated, the highest dose values were found in the gonads and the lower regions of the professionals, both for the primary interventionist and for the assistant, but the eye lens dose limits are lower than those for these regions. Additional protection, such as mobile barriers positioned between the interventionist and the patient, can decrease the exposure of the eye lens, providing greater protection for the medical staff. Alternating the professionals who perform each type of procedure can reduce the dose values they receive over a period. The analysis of dose profiles proposed in this work showed that personal dosimeters positioned on the chest may underestimate dose values in other body parts of the interventional physician, especially the extremities and eye lens. As each body region of the interventionist is subject to different levels of exposure, the dose distribution in each region provides a better basis for deciding what actions are necessary to ensure the radiological protection of the medical staff.
Keywords: interventional radiology, radiation protection, occupationally exposed individual, hemodynamic
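The AMN arithmetic can be illustrated with a short calculation: AMN is the annual dose limit divided by the dose received per procedure, taken over the most restrictive body region. The 20 mSv/year eye-lens and 500 mSv/year extremity limits are the ICRP figures, and the 60% glasses attenuation follows the abstract; the per-procedure doses below are hypothetical, chosen so the eye-lens result reproduces the diagnostic AMN of 641 reported above.

```python
# Back-of-the-envelope sketch of the AMN logic described above.
ANNUAL_LIMIT_MSV = {"eye_lens": 20.0, "extremities": 500.0}  # ICRP limits
GLASSES_ATTENUATION = 0.60      # 60% attenuation assumed for the glasses

# Hypothetical unshielded dose per diagnostic procedure (mSv); the eye-lens
# value is chosen so the result reproduces the AMN of 641 reported above.
dose_per_procedure = {"eye_lens": 0.078, "extremities": 0.35}

def amn(region, dose):
    if region == "eye_lens":
        dose *= 1.0 - GLASSES_ATTENUATION   # dose actually reaching the lens
    return int(ANNUAL_LIMIT_MSV[region] // dose)

per_region = {r: amn(r, d) for r, d in dose_per_procedure.items()}
print(per_region)                           # the binding region sets the AMN
print("AMN =", min(per_region.values()))    # -> 641, limited by the eye lens
```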
Procedia PDF Downloads 394
1483 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience
Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi
Abstract:
Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on the movement of reservoir fluids, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay, and gaining an improved understanding of the selected reservoirs to increase the company's reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used in recognising flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed so as to capture the heterogeneities and the various compartments in the field, to aid the proper simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories that adequately target undeveloped oil. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties we are dealing with. The total original oil in-place volume for the four reservoirs studied is 157 MMstb. The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasts on the four reservoirs gave undeveloped reserves of about 3.82 MMstb from two identified oil restoration activities: side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results are very confirmatory and satisfying.
Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit
Procedia PDF Downloads 130
1482 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method
Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare
Abstract:
The discrete element method (DEM) is a promising approach to modelling the microscopic behaviour of granular materials; the quality of the simulations, however, depends on the model parameters utilised. The present study focuses on the calibration and validation of the discrete element parameters for Cuxhaven sand, based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as the rolling resistance, inter-particle friction coefficient, confining pressure, and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effects of the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient, and normal-to-shear stiffness ratio were examined. The parameters are calibrated such that the simulations reproduce macro-mechanical characteristics like the dilation angle, peak stress, and stiffness. The calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to forecast micromechanical behaviour, including the evolution of contact force chains, buckling of columns of particles, non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers. Buckling of columns is not observed, due to the small rolling resistance coefficient adopted for the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behaviours are well described using the calibrated and validated material parameters.
Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test
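A minimal sketch of the calibration loop is given below: candidate parameter sets are swept and the set whose simulated triaxial response best matches the experimental targets is kept. The target values, parameter grid, and the toy surrogate standing in for the DEM engine are all hypothetical; a real study would call the actual DEM code here.

```python
# Sketch of grid-search calibration against macro-mechanical targets.
from itertools import product

# Experimental macro-mechanical targets from the triaxial test (hypothetical)
targets = {"dilation_angle": 12.0, "peak_stress": 410.0, "stiffness": 55.0}

def run_triaxial(friction, rolling_res, eff_modulus):
    """Toy analytic surrogate standing in for a full DEM triaxial simulation."""
    return {
        "dilation_angle": 20.0 * friction + 15.0 * rolling_res,
        "peak_stress": 500.0 * friction + 6e-7 * eff_modulus,
        "stiffness": 2.5e-7 * eff_modulus,
    }

def error(sim):
    # Summed relative error over the three calibration characteristics
    return sum(abs(sim[k] - v) / v for k, v in targets.items())

grid = product((0.3, 0.5, 0.7),     # inter-particle friction coefficient
               (0.05, 0.1, 0.2),    # rolling resistance coefficient
               (1e8, 2e8, 4e8))     # effective modulus [Pa]

best_params, best_err = None, float("inf")
for params in grid:
    err = error(run_triaxial(*params))
    if err < best_err:
        best_params, best_err = params, err

print("calibrated parameters:", best_params, "error:", round(best_err, 3))
```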
Procedia PDF Downloads 121
1481 A Comparative Analysis of Liberation and Contemplation in Sankara and Aquinas
Authors: Zeite Shumneiyang Koireng
Abstract:
Liberation is the act of liberating or the state of being liberated. Indian philosophy, in general, understands liberation as moksa, which is etymologically derived from the Sanskrit root muc+ktin, meaning to loose, set free, let go, discharge, release, liberate, deliver, etc. According to the Indian schools of thought, moksa is the highest value, on realizing which nothing remains to be realized. It is the cessation of birth and death, of all kinds of pain, and at the same time it is the realization of one's own self. Sankara's Advaita philosophy is based on the following propositions: Brahman is the only Reality; the world has apparent reality; and the soul is not different from Brahman. According to Sankara, Brahman is the basis on which the world-form appears; it is the sustaining ground of all the various modifications. The highest self, the self of all, reveals himself by dividing himself, as it were, into the form of various objects, in multiple ways. The whole world is the manifestation of the Supreme Being. Brahman modifying itself into the Atman, or internal self of all things, is the world. Since Brahman is the upadhana karana of the world, the sruti speaks of the world as the modification of Brahman into the Atman of the effect. Contemplation as the fulfillment of man finds a radical foundation in Aquinas' teaching concerning the natural end or, as he also referred to it, natural desire. The third book of the Summa Contra Gentiles begins the study of happiness with a consideration of natural desire. According to him, all creatures, even those devoid of understanding, are ordered to God as an ultimate end. Intrinsic to every nature is a tendency or inclination, originating in the natural form, toward the end for which the possessor of the nature exists. It is through the study of the nature and finality of inclination that Aquinas establishes, by an argument of induction, man's Contemplation of God as the fulfillment of his nature. The present paper attempts a critical approach to two important, seminal, and original strands of thought, representing the Indian and Western traditions, each of which left its mark on the thinking of its time. Both of these, the Advaitic concept of Liberation in the Indian tradition and the concept of Contemplation in Thomas Aquinas' Summa Contra Gentiles, directly confront the question of the ultimate meaning of human existence. According to Sankara, it is knowledge and knowledge alone which is the means to moksa, and the highest knowledge is moksa itself. Liberation in Sankara's Vedanta is attained as a process of purification of the self, which gradually and increasingly turns into purer and purer intentional construction. For Aquinas, man's inner natural tendency is towards knowledge: the human subject is driven to know more and more about reality, and in particular about the highest reality. Contemplation of this highest reality is fulfillment in the philosophy of Aquinas; indeed, Contemplation is the perfect activity in man's present state of existence.
Keywords: liberation, Brahman, contemplation, fulfillment
Procedia PDF Downloads 195
1480 Combining the Production of Radiopharmaceuticals with the Department of Radionuclide Diagnostics
Authors: Umedov Mekhroz, Griaznova Svetlana
Abstract:
In connection with the growth of oncological diseases, the design of centers for diagnostics and the production of radiopharmaceuticals is a most relevant area for healthcare facilities. The design of new nuclear medicine centers should be carried out with a view to solving the following tasks: the availability of medical care; functionality; environmental friendliness; sustainable development; improving the safety of drugs whose use requires special care; reducing the rate of environmental pollution; ensuring comfortable conditions for the internal microclimate; and adaptability. The purpose of this article is to substantiate architectural and planning solutions, formulate recommendations and principles for the design of nuclear medicine centers, and determine the connections between the production and medical functions of a building. The advantages of combining the production of radiopharmaceuticals with the department of medical care are that less radiation activity is accumulated, the cost of the final product is lower, and there is no need to hire a transport company with a special license for transportation. A medical imaging department is a structural unit of a medical institution in which diagnostic procedures are carried out in order to gain a picture of the internal structure of various organs of the body for clinical analysis. Depending on the needs of a particular institution, the department may include various rooms that provide medical imaging using radiography, ultrasound diagnostics, and the phenomenon of nuclear magnetic resonance. A radiopharmaceutical production facility is intended for the production of a pharmaceutical substance containing a radionuclide, to be introduced into the human body or a laboratory animal for the purpose of diagnosis, evaluation of the effectiveness of treatment, or biomedical research. The research methodology includes: the study and generalization of international experience from scientific research, literature, standards, teaching aids, and design materials on the topic of the research; an integrated approach to the study of existing international experience of PET/CT scan centers and the production of radiopharmaceuticals; the elaboration of graphical analyses and diagrams based on a systematic analysis of the processed information; and the identification of methods and principles of functional zoning of nuclear medicine centers. The result of the research is the identification of design principles for nuclear medicine centers that combine the production of radiopharmaceuticals with a medical imaging department. This research will be applied to the design and construction of healthcare facilities in the field of nuclear medicine.
Keywords: architectural planning solutions, functional zoning, nuclear medicine, PET/CT scan, production of radiopharmaceuticals, radiotherapy
Procedia PDF Downloads 89
1479 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images to be displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already limited by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and has become the bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one is the bottleneck, if one exists. The Window Manager draws the final screen using the processed results delivered from the GPU; thus, the Window Manager is on the critical path that determines the quality of user experience, and it is executed purely by the CPU. The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently the proposed scheme decreases the performance level of the GPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the CPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur, and that may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and it worsened FPS by only 1.07% on average.
Keywords: interactive applications, power management, QoS, Web apps, WebGL
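A minimal sketch of the governor logic follows. The 90%/60% Window Manager utilization thresholds come from the abstract (with the CPU/GPU actions stated consistently with its GPU-saturation example); the frequency tables, smoothing weight, and sample stream are hypothetical.

```python
# Sketch of a bottleneck-aware governor driven by Window Manager utilization.
CPU_FREQS = [600, 900, 1200, 1500, 1800]   # MHz, hypothetical P-states
GPU_FREQS = [200, 300, 420, 533]

class BottleneckGovernor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha        # weight for the moving average
        self.wm_util = 0.0        # smoothed Window Manager CPU utilization
        self.cpu_idx = len(CPU_FREQS) - 1
        self.gpu_idx = len(GPU_FREQS) - 1

    def sample(self, wm_util_now):
        # Weighted average damps fluctuation in the raw utilization signal
        self.wm_util = self.alpha * wm_util_now + (1 - self.alpha) * self.wm_util
        if self.wm_util > 0.9:
            # CPU (Window Manager) is the bottleneck: slow the GPU one step
            self.gpu_idx = max(0, self.gpu_idx - 1)
        elif self.wm_util < 0.6:
            # GPU is the bottleneck: lower the CPU frequency gradually
            self.cpu_idx = max(0, self.cpu_idx - 1)
        else:
            # Neither saturated: restore headroom to avoid hurting FPS
            self.cpu_idx = min(len(CPU_FREQS) - 1, self.cpu_idx + 1)
            self.gpu_idx = min(len(GPU_FREQS) - 1, self.gpu_idx + 1)
        return CPU_FREQS[self.cpu_idx], GPU_FREQS[self.gpu_idx]

gov = BottleneckGovernor()
for u in (0.95, 0.97, 0.93, 0.55, 0.50, 0.75):   # fake utilization samples
    print(gov.sample(u))
```

Stepping one frequency level per sampling period, rather than jumping directly, mirrors the gradual lowering described in the abstract and limits the risk of an excessive performance drop on the non-critical unit.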
Procedia PDF Downloads 193
1478 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem, where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker, the inventory, should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known to the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal; thus, randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; this approximation characteristic is therefore investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, the most widely used criterion describing a necessary condition for a reasonable solution.
Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
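To illustrate the serial-dictatorship idea, here is a toy sketch in which projects, taken in a fixed priority order, each claim the earliest-arriving material units still available (their best response for minimizing their own tardiness). A single material type, unit demands, arrival times, and due dates are hypothetical simplifications of the paper's model, not its actual algorithm.

```python
# Toy Serial Dictatorship Mechanism for material allocation.
import heapq

arrivals = [1, 2, 2, 5, 8, 9]           # arrival time of each material unit
projects = {                             # demand in units, reported due date
    "A": {"demand": 2, "due": 4},
    "B": {"demand": 1, "due": 2},
    "C": {"demand": 3, "due": 7},
}
priority = ["B", "A", "C"]               # the dictatorship ordering

pool = list(arrivals)
heapq.heapify(pool)

tardiness = {}
for name in priority:
    p = projects[name]
    # The current dictator takes the earliest units still in the pool;
    # its job can only finish once the last of those units has arrived.
    received = [heapq.heappop(pool) for _ in range(p["demand"])]
    tardiness[name] = max(0, max(received) - p["due"])

print(tardiness, "-> max tardiness:", max(tardiness.values()))
```

Because each project's allocation depends only on its position in the ordering and the remaining pool, misreporting its due date cannot earn it earlier units, which is the intuition behind the mechanism's truthfulness.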
Procedia PDF Downloads 333
1477 The Effect of Artificial Intelligence on Banking Development and Progress
Authors: Mina Malak Hanna Saad
Abstract:
New strategies for supplying banking services to the customer have been introduced, including online banking. Banks have begun to consider electronic banking (e-banking) as a way to replace some conventional branch functions, using the Internet as a new distribution channel. Some customers have at least one account at more than one bank and access these accounts through online banking. To check their current net worth, such customers need to log into each of their accounts, get the detailed information, and work towards consolidation. Not only is this time-consuming, it is also a repetitive activity with a certain frequency. To solve this problem, the idea of account aggregation was introduced. Account aggregation in e-banking, as one form of digital banking, appears to build a stronger relationship with customers. An account aggregation service usually refers to a service that permits customers to manage their bank accounts held at different institutions through a common online banking platform that places a high priority on security and data protection. This article provides an overview of the account aggregation approach in e-banking as a distinct service in the e-banking field.
Keywords: compatibility, complexity, mobile banking, observation, risk, banking technology, Internet banks, modernization of banks, account aggregation, security, e-banking, enterprise development
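As a rough illustration of the consolidation step such an aggregation service performs, the sketch below pulls balances from stubbed institution calls and reports a single net-worth figure; the data model and fetch function are hypothetical, and a real service would authenticate against each bank's API with strict security and privacy controls.

```python
# Illustrative sketch of net-worth consolidation across institutions.
from dataclasses import dataclass

@dataclass
class Account:
    institution: str
    number: str
    balance: float          # positive = asset, negative = liability

def fetch_accounts(customer_id: str) -> list:
    """Stub standing in for authenticated calls to each bank's API."""
    return [
        Account("Bank A", "****1234", 2500.00),
        Account("Bank B", "****9876", 410.50),
        Account("Bank B", "****5555", -1200.00),   # credit card balance
    ]

accounts = fetch_accounts("customer-42")
for a in accounts:
    print(f"{a.institution} {a.number}: {a.balance:>10.2f}")
print(f"consolidated net worth: {sum(a.balance for a in accounts):.2f}")
```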
Procedia PDF Downloads 41
1476 Nutrition Transition in Bangladesh: Multisectoral Responsiveness of Health Systems and Innovative Measures to Mobilize Resources Are Required for Preventing This Epidemic in Making
Authors: Shusmita Khan, Shams El Arifeen, Kanta Jamil
Abstract:
Background: Nutrition transition in Bangladesh has progressed across various relevant socio-demographic contextual issues. For a developing country like Bangladesh, it is believed that overnutrition is less prevalent than undernutrition. However, recent evidence suggests that a rapid shift is taking place in which overweight is overtaking underweight. With this rapid increase, it will be challenging for Bangladesh to achieve the global agenda on halting overweight and obesity. Methods: A secondary analysis was performed on six successive national demographic and health surveys to obtain the trends in undernutrition and overnutrition among women of reproductive age. In addition, relevant national policy papers were reviewed to determine the country's readiness for a whole-of-systems approach to tackling this epidemic. Results: Over the last decade, the proportion of women with a low body mass index (BMI < 18.5), an indicator of undernutrition, has decreased markedly from 34% to 19%. However, the proportion of overweight women (BMI ≥ 25) increased alarmingly from 9% to 24% over the same period. If the WHO cutoff for public health action (BMI ≥ 23) is used, the proportion of overweight women has increased from 17% in 2004 to 39% in 2014. The increasing rate of obesity among women is a major challenge for obstetric practice, for both women and fetuses. In the long term, overweight women are also at risk of future obesity, diabetes, hyperlipidemia, hypertension, and heart disease. These diseases have a serious impact on health care systems. The costs associated with overweight and obesity involve direct and indirect costs: direct costs include preventive, diagnostic, and treatment services related to obesity, while indirect costs relate to morbidity and mortality, including lost productivity. The Bangladesh Health Facility Survey shows that the country is not prepared to provide nutrition-related health services in terms of prevention, screening, management, and treatment. Therefore, if this nutrition transition is not addressed properly, Bangladesh will not be able to achieve the target of the WHO's NCD global monitoring framework. Conclusion: Addressing this nutrition transition requires contending with ‘malnutrition in all its forms’ and tackling it with integrated approaches. Whole-of-systems action is required at all levels, starting from improving multi-sectoral coordination to scaling up nutrition-specific and nutrition-sensitive mainstreamed interventions, keeping the health system in mind.
Keywords: nutrition transition, Bangladesh, health system, undernutrition, overnutrition, obesity
Procedia PDF Downloads 289
1475 A Theoretical Framework of Patient Autonomy in a High-Tech Care Context
Authors: Catharina Lindberg, Cecilia Fagerstrom, Ania Willman
Abstract:
Patients in high-tech care environments are usually dependent on both formal/informal caregivers and technology, highlighting their vulnerability and challenging their autonomy. Autonomy presumes that a person has education, experience, self-discipline, and decision-making capacity. Reference to autonomy in relation to patients in high-tech care environments could therefore be considered paradoxical, as in most cases these persons have impaired physical and/or metacognitive capacity. Therefore, to understand the prerequisites for patients to experience autonomy in high-tech care environments and to support them, there is a need to enhance knowledge and understanding of the concept of patient autonomy in this care context. The development of concepts and theories in a practice discipline such as nursing helps to improve both nursing care and nursing education. Theoretical development is important when clarifying a discipline; hence, a theoretical framework could be of use to nurses in high-tech care environments to support and defend patients' autonomy. A meta-synthesis was performed with the intention of being interpretative rather than aggregative in nature. An amalgamation was made of the results of three previous studies, carried out by members of the same research group, focusing on the phenomenon of patient autonomy from a patient perspective within a caring context. Three basic approaches to theory development (derivation, synthesis, and analysis) provided an operational structure that permitted the researchers to move back and forth between these approaches while developing a theoretical framework. The results of the synthesis delineated that patient autonomy in a high-tech care context means being in control through trust, co-determination, and transition in everyday life. The theoretical framework contains several components creating the prerequisites for patient autonomy. Assumptions and propositional statements that guide theory development were also outlined, as were guiding principles for use in day-to-day nursing care. Four strategies used by patients to retain or obtain autonomy in high-tech care environments were revealed: the strategy of control, the strategy of partnership, the strategy of trust, and the strategy of transition. This study suggests an extended knowledge base founded on theoretical reasoning about patient autonomy, providing an understanding of the strategies used by patients to achieve autonomy in the role of patient in high-tech care environments. When possessing knowledge about the patient perspective on autonomy, the nurse/carer can avoid adopting a paternalistic or maternalistic approach. Instead, the patient can be considered a partner in care, allowing care to be provided that supports him/her in remaining/becoming an autonomous person in the role of patient.
Keywords: autonomy, caring, concept development, high-tech care, theory development
Procedia PDF Downloads 208
1474 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data
Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding
Abstract:
The primary objective of this paper is to present a simplified approach to developing structural deterioration models using traffic speed deflectometer data for flexible pavements. Maintaining assets only to meet functional performance is not economical or sustainable in the long term; it ends up requiring much larger investments from road agencies and extra costs for road users. Performance models have to include both structural and functional predictive capabilities in order to assess needs and their time frame. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network, and it also has a major influence on the valuation of road pavement. Therefore, the structural deterioration model is a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement while moving at traffic speeds. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this model utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength, and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age, and equivalent standard axles (ESA). This study developed a simple structural deterioration model which will enable available TSD structural data to be incorporated into a pavement management system for developing network-level pavement investment strategies. The available funding can therefore be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in the use of structural data in network-level investment analysis and provide a simple methodology for using structural data effectively in the investment decision-making process, helping road agencies to manage ageing road assets.
Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)
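A minimal sketch of the regression step is shown below, fitting TSD maximum deflection (D0) against rutting, pavement age, seal age, and cumulative ESA. The linear functional form and all data records are illustrative assumptions; the paper's actual correlation equation and coefficients are not reproduced here.

```python
# Sketch: fit a structural deterioration model D0 = f(rutting, ages, ESA).
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: rutting [mm], pavement age [yr], seal age [yr], ESA [millions]
X = np.array([
    [3.1, 5, 2, 0.8],
    [4.6, 9, 4, 1.9],
    [6.2, 14, 6, 3.1],
    [7.8, 18, 3, 4.4],
    [9.5, 23, 8, 5.9],
])
d0 = np.array([310, 365, 420, 455, 520])   # TSD maximum deflection [microns]

model = LinearRegression().fit(X, d0)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Predict D0 for a 12-year-old section to flag rehabilitation needs early
print("predicted D0:", model.predict([[5.0, 12, 5, 2.5]])[0])
```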
Procedia PDF Downloads 151
1473 The Ongoing Impact of Secondary Stressors on Businesses in Northern Ireland Affected by Flood Events
Authors: Jill Stephenson, Marie Vaganay, Robert Cameron, Caoimhe McGurk, Neil Hewitt
Abstract:
Purpose: The key aim of the research was to identify the secondary stressors experienced by businesses affected by single or repeated flooding, to determine to what extent businesses were affected by these stressors, and to establish any resulting impact on health. Additionally, the research aimed to establish the likelihood of businesses being re-exposed to the secondary stressors by assessing awareness of flood risk, implementation of property protection measures, and the level of community resilience. Design/methodology/approach: The chosen research method involved the distribution of a questionnaire survey to businesses affected by either single or repeated flood events. The questionnaire included the Impact of Event Scale (a 15-item self-report measure which assesses subjective distress caused by traumatic events). Findings: 55 completed questionnaires were returned by flood-impacted businesses; 89% of the businesses had sustained internal flooding, while 11% had experienced external flooding. The results established that the key secondary stressors experienced by businesses, in order of priority, were: flood damage, fear of recurring flooding, prevention of access to the premises/closure, loss of income, repair works, length of closure, and insurance issues. There was a lack of preparedness for potential future floods and consequent vulnerability to the emergence of secondary stressors among flood-affected businesses, as flood resistance and flood resilience measures had been implemented by only 11% and 13%, respectively. In relation to the psychological repercussions, the Impact of Event scores suggested potential prevalence of post-traumatic stress disorder (PTSD) among 8 of the 55 respondents (15%). Originality/value: The results improve understanding of the enduring repercussions of flood events on businesses, indicating that residents are not the only ones susceptible to the detrimental health impacts of flood events, and that single flood events may be just as likely as recurring flooding to contribute to ongoing stress. A lack of financial resources is a possible explanation for the failure to implement property protection measures among businesses, despite 49% experiencing flooding on multiple occasions. It is therefore recommended that policymakers consider potential sources of financial support or grants towards flood defences for flood-impacted businesses. Any form of assistance should be made available to businesses at the earliest opportunity, as there was no significant association between the time of the last flood event and the likelihood of experiencing PTSD symptoms.
Keywords: flood event, flood resilience, flood resistance, PTSD, secondary stressors
Procedia PDF Downloads 432
1472 On Early Verb Acquisition in Chinese-Speaking Children
Authors: Yating Mu
Abstract:
Young children acquire their native language with amazing rapidity. After noticing this interesting phenomenon, many linguists as well as psychologists devoted themselves to exploring the best explanations, and research on first language acquisition (FLA) emerged. Early lexical development is an important branch of children's FLA. The verb, the most significant class of the lexicon and the most grammatically complex syntactic category or word type, is not only the core of exploring the syntactic structures of language but also plays a key role in analyzing semantic features. Obviously, early verb development must have a great impact on children's early lexical acquisition. Most scholars conclude that verbs, in general, are very difficult to learn because the problem in verb learning may be more about mapping a specific verb onto an action or event than about learning the underlying relational concepts that the verb or relational term encodes. However, previous research on early verb development has mainly focused on the debate about whether there is a noun bias or a verb bias in children's early productive vocabulary. There is little research on the general characteristics of children's early verbs concerning both semantic and syntactic aspects, not to mention a general survey of Chinese-speaking children's verb acquisition. Therefore, the author attempts to examine the general conditions and characteristics of Chinese-speaking children's early productive verbs, based on data from a longitudinal study of three Chinese-speaking children. In order to present an overall picture of Chinese verb development, both semantic and syntactic aspects are the focus of the present study. For the semantic analysis, a classification method is adopted first. The verb category is a sophisticated class in Mandarin, so it is quite necessary to divide it into small sub-types, making the research much easier. By making a reasonable classification of eight verb classes on the basis of semantic features, the research aims at finding out whether there exist any universal rules in Chinese-speaking children's verb development. With regard to the syntactic aspect of the verb category, a debate between the nativist account and the usage-based approach has lasted for quite a long time. By analyzing the longitudinal Mandarin data, the author attempts to find out whether usage-based theory can fully explain the characteristics of Chinese verb development. To sum up, this thesis attempts to apply the descriptive research method to investigate the acquisition and usage of Chinese-speaking children's early verbs, with the purpose of providing a new perspective on investigating the semantic and syntactic features of early verb acquisition.
Keywords: Chinese-speaking children, early verb acquisition, verb classes, verb grammatical structures
Procedia PDF Downloads 367
1471 Analyzing Competitive Advantage of Internet of Things and Data Analytics in Smart City Context
Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue
Abstract:
The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people's behaviours and needs but also accelerated digital transformation (DT). The DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve connections between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today's marketplace. As there are many IoT solutions available today, the amount of data is tremendous; the challenge for companies is to understand which solutions to focus on, how to prioritise, and which data differentiates them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company's competitive advantage through smart city solutions. The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges those factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create a competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.
Keywords: internet of things, data analytics, smart cities, competitive advantage
Procedia PDF Downloads 95
1470 Transport Mode Selection under Lead Time Variability and Emissions Constraint
Authors: Chiranjit Das, Sanjay Jharkharia
Abstract:
This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics costs and emissions reduction must be balanced against each other. Another important aspect of transportation decisions is lead-time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematically based analytical model for deciding transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. In order to account for lead time variability in the model, two identically normally distributed random variables are incorporated in this study: unit lead time variability and lead time demand variability. We therefore address the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact the total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stockouts due to lead time variability. A set of modes is available for transport at each node; in this paper, we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static in this paper. Each mode has a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce the penalty cost. We provide a numerical analysis in order to study the effectiveness of the mathematical model. We found that the chance of a stockout during lead time is higher under greater variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negligible, meaning the chance of a stockout is essentially zero, but it carries higher holding and emissions costs. Therefore, the air transport mode is only selected when there is an emergency order requiring the penalty cost to be reduced; otherwise, rail and road transport are the most preferred modes of transportation. This paper thus contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, full truck load strategy, and demand consolidation strategy.
Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection
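To make the cost comparison concrete, here is a toy sketch of a per-mode total cost with normally distributed lead-time demand, combining transport, emissions, holding, and expected stockout penalty terms. Every parameter value, and the simple cost structure itself, is a hypothetical stand-in for the paper's full model.

```python
# Toy per-mode total cost under lead-time variability.
from math import sqrt
from scipy.stats import norm

D = 1000            # annual demand [units]
d_mu = D / 365      # mean daily demand
SIGMA_D = 1.0       # daily demand standard deviation [units], assumed
h = 2.0             # holding cost [$ per unit-year]
b = 25.0            # stockout penalty [$ per unit short]
ce = 0.05           # carbon price [$ per kg CO2]
ORDERS_PER_YEAR = 10

#         $/unit transport, kg CO2/unit, lead time mean [d], lead time sd [d]
modes = {"air":   (9.0, 12.0,  2, 0.5),
         "road":  (4.0,  3.5,  6, 1.5),
         "rail":  (2.5,  1.8, 10, 3.0),
         "water": (1.5,  0.9, 25, 7.0)}

def total_cost(ct, e, L, sL, z=1.65):   # z: assumed service factor
    # Standard deviation of demand over the (variable) lead time
    sigma_ltd = sqrt(L * SIGMA_D**2 + d_mu**2 * sL**2)
    G = norm.pdf(z) - z * (1.0 - norm.cdf(z))    # standard normal loss fn
    holding = h * (d_mu * L + z * sigma_ltd)     # pipeline + safety stock
    shortage = b * sigma_ltd * G * ORDERS_PER_YEAR
    return D * (ct + ce * e) + holding + shortage

for name, params in modes.items():
    print(f"{name:6s} total annual cost: {total_cost(*params):10.2f}")
```

The sigma_ltd term shows how lead time variability (sL) inflates the lead-time demand uncertainty, which in turn drives up both the safety stock holding cost and the expected stockout penalty for the slower, more variable modes.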
Procedia PDF Downloads 436
1469 Stimulus-Response and the Innateness Hypothesis: Childhood Language Acquisition of “Genie”
Authors: Caroline Kim
Abstract:
Scholars have long disputed the relationship between the origins of language and human behavior. Historically, the behaviorist psychologist B. F. Skinner argued that language is one instance of the general stimulus-response phenomenon that characterizes the essence of human behavior. Another, more recent approach argues, by contrast, that language is an innate cognitive faculty and does not arise from behavior, which might develop and reinforce linguistic facility but is not its source. Pinker, among others, proposes that linguistic defects arise from damage to the brain, both congenital and acquired in life. Much of his argument is based on case studies in which damage to the Broca’s and Wernicke’s areas of the brain results in loss of the ability to produce coherent grammatical expressions when speaking or writing; though affected speakers often utter quite fluent streams of sentences, the words articulated lack discernible semantic content. Pinker concludes on this basis that language is an innate component of specific, classically language-correlated regions of the human brain. Taking a notorious 1970s case of linguistic maladaptation, this paper queries the dominant materialist paradigm of language-correlated regions. Susan “Genie” Wiley was physically isolated from language interaction in her home and beaten by her father when she attempted to make any sort of sound. Though she suffered no measurable damage to the brain, Wiley was never able to develop the level of linguistic facility normally achieved in adulthood. Having received negative reinforcement of language acquisition from her father and lacking the usual language acquisition period, Wiley was able to develop language only to a quite limited level in later life. From a contemporary behaviorist perspective, this case confirms the possibility of language deficiency without brain pathology. Wiley’s potential language-determining areas in the brain were intact, and she was exposed to language later in her life, but she was unable to achieve the normal level of communication skills, deterring socialization. This phenomenon, and others like it in the limited case literature on linguistic maladaptation, poses serious clinical, scientific, and indeed philosophical difficulties for both of the major competing theories of language acquisition: innateness and linguistic stimulus-response. The implications of such cases for future research in language acquisition are explored, with a particular emphasis on the interaction of innate capacity and stimulus-based development in early childhood.
Keywords: behaviorism, innateness hypothesis, language, Susan "Genie" Wiley
Procedia PDF Downloads 294
1468 A Deep Dive into the Multi-Pronged Nature of Student Engagement
Authors: Rosaline Govender, Shubnam Rambharos
Abstract:
Universities are, to a certain extent, the source of under-preparedness ideologically, structurally, and pedagogically, particularly since organizational cultures often alienate students by failing to enable epistemological access. This is evident in the unsustainably low graduation rates that characterize South African higher education, which indicate that under 30% of students graduate in minimum time, under two-thirds graduate within 6 years, and one-third have not graduated after 10 years. Although the statistics for the Faculty of Accounting and Informatics at the Durban University of Technology (DUT) in South Africa have improved significantly from 2019 to 2021, the graduation (32%), throughput (50%), and dropout (16%) rates are still a matter for concern, as the graduation rates, in particular, are quite similar to the national statistics. For our students to succeed, higher education should take a multi-pronged approach to ensuring student success, and student engagement is one of the ways to support our students. Student engagement depends not only on students’ teaching and learning experiences but, more importantly, on their social and academic integration, their sense of belonging, and their emotional connections in the institution. Such experiences need to challenge students academically, engage their intellect, grow their communication skills, build self-discipline, and promote confidence. The aim of this mixed-methods study is to explore the multi-pronged nature of student success within the Faculty of Accounting and Informatics at DUT, focusing on the enabling and constraining factors of student success. The sources of data were the Mid-year Student Experience Survey (N=60), the Hambisa Student Survey (N=85), and semi-structured focus group interviews with first-, second-, and third-year students of the Faculty of Accounting and Informatics Hambisa program. The Hambisa (“Moving forward”) focus area is part of the Siyaphumelela 2.0 project at DUT and seeks to understand the multiple challenges impacting student success that create a large “middle” cohort of students stuck in transition within academic programs. Using the lens of the sociocultural influences on student engagement framework, we conducted a thematic analysis of the two surveys and the focus group interviews. Preliminary findings indicate that living conditions, choice of program, access to resources, motivation, institutional support, infrastructure, and pedagogical practices impact student engagement and, thus, student success. It is envisaged that the findings from this project will assist the university in being better prepared to enable student success.
Keywords: social and academic integration, socio-cultural influences, student engagement, student success
Procedia PDF Downloads 75
1467 Impact of Research-Informed Teaching and Case-Based Teaching on Memory Retention and Recall in University Students
Authors: Durvi Yogesh Vagani
Abstract:
This research paper explores the effectiveness of research-informed teaching and case-based teaching in enhancing the retention and recall of memory during discussions among university students. Additionally, it investigates the impact of using Artificial Intelligence (AI) tools on the quality of research conducted by students and its correlation with better recollection. The study hypothesizes that case-based teaching will lead to greater recall and storage of information, and that the use of AI tools, such as ChatGPT and Bard, will lead to better retention and recall of information. The research gap in the use of AI in educational settings, particularly with actual participants, is addressed by leveraging a multi-method approach. Before commencing the study, participants’ attention levels and IQ were assessed using the Digit Span Test and the Wechsler Adult Intelligence Scale, respectively, to ensure comparability among participants. Subsequently, participants were divided into four conditions, with each group receiving identical information presented in different formats based on their assigned condition. Following this, participants engaged in a group discussion on the given topic. Their responses were then evaluated against a checklist. Finally, participants completed a brief test to measure their recall ability after the discussion. Preliminary findings suggest that students who utilize AI tools for learning demonstrate an improved grasp of information and are more likely to integrate relevant information into discussions rather than providing extraneous details. Furthermore, case-based teaching fosters greater attention and recall during discussions, while research-informed teaching leads to greater knowledge for application. By addressing the research gap in AI application in education, this study contributes to a deeper understanding of effective teaching methodologies and the role of technology in student learning outcomes. The implication of the present research is to tailor teaching methods in psychology to the subject matter: case-based teaching suits practical subjects by facilitating application, while research-informed teaching is beneficial for theory-heavy topics. Integrating AI into education, particularly in combination with research-informed teaching, may optimize instructional strategies, deepen learning experiences, and offer detailed information tailored to students’ needs.
Keywords: artificial intelligence, attention, case-based teaching, memory recall, memory retention, research-informed teaching
Procedia PDF Downloads 33
1466 Design, Prototyping and Testing of Manually Operated Teff Seed Cum Fertilizer Drill for Ethiopian Farmers
Authors: Fentahun Ayu Muche, Yonas Mitiku Degu
Abstract:
Ethiopian farmers traditionally sow Teff seeds using the broadcasting method. However, row sowing offers higher grain yields compared to broadcasting. Despite being introduced to row sowing techniques, many farmers prefer broadcasting due to its simplicity; without proper technology, row sowing is time-consuming, labor-intensive, and physically demanding. The use of suitable row Teff seeder technologies can save time, reduce labor requirements, facilitate weed control, and increase productivity. Unfortunately, previously promoted technologies have not gained significant acceptance due to various limitations. The Agricultural Bureau of the Amhara Region, Ethiopia, has confirmed that row sowing technology significantly improves productivity, yielding results up to twice as high as traditional sowing methods. This innovative approach offers a feasible solution for enhancing Teff production in Ethiopia, contributing to greater precision and efficiency in farming practices. This research aims to design, fabricate, and test a Teff seed-cum-fertilizer drill while addressing the shortcomings of earlier technologies. During the conceptual design phase, eight alternatives were proposed, with the rail-type row Teff seed-cum-fertilizer drill selected for its technical and economic feasibility. The chosen design features five rows with adjustable spacing between 15 cm and 25 cm. It also includes an interchangeable metering mechanism for seeding rates of 5 kg/hectare and 10 kg/hectare. A key focus was placed on the metering mechanism to eliminate power transmission via ground traction, thereby mitigating performance issues caused by wheel skidding. The new design uses pinions that roll over two parallel racks suspended by four posts to transmit motion to the metering unit. Detailed analysis of the selected concept and working mechanism was conducted, and the prototype was manufactured according to specifications from the detailed design. Laboratory and field tests of the fabricated prototype demonstrated good metering mechanism efficiency, with no significant differences between rows. However, the performance of the Teff seed-cum-fertilizer drill is highly sensitive to the seed level in the hopper. Therefore, maintaining the recommended seed level is crucial for ensuring uniform seed distribution during farm operations.
Keywords: row teff planter, disc metering, scoop metering, rack and pinion, fertilizer applicator, seed drill
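For readers who want to sanity-check the drill's two metering settings, the short calculation below converts a target seeding rate and row spacing into the seed mass each row outlet must deliver per metre of travel. It is an illustrative calibration aid under simple assumptions, not a procedure from the paper.

```python
# Illustrative calibration check for a row seeder: grams of seed per metre
# of travel per row, for a target rate (kg/ha) at a given row spacing (m).
def grams_per_metre(rate_kg_per_ha: float, row_spacing_m: float) -> float:
    # each metre of travel in one row covers (row_spacing_m x 1 m) of field
    area_m2 = row_spacing_m * 1.0
    return rate_kg_per_ha * 1000.0 * area_m2 / 10_000.0  # 1 ha = 10,000 m^2

for rate in (5, 10):                     # the drill's two metering settings, kg/ha
    for spacing in (0.15, 0.20, 0.25):   # the adjustable row spacing range, m
        print(f"{rate} kg/ha at {spacing*100:.0f} cm rows -> "
              f"{grams_per_metre(rate, spacing):.3f} g/m per row")
```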
Procedia PDF Downloads 15
1465 Quality of Life of Elderly and Factors Associated in Bharatpur Metropolitan City, Chitwan: A Mixed Method Study
Authors: Rubisha Adhikari, Rajani Shah
Abstract:
Introduction: Aging is a natural, global, and inevitable phenomenon that every person goes through and that nobody can escape. One of the emerging challenges for public health is to improve the quality of the later years of life as life expectancy continues to increase. Quality of life (QoL) has grown to be a key goal of many public health initiatives. Population aging has become a global phenomenon, with older populations growing more quickly in emerging nations than in industrialized ones, leaving minimal opportunity to manage the consequences of the demographic shift. Methods: A community-based descriptive analytical approach was used to examine the quality of life and associated factors among elderly people. A mixed-methods design was chosen for the study. For the quantitative data collection, a household survey was conducted using the WHOQOL-OLD tool. In-depth interviews were conducted among twenty participants for the qualitative data collection. Data generated through in-depth interviews were transcribed verbatim. In-depth interviews lasted about an hour and were audio recorded. The in-depth interview guide had been developed by the research team and pilot-tested before the actual interviews. Results: The results show associations between quality of life and socio-demographic variables. Among the socio-demographic variables of this study, age (χ²=14.445, p=0.001), gender (χ²=14.323, p<0.001), marital status (χ²=10.816, p=0.001), education status (χ²=23.948, p<0.001), household income (χ²=13.493, p=0.001), personal income (χ²=14.129, p=0.001), source of personal income (χ²=28.332, p<0.001), social security allowance (χ²=18.005, p<0.001), and alcohol consumption (χ²=9.397, p=0.002) are significantly associated with the quality of life of the elderly. In addition, affordability (χ²=12.088, p=0.001), physical activity (χ²=9.314, p=0.002), emotional support (χ²=9.122, p=0.003), and economic support (χ²=8.104, p=0.004) are associated with the quality of life of elderly people. Conclusion: In conclusion, this mixed-methods study provides insight into the attributes of the quality of life of elderly people in Nepal and similar settings. As the geriatric population grows rapidly, maintaining a high quality of life has become a major challenge. This study showed that determinants such as age, gender, marital status, education status, household income, personal income, source of personal income, social security allowance, alcohol consumption, economic support, emotional support, affordability, and physical activity are associated with the quality of life of the elderly.
Keywords: ageing, Chitwan, elderly, health status, quality of life
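The chi-square statistics above come from tests of association between each factor and quality of life. For readers unfamiliar with the procedure, the sketch below runs one such test on an invented 2x2 table; the counts are purely illustrative and are not the study's data.

```python
# A minimal sketch of the association test reported above: chi-square test
# between one socio-demographic factor and dichotomized quality of life.
# The 2x2 counts are invented for illustration, not from the study.
from scipy.stats import chi2_contingency

#                good QoL  poor QoL
table = [[34, 61],   # e.g. receives social security allowance (assumed rows)
         [70, 45]]   # e.g. does not

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```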
Procedia PDF Downloads 71
1464 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants
Authors: Zarina Chokparova, Ighor Uzhinsky
Abstract:
Carbon dioxide emissions resulting from the burning of fossil fuels on large scales, such as in the oil industry or power plants, lead to severe consequences, including global temperature rise, air pollution, and other adverse impacts on the environment. Besides some risky and costly approaches to mitigating the harm of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of drawback effects and their mitigation), one physically and commercially viable technology for its capture and disposal is a supersonic system for the inertial extraction of CO₂ from post-combustion streams. Since the flue gas emitted from the combustion system has a carbon dioxide concentration of only 10-15 volume percent, the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture stream through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop in temperature and pressure. The conversion of potential energy into kinetic energy thus causes desublimation of the CO₂. The solidified carbon dioxide can be sent to a separate vessel for further disposal. The major advantages of the current solution are its economic efficiency, physical stability, and the compactness of the system, as well as the fact that no additional chemical media are needed. However, several challenges remain to be addressed to optimize the system: increasing the size of the separated CO₂ particles (whose effective diameters are on the micrometer scale), reducing the amount of concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determining the thermodynamic conditions of the vapor-solid mixture, including specifying a valid and accurate equation of state, remains an essential goal. Given the high speeds and temperature changes reached during the process, the influence of the released heat should be considered, and an applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status is presented, and a program for further evaluation of this approach is proposed.
Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture
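A back-of-envelope illustration of why the nozzle causes desublimation follows from the standard isentropic expansion relations, T/T0 = (1 + (γ-1)/2 · M²)⁻¹ and the matching pressure ratio. The sketch below evaluates them for an assumed stagnation state and specific-heat ratio; these numbers are assumptions for illustration, not the authors' operating conditions.

```python
# Hedged sketch: isentropic static temperature and pressure of the flue gas
# versus Mach number. gamma, T0 and p0 are assumed values, not the paper's.
gamma = 1.33        # assumed specific-heat ratio for flue gas
T0 = 320.0          # assumed stagnation temperature, K
p0 = 101_325.0      # assumed stagnation pressure, Pa

def static_state(mach: float) -> tuple[float, float]:
    factor = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    T = T0 / factor                                  # isentropic temperature
    p = p0 / factor ** (gamma / (gamma - 1.0))       # isentropic pressure
    return T, p

for M in (1.0, 2.0, 3.0):
    T, p = static_state(M)
    print(f"M = {M:.0f}: T = {T:6.1f} K, p = {p/1000:7.2f} kPa")
# CO2 desublimates once its partial pressure exceeds the sublimation-line
# pressure at the local static temperature, which supersonic expansion
# drives the flow toward.
```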
Procedia PDF Downloads 141
1463 Green Synthesis of Nanosilver-Loaded Hydrogel Nanocomposites for Antibacterial Application
Authors: D. Berdous, H. Ferfera-Harrar
Abstract:
Superabsorbent polymers (SAPs), or hydrogels, with a three-dimensional hydrophilic network structure are high-performance water-absorbent and retention materials. The in situ synthesis of metal nanoparticles within a polymeric network, as antibacterial agents for bio-applications, is an approach that takes advantage of the free space existing within networks, which not only acts as a template for nucleation of nanoparticles but also provides long-term stability and reduces their toxicity by delaying their oxidation and release. In this work, SAP/nanosilver nanocomposites were successfully developed by a unique green process at room temperature, which involves the in situ formation of silver nanoparticles (AgNPs) within hydrogels as a template. The aim of this study is to investigate whether these AgNP-loaded hydrogels are potential candidates for antimicrobial applications. Firstly, the superabsorbents were prepared through radical copolymerization via grafting and crosslinking of acrylamide (AAm) onto a chitosan backbone (Cs), using potassium persulfate as the initiator and N,N’-methylenebisacrylamide as the crosslinker. Then, they were hydrolyzed to achieve superabsorbents with ampholytic properties and the highest swelling capacity. Lastly, the AgNPs were biosynthesized and entrapped into the hydrogels through a simple, eco-friendly, and cost-effective method using aqueous silver nitrate as the silver precursor and Curcuma longa tuber-powder extract as both reducing and stabilizing agent. The formed superabsorbent nanocomposites (Cs-g-PAAm)/AgNPs were characterized by X-ray Diffraction (XRD), UV-visible Spectroscopy, Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Inductively Coupled Plasma (ICP), and Thermogravimetric Analysis (TGA). The microscopic surface structure, analyzed by Transmission Electron Microscopy (TEM), showed spherical AgNPs with sizes in the range of 3-15 nm. The extent of nanosilver loading decreased with increasing Cs content in the network. The silver-loaded hydrogel was thermally more stable than its unloaded dry hydrogel counterpart. The swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) in deionized water were affected by both the Cs content and the entrapped AgNPs. The nanosilver-embedded hydrogels exhibited antibacterial activity against Escherichia coli and Staphylococcus aureus bacteria. These comprehensive results suggest that the elaborated AgNP-loaded nanomaterials could be used to produce valuable wound dressings.
Keywords: antibacterial activity, nanocomposites, silver nanoparticles, superabsorbent hydrogel
Procedia PDF Downloads 247
1462 Internal Family Systems Parts-Work: A Revolutionary Approach to Reducing Suicide Lethality
Authors: Bill D. Geis
Abstract:
Even with significantly increased spending, suicide rates continue to climb, with alarming increases among traditionally low-risk groups. This has caused clinicians and researchers to call for a complete rethinking of all assumptions about suicide prevention, assessment, and intervention. One form of therapy, Internal Family Systems Therapy, affords tremendous promise for sustained diminishment of lethal suicide risk. Though most familiar to trauma therapists, Internal Family Systems Therapy involving direct work with suicidal parts is a promising approach for meaningful and sustained reduction in suicide deaths. Developed by Richard Schwartz, Internal Family Systems Therapy proposes that we are all influenced greatly by internal parts, frozen by developmental adversities, and that these often-contradictory parts contribute invisibly to mood, distress, and behavior. In making research videos of patients from our database and discussing their suicide attempts, it became clear that many persons who attempt suicide are in altered states at the time of their attempt and influenced by factors other than conscious intent. Suicide intervention using this therapy involves direct work with suicidal parts and other interacting parts that generate distress and despair. Internal Family Systems theory posits that deep experiences of pain, fear, aloneness, and distress are defended by a range of different parts that attempt to contain these experiences through various internal activities. These activities unwittingly push forward inhibition, fear, self-doubt, hopelessness, desires to cut and engage in destructive behavior, addictive behavior, and even suicidal actions. Such suicidal parts are often created (and “frozen”) at young ages, and these very young parts do not understand the consequences of their influence. Experience suggests that suicidal parts can create impulsive risk behind the scenes when pain is high and emotional support reduced, with significant crisis potential. This understanding of latent suicide risk is consistent with many of our video accounts of serious suicidal acts, compiled in a database of 1104 subjects. Since 2016, consent has been obtained and records kept for 23 highly suicidal patients with initial Intention-to-Die ratings (0 = no intent, 10 = conviction to die) between 5 and 10. Using IFS parts-work intervention, risk was reduced to 0-1 in 67% of these cases, and to 4 or lower in 83% of cases. There were no suicide deaths. Case illustrations will be offered.
Keywords: suicide, internal family systems therapy, crisis management, suicide prevention
Procedia PDF Downloads 44
1461 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, on average, more than 10 tropical cyclones per year that come within damaging reach, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs, and governmental institutions. KCC’s (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events; 2) hazard attenuation functions that model the local intensity; 3) vulnerability functions that address the repair needs of local buildings exposed to the hazard; and 4) a financial module that applies policy conditions to estimate the losses incurred. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01 and 0.004. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as steel-confined concrete and SRC (Steel-Reinforced Concrete) high-rises.
Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
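The four-module structure is easy to see in miniature. The sketch below chains toy versions of the events, hazard, vulnerability, and financial modules and reads off loss levels at the two annual probabilities the abstract mentions. Every function form and number is a placeholder assumption, not KCC's calibrated model.

```python
# Schematic sketch of the events -> hazard -> vulnerability -> financial
# pipeline. All functional forms and parameters are invented placeholders.
import random
random.seed(1)

def sample_event():
    """Events module: draw one stochastic event (magnitude, distance to site)."""
    return {"magnitude": random.uniform(6.0, 8.0),
            "distance_km": random.uniform(5.0, 100.0)}

def hazard_intensity(event):
    """Hazard module: toy attenuation of intensity with distance."""
    return event["magnitude"] * 10.0 / (event["distance_km"] + 10.0)

def damage_ratio(intensity):
    """Vulnerability module: repair cost as a fraction of replacement value."""
    return min(1.0, max(0.0, (intensity - 1.0) / 6.0))

def financial_loss(ground_up, deductible=50_000.0, limit=800_000.0):
    """Financial module: apply simple policy terms to the ground-up loss."""
    return min(max(ground_up - deductible, 0.0), limit)

replacement_value = 1_000_000.0
losses = sorted(
    (financial_loss(damage_ratio(hazard_intensity(sample_event())) * replacement_value)
     for _ in range(10_000)),
    reverse=True)

# Exceedance-probability points of the kind underwriters read off the curve
# (simplification: each simulated event is treated as one year).
for annual_prob in (0.01, 0.004):
    print(f"loss exceeded with prob {annual_prob}: "
          f"{losses[int(annual_prob * len(losses))]:,.0f}")
```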
Procedia PDF Downloads 271
1460 Frequent Pattern Mining for Digenic Human Traits
Authors: Atsuko Okazaki, Jurg Ott
Abstract:
Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
Keywords: digenic traits, DNA variants, epistasis, statistical genetics
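The rule-mining idea is simple enough to demonstrate directly. The sketch below enumerates two-variant genotype patterns by brute force (the paper's pipeline wraps fpgrowth instead), scores each by its confidence P(Y = 2 | X), and assesses the best pattern by label permutation. The data are simulated, and the minimum-support threshold is an assumed setting.

```python
# Hedged sketch of digenic genotype-pattern mining with a permutation check.
# Toy simulated data; brute-force pair enumeration stands in for fpgrowth.
import random
from itertools import combinations
random.seed(0)

n_people, n_variants = 300, 20
genotypes = [[random.choice((0, 1, 2)) for _ in range(n_variants)]
             for _ in range(n_people)]
labels = [2 if random.random() < 0.45 else 1 for _ in range(n_people)]  # 2=case

def confidence(pattern, y):
    """P(Y=2 | X): fraction of carriers of the genotype pattern who are cases."""
    (i, gi), (j, gj) = pattern
    carriers = [lab for row, lab in zip(genotypes, y)
                if row[i] == gi and row[j] == gj]
    if not carriers:
        return 0.0, 0
    return sum(lab == 2 for lab in carriers) / len(carriers), len(carriers)

# enumerate all two-variant genotype patterns with a minimum support of 10
candidates = []
for i, j in combinations(range(n_variants), 2):
    for gi in (0, 1, 2):
        for gj in (0, 1, 2):
            pattern = ((i, gi), (j, gj))
            conf, n = confidence(pattern, labels)
            if n >= 10:
                candidates.append((conf, n, pattern))
conf, support, pattern = max(candidates)
print(f"best pattern {pattern}: confidence {conf:.2f}, carriers {support}")

# permutation p-value for that single pattern; a full analysis would rerun
# the whole search within each permutation to account for multiple testing
perm, hits = labels[:], 0
for _ in range(1000):
    random.shuffle(perm)
    if confidence(pattern, perm)[0] >= conf:
        hits += 1
print(f"permutation p for best pattern = {hits / 1000:.3f}")
```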
Procedia PDF Downloads 126
1459 Stimulating Effects of Media in Improving Quality of Distance Education: A Literature Based Study
Authors: Tahzeeb Mahreen
Abstract:
Distance education refers to instruction in which students are remote from the institution and only occasionally attend formal demonstration classes and teaching sessions. Media such as radio, television, computers, and the Internet are the resources and means of communication used in learning materials by many open and distance learning institutions. Media play a great part in maximizing learning opportunities, making distance education a means of increasing the country's literacy rate. This study aims at analyzing how media have affected distance education through their different mediums. The objectives of the study were (i) to determine the direct impact of media on distance education; (ii) to understand how media affect distance education pedagogy; and (iii) to find out how media work to increase students' achievement. A literature-based methodology was used, and books, peer-reviewed articles, press reports, and internet-based materials were studied. Using descriptive qualitative analysis, the researcher found that distance education programs are progressively utilizing mixes of media to deliver instruction, with a positive impact on learning along with a few challenges. In addition, the researcher's assessment varied depending on the distance learning programs, but the general view was that electronic media were moderately more supportive in enhancing the overall performance of learners. It was concluded that cognitive style, personality traits, and self-expectations are the three areas of a student's educational life most enhanced by distance education programs. An understanding of how individual learners approach learning may make it possible for the distance educator to discern a pattern of learning styles and to arrange or modify course presentations through media accordingly. Moreover, it is noted that teaching in distance education must address the evolving role of the instructor, the need to reduce resistance as conventional teachers adopt distance delivery systems, and, lastly, staff attitudes toward the use of technology. Furthermore, the results showed that media have played their part in making distance learning teachers more active, capable, and engaged in their individual work. The study also indicated a highly positive relationship between the media available at study centers and the media used in distance education. The challenge pointed out by the researcher was the clash of distance and time with communication, as the life situations of learners vary. Recommendations included recognizing the duty of the distance learning instructor to help students understand the effective use of media for their study lessons, and developing online learning communities to stay in instant connection with students.
Keywords: distance education, education, media, teaching and learning
Procedia PDF Downloads 142