Search results for: combining forecasts
231 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed
Authors: Esmat Kamel
Abstract:
This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the EU-26 and the Agadir-4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. The next question concerns any remarkable variation in the spatial/sectoral structure of exports, and to what extent it has been induced by the Agadir Agreement itself, precisely after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The paper's empirical dataset, covering the timeframe 2000-2009, was designed to account for sector-specific export and intermediate flows; the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin, and the Poisson Pseudo-Maximum Likelihood estimator was used to estimate the gravity equation. The methodological approach is threefold. It starts by conducting a hierarchical cluster analysis to classify final export flows showing a certain degree of linkage with each other. The analysis resulted in three main sectoral clusters of exports between the Agadir-4 and the EU-26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare parts sectors. The second step takes the export flows from the three clusters as the group treated with diagonal rules of origin in a double-differences approach, against an equally comparable untreated control group. The third step verifies the results through a robustness check based on propensity score matching, validating that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. Throughout this analysis, the interaction term combining treatment and time was significant for 13 of the 17 covered sectors, indicating that treatment with diagonal rules of origin increased the Agadir-4 countries' final and intermediate exports to the EU-26 by 335% on average and changed the structure and composition of Agadir-4 exports to the EU-26 countries.
Keywords: Agadir Association Agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin
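As a rough illustration of the estimation strategy described above, the sketch below fits a Poisson pseudo-maximum likelihood (PPML) gravity equation with a treatment-by-time interaction, i.e., the double-differences term of interest. The data, column names, and coefficients are hypothetical stand-ins, not the paper's dataset or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative bilateral trade panel (hypothetical columns, not the paper's data).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "exports": rng.poisson(50, n).astype(float),   # sector-level export flow
    "log_gdp_o": rng.normal(10, 1, n),             # log GDP, origin
    "log_gdp_d": rng.normal(11, 1, n),             # log GDP, destination
    "log_dist": rng.normal(7, 0.5, n),             # log bilateral distance
    "treated": rng.integers(0, 2, n),              # sector under diagonal RoO
    "post": rng.integers(0, 2, n),                 # after cumulation adopted
})
df["treated_x_post"] = df["treated"] * df["post"]  # double-differences term

X = sm.add_constant(df[["log_gdp_o", "log_gdp_d", "log_dist",
                        "treated", "post", "treated_x_post"]])
# PPML (Poisson pseudo-maximum likelihood) handles zero trade flows and
# heteroskedasticity better than log-linear OLS, hence its use in gravity work.
ppml = sm.GLM(df["exports"], X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(ppml.summary())
# A significant positive coefficient on treated_x_post would indicate that
# relaxed rules of origin raised treated sectors' exports; the implied
# percentage effect is 100 * (exp(beta) - 1).
```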
230 Investigating Reading Comprehension Proficiency and Self-Efficacy among Algerian EFL Students within Collaborative Strategic Reading Approach and Attributional Feedback Intervention
Authors: Nezha Badi
Abstract:
It has been shown in the literature that Algerian university students suffer from low levels of reading comprehension proficiency, which hinders their overall proficiency in English. This low level is mainly related to the methodology of teaching reading employed in the classroom (a teacher-centered environment), as well as to students' poor sense of self-efficacy in undertaking reading comprehension activities. Arguably, what is needed is an approach that enhances students' beliefs about their abilities to deal with different reading comprehension activities. This can be done by providing them with opportunities to take responsibility for their own learning (learner autonomy). As a result of learner autonomy, learners' beliefs about their abilities to deal with certain language tasks may increase, and hence their language learning ability. Therefore, this experimental research study attempts to assess the extent to which an integrated approach combining one particular reading approach, known as 'collaborative strategic reading' (CSR), with teacher's attributional feedback (on students' reading performance and strategy use) can improve the reading comprehension skill and the sense of self-efficacy of Algerian EFL university students. It also seeks to examine students' main reasons for their successful or unsuccessful achievements in reading comprehension activities, and whether students' attributions for their reading comprehension outcomes can be modified after exposure to the instruction. To obtain the data, different tools, including a reading comprehension test, questionnaires, an observation, an interview, and learning logs, were used with 105 second-year Algerian EFL university students. The sample was divided into three groups: one control group (with no treatment), one experimental group (CSR group) that received CSR instruction, and a second intervention group (CSR Plus group) that received teacher's attributional feedback in addition to the CSR intervention. Students in the CSR Plus group received the same experiment as the CSR group using the same tools, except that they were asked to keep learning logs, for which teacher's feedback on reading performance and strategy use was provided. The results of this study indicate that the CSR and attributional feedback intervention was effective in improving students' reading comprehension proficiency and sense of self-efficacy. However, there was not a significant change in students' adaptive and maladaptive attributions for their success and failure from the pre-test to the post-test phase. Analysis of the perception questionnaire, the interview, and the learning logs shows that students have positive perceptions of the CSR and attributional feedback instruction. Based on the findings, this study seeks to provide EFL teachers in general, and Algerian EFL university teachers in particular, with pedagogical implications on how to teach reading comprehension so as to help students achieve well and feel more self-efficacious in reading comprehension activities, and in English language learning more generally.
Keywords: attributions, attributional feedback, collaborative strategic reading, self-efficacy
229 Development of Microsatellite Markers for Dalmatian Pyrethrum Using Next-Generation Sequencing
Authors: Ante Turudic, Filip Varga, Zlatko Liber, Jernej Jakse, Zlatko Satovic, Ivan Radosavljevic, Martina Grdisa
Abstract:
Microsatellites (SSRs) are highly informative repetitive sequences of 2-6 base pairs and the most widely used molecular markers in assessing the genetic diversity of plant species. Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.) is an outcrossing diploid (2n = 18) endemic to the eastern Adriatic coast and the source of the natural insecticide pyrethrin. Due to the high repetitiveness and large size of the genome (haploid genome size of 9.58 pg), previous attempts to develop microsatellite markers using standard methods were unsuccessful. A next-generation sequencing (NGS) approach was applied to genomic DNA extracted from fresh leaves of Dalmatian pyrethrum. Sequencing was conducted on a NovaSeq6000 Illumina sequencer, yielding almost 400 million high-quality paired-end reads with a read length of 150 base pairs. Short reads were assembled by combining two approaches: (1) de novo assembly and (2) joining of overlapping paired-end reads. In total, 6,909,675 contigs were obtained, with an average contig length of 249 base pairs. Of the resulting contigs, 31,380 contained one or more microsatellite sequences; in total, 35,556 microsatellite loci were identified. Of the detected microsatellites, dinucleotide repeats were the most frequent, accounting for more than half of all microsatellites identified (21,212; 59.7%), followed by trinucleotide repeats (9,204; 25.9%). Tetra-, penta-, and hexanucleotides had similar frequencies of 1,822 (5.1%), 1,472 (4.1%), and 1,846 (5.2%), respectively. Contigs containing microsatellites were further filtered by SSR pattern type, transposon occurrences, assembly characteristics, GC content, and the number of occurrences against the previously published draft genome of T. cinerariifolium. After the selection process, 50 microsatellite loci were used for primer design. The designed primers were tested on samples from five distinct populations, and 25 of them showed a high degree of polymorphism. The selected loci were then genotyped on 20 samples belonging to one population, resulting in 17 microsatellite markers. The availability of codominant SSR markers will significantly improve knowledge of the population genetic diversity and structure as well as the complex genetics and biochemistry of this species. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project 'Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential' (PyrDiv) (IP-06-2016-9034).
Keywords: genome assembly, NGS, SSR, Tanacetum cinerariifolium
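The SSR screening step described above can be approximated with a simple regex scan over assembled contigs. A minimal sketch follows; the minimum repeat counts are common screening conventions rather than the authors' exact thresholds, and the example contig is invented.

```python
import re

# Minimum repeat counts per motif length (hypothetical, convention-style thresholds).
MIN_REPEATS = {2: 6, 3: 5, 4: 4, 5: 4, 6: 4}

def find_ssrs(seq):
    """Return (motif, repeat_count, start) for perfect microsatellites in seq."""
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # e.g. motif_len=2: a 2-bp unit repeated at least 6 times in a row
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) > 1:   # skip homopolymer runs such as AAAAAA
                hits.append((motif, len(m.group(0)) // motif_len, m.start()))
    # Note: a naive scan can report the same run twice (e.g. (CA)x9 also
    # matches as (CACA)x4); deduplication is omitted for brevity.
    return hits

contig = "GGAT" + "CA" * 9 + "TTCGA" + "AGC" * 6 + "TTAG"
for motif, reps, pos in find_ssrs(contig):
    print(f"SSR ({motif})x{reps} at position {pos}")
```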
228 The Coexistence of Creativity and Information in Convergence Journalism: Pakistan's Evolving Media Landscape
Authors: Misha Mirza
Abstract:
In recent years, the definition of journalism in Pakistan has changed, and so has the mindset of people and their approach toward a news story. For the audience, news has become more interesting than a drama or a film. This research thus provides an insight into Pakistan's evolving media landscape. It tries not only to bring forth the outcomes of cross-platform cooperation between print and broadcast journalism but also to give an insight into the interactive data visualization techniques being used. Storytelling in Pakistani journalism has evolved from depicting merely the truth to tweaking, fabricating, and producing docu-dramas. The paper aims to look into how news is translated into a visual. Pakistan possesses a diverse cultural heritage, and by engaging audiences through media, this history translates into the storytelling platforms of today. The paper explains how journalists are thriving in a converging media environment and provides an analysis of the narratives in television talk shows today. 'Jack of all, master of none' is being challenged by journalists today: one has to be a quality information gatherer and an effective storyteller at the same time. Are journalists really looking more into what sells rather than what matters? Express Tribune is a very popular news platform among the youth. Not only is its newspaper more attractive than competitors', but its style of narrative and interactive web stories also lead to well-rounded news. Interviews are used as the basic methodology to gain insight into how data visualization is accomplished. The quest to distinguish the visualization of information from the visualization of knowledge has led the author to delve into the work of David McCandless in his book 'Knowledge Is Beautiful'. Journalism in Pakistan has evolved from information to a combination of knowledge, infotainment, and comedy. What is criticized most by society most often becomes the breaking news. Circulation in today's world is carried out in cultural and social networks. In recent times, there have been many examples of people gaining overnight popularity by releasing songs with substandard lyrics or senseless videos, perhaps because creativity has taken over information. This paper thus discusses the various platforms of convergence journalism from Pakistan's perspective. The study concludes by showing how Pakistani truck art pop culture coexists with all the platforms of convergence journalism. The changing media landscape thus challenges the basic rules of journalism. The slapstick humor and 'jhatka' in Pakistani talk shows have evolved from Pakistani truck art poetry. Mobile journalism has taken over all the other mediums of journalism; however, Pakistani culture coexists with the converging landscape.
Keywords: convergence journalism in Pakistan, data visualization, interactive narrative in Pakistani news, mobile journalism, Pakistan's truck art culture
227 A Left Testicular Cancer with Multiple Metastases Nursing Experience
Authors: Syue-Wen Lin
Abstract:
Objective: This article reviews the care experience of a 40-year-old male patient who underwent a thoracoscopic right lower lobectomy following a COVID-19 infection. His complex medical history included multiple metastases (lungs, liver, spleen, and left kidney) and lung damage from COVID-19, which complicated the weaning process from mechanical ventilation. The care involved managing cancer treatment, postoperative pain, wound care, and palliative care. Methods: Nursing care was provided from August 16 to August 17, 2024. Challenges included difficulty with sputum clearance, which exacerbated the patient's anxiety and fear of reintubation. Pain management strategies combined analgesic drugs, non-drug methods, essential oil massages with family members, and playing the patient's favorite music to reduce pain and anxiety. Progressive rehabilitation began with stabilizing vital signs, followed by assistance with sitting on the edge of the bed and walking within the ward. Strict sterile procedures and advanced wound care technology were used for daily dressing changes, with meticulous documentation of wound conditions and appropriate dressing selection. Holistic cancer care and palliative measures were integrated to address the patient's physical and psychological needs. Results: The interdisciplinary care team developed a comprehensive plan addressing both physical and psychological aspects. Respiratory therapy, lung expansion exercises, and a high-frequency chest wall oscillation vest facilitated sputum expulsion and assisted in weaning from mechanical ventilation. The integration of cancer care, pain management, wound care, and palliative care led to improved quality of life and recovery. The collaborative approach between nursing staff and family ensured that the patient received compassionate and effective care. Conclusion: The complex interplay of emergency surgery, COVID-19, and advanced cancer required a multifaceted care strategy. The care team's approach, combining critical care with tailored cancer and palliative care, effectively improved the patient's quality of life and facilitated recovery. The comprehensive care plan, developed with family collaboration, provided both high-quality medical care and compassionate support for the terminally ill patient.
Keywords: multiple metastases, testicular cancer, palliative care, nursing experience
226 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer
Authors: Ke Zhang, Yongli Zhu
Abstract:
Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system whose transmission line network topology is complex. Moreover, it is located in complex terrain of mountains and grasslands, which increases the possibility of transmission line faults and makes fault location difficult after a fault occurs, resulting in serious wind power curtailment. To solve these problems, a fault location method for multi-terminal transmission lines based on wind farm characteristics and an improved single-ended traveling wave positioning method is proposed. By studying the zero-sequence current characteristics of the grounding transformer (GT) in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point; therefore, the interval containing the fault point is obtained by determining the path of the zero-sequence current. After determining the fault interval, the location of the short-circuit fault point is calculated by the traveling wave method. This article, however, uses an improved traveling wave method that achieves higher positioning accuracy by combining the single-ended traveling wave method with double-ended electrical data. In addition, a method for calculating the traveling wave velocity is derived from these improvements (in theory, the actual wave velocity), and this improved velocity calculation further increases the positioning accuracy. Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. The method overcomes the shortcomings of traditional fault location on wind farm transmission lines and is more accurate than the traditional fixed-wave-velocity approach: it can calculate the wave velocity in real time according to field conditions, solving the problem that a fixed wave velocity cannot be updated with the environment. The method is verified in PSCAD/EMTDC.
Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm
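The abstract does not give the exact derivation, but one plausible way to combine a single-ended fault reflection with double-ended arrival times, so that the wave velocity drops out of the data itself, is sketched below. The formulation and the numbers are illustrative assumptions, not the authors' published equations.

```python
def locate_fault(L, t1_first, t1_reflect, t2_first):
    """Combine single- and double-ended traveling-wave data (an assumed
    formulation, not necessarily the authors' exact one).

    L          : line length between terminals M and N (km)
    t1_first   : first wavefront arrival at terminal M (s)
    t1_reflect : arrival of the fault-point reflection at terminal M (s)
    t2_first   : first wavefront arrival at terminal N (s)
    """
    d1 = t1_reflect - t1_first   # = 2*d/v     (single-ended round trip)
    d12 = t1_first - t2_first    # = (2*d-L)/v (double-ended difference)
    v = L / (d1 - d12)           # wave velocity inferred from the data itself
    d = 0.5 * v * d1             # distance from terminal M to the fault
    return d, v

# Hypothetical example: 100 km line, fault at 30 km, v = 2.9e5 km/s.
L, d_true, v_true = 100.0, 30.0, 2.9e5
t1 = d_true / v_true
t1r = 3 * d_true / v_true
t2 = (L - d_true) / v_true
print(locate_fault(L, t1, t1r, t2))  # -> approximately (30.0, 290000.0)
```

Because v is computed from the measured arrival times rather than assumed, it tracks actual line conditions, which is the spirit of the real-time velocity improvement claimed above.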
225 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection
Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan
Abstract:
Computed tomography (CT) scanning of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shield(s) during CT scans and to investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned, together with a water-based marker, by a multislice CT scanner (SOMATOM Definition Flash) under either a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose-response curve was previously fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes exhibiting the regions of interest on the zygomatic, orbital, and nasal bones of the head phantom, as well as the water-based marker, were used to calculate signal-to-noise and contrast-to-noise ratios. The application of barium sulfate and bismuth-antimony shields decreased the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of DSCT images were decreased separately by the barium sulfate and bismuth-antimony shields, resulting in an overall reduction of the CNR. In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and eyeballs while preferentially decreasing the signal-to-noise ratios upon use of the barium sulfate shield. The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues close to the shield; the application of these shields for eye protection is hence not recommended when seeking intraorbital lesions.
Keywords: computed tomography, barium sulfate shield, dose saving, image quality
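The SNR and CNR figures of merit used above can be computed directly from ROI pixel statistics. Definitions vary somewhat across studies; the sketch below uses one common pair of definitions, with invented Hounsfield-unit samples rather than the study's images.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest (one common definition)."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi, background):
    """Contrast-to-noise ratio between an ROI and a reference region."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

# Hypothetical HU samples from a CT slice: orbital bone ROI vs. water marker.
rng = np.random.default_rng(1)
bone = rng.normal(900, 30, 400)   # bright, fairly uniform bone ROI
water = rng.normal(0, 30, 400)    # water-based marker near 0 HU
print(f"SNR(bone) = {snr(bone):.1f}, CNR(bone vs water) = {cnr(bone, water):.1f}")
```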
224 Integrating Computer-Aided Manufacturing and Computer-Aided Design for Streamlined Carpentry Production in Ghana
Authors: Benson Tette, Thomas Mensah
Abstract:
As a developing country, Ghana has high potential to harness the economic value of every industry. Two of the industries that produce below capacity are handicrafts (for instance, carpentry) and information technology (i.e., computer science). To boost production and maintain competitiveness, the carpentry sector in Ghana needs more effective manufacturing procedures that are also more affordable. This issue can be addressed using computer-aided manufacturing (CAM) technology, which automates the fabrication process and decreases the amount of time and labor needed to make wood goods. Yet the integration of CAM in carpentry-related production is rarely explored. To streamline the manufacturing process, this research investigates the equipment and technology currently used in the Ghanaian carpentry sector for automated fabrication. It examines the various CAM technologies, such as Computer Numerical Control (CNC) routers, laser cutters, and plasma cutters, that are accessible to Ghanaian carpenters yet unexplored, and investigates their potential to enhance the production process. To achieve this objective, 150 carpenters, 15 software engineers, and 10 policymakers were interviewed using structured questionnaires. The responses provided by the 175 respondents were processed to eliminate outliers, and omissions were corrected using multiple imputation techniques. The processed responses were analyzed through thematic analysis. The findings showed that adaptation and integration of CAD software with CAM technologies would speed up the design-to-manufacturing process for carpenters. Achieving such results entails first examining the capabilities of current CAD software, then determining what new functions and resources are required to improve the software's suitability for carpentry tasks. Responses from both carpenters and computer scientists showed that it is highly practical and achievable to streamline the design-to-manufacturing process by modifying and combining CAD software with CAM technology. Making the carpentry-software integration more useful for carpentry projects would necessitate investigating the capabilities of current CAD software and identifying the additional features and tools required in the Ghanaian ecosystem. In conclusion, the Ghanaian carpentry sector has a chance to increase productivity and competitiveness through the integration of CAM technology with CAD software. Carpentry companies may lower labor costs and boost production capacity by automating the fabrication process, giving them a competitive advantage. This study offers representative, implementation-ready recommendations as well as important insights into the equipment and technologies available for automated fabrication in the Ghanaian carpentry sector.
Keywords: carpentry, computer-aided manufacturing (CAM), Ghana, information technology (IT)
223 Bacteriocin-Antibiotic Synergetic Consortia: Augmenting Antimicrobial Activity and Expanding the Inhibition Spectrum of Vancomycin Resistant and Methicillin Resistant Staphylococcus aureus
Authors: Asma Bashir, Neha Farid, Kashif Ali, Kiran Fatima
Abstract:
Background: Bacteriocins are a subclass of antimicrobial peptides that are becoming extremely important in treatments. Bacteriocins can be used in place of, or in addition to, traditional antibiotics. A variety of infections, including those caused by Vancomycin-Resistant Staphylococcus aureus (VRSA) and Methicillin-Resistant Staphylococcus aureus (MRSA), can be treated using the targeted spectrum of activity of these compounds. Method: This study aimed to examine the efficiency of antibiotics and bacteriocins against VRSA and MRSA. The effects of bacteriocins such as enterocin KAE01, enterocin KAE03, enterocin KAE05, and enterocin KAE06, isolated from Enterococcus faecium strains, alone and in combination with vancomycin and methicillin, were examined. The selection technique utilized the minimum inhibitory concentrations (MICs) against the Gram-positive indicator strain ATCC 6538 Methicillin-Resistant Staphylococcus aureus (MRSA) and the indicator strain KSA 02 Vancomycin-Resistant Staphylococcus aureus (VRSA). Results: We report the isolation and identification of enterocins KAE01, KAE03, KAE05, and KAE06 from food isolates of Enterococcus faecium (KAE01, KAE03, KAE05, and KAE06). After isolation, the proteins were partially purified by ammonium sulphate precipitation and further purified by fast protein liquid chromatography (FPLC). Combinations of enterocin KAE01 + citric acid + lactic acid, microcin J25 + reuterin + citric acid, and microcin J25 + reuterin + lactic acid showed synergistic effects (FIC index = 0.5) against Vancomycin-Resistant Staphylococcus aureus (VRSA). In addition, a moderately synergistic interaction (FIC index = 0.75) was seen between pediocin PA-1 + citric acid + lactic acid and reuterin + citric acid + lactic acid against L. ivanovii HPB28. In the presence of acids, nisin Z exhibited a modestly synergistic effect (FIC index = 0.625-0.75); however, it exhibited additive effects (FIC index = 1) when combined with reuterin or pediocin PA-1 against L. ivanovii HPB28. The efficacy of synergistic consortia against Gram-positive bacteria was examined. Conclusion: Combining antimicrobials with various modes of action boosted efficacy and expanded the spectrum of inhibition, particularly against multidrug-resistant pathogens, according to our research.
Keywords: Enterococcus faecium, bacteriocin, antimicrobial resistance, antagonistic activity, vancomycin-resistant Staphylococcus aureus, methicillin-resistant Staphylococcus aureus
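The FIC index used to classify these interactions comes from checkerboard assays. A minimal sketch of the calculation follows, with hypothetical MIC values; the cut-offs mirror the values implied by the abstract, although category labels vary across the literature.

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for a two-agent checkerboard."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic):
    # Cut-offs as implied by the abstract (<= 0.5 synergy, ~0.75 moderate,
    # 1 additive); exact boundaries differ between authors.
    if fic <= 0.5:
        return "synergistic"
    if fic <= 0.75:
        return "moderately synergistic"
    if fic <= 1.0:
        return "additive"
    return "indifferent/antagonistic"

# Hypothetical MICs (ug/mL) for an enterocin + organic acid pair against VRSA.
fic = fic_index(mic_a_combo=2, mic_a_alone=8, mic_b_combo=1, mic_b_alone=4)
print(fic, interpret(fic))  # 0.5 -> synergistic
```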
222 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model
Authors: Seydou Sinde
Abstract:
The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values; fluid rheology and well hydraulics are among these essential factors. Mud plastic viscosity, yield point, flow behavior (power) index, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating variables and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to analyze and determine the exact relationships between the dependent parameter, SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term over linear regression model 1 (with three coefficients), the former did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models; all provided satisfactory results, with the multivariable polynomial quadratic regression model giving the best and most accurate results. These models are useful not only to monitor and predict, with accuracy, the values of SPP but also to check early for the integrity of the well hydraulics and to take corrective action should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression
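As a rough sketch of the two modeling ingredients described above, the code below evaluates a Herschel-Bulkley shear stress (the rheological model feeding the frictional pressure term) and fits a quadratic multivariable regression of SPP on the three dimensionless groups. All data and coefficients are synthetic stand-ins, not the paper's field data or fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def herschel_bulkley(gamma_dot, tau_y, K, n):
    """Herschel-Bulkley model: tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * gamma_dot**n

# Hypothetical training data: [RPMd, TRQd, DL] vs. measured standpipe pressure.
rng = np.random.default_rng(2)
X = rng.uniform(0.1, 1.0, size=(200, 3))
spp = (1500 + 800 * X[:, 0] + 400 * X[:, 1] + 120 * X[:, 0] * X[:, 1]
       + 60 * X[:, 2]**2 + rng.normal(0, 20, 200))   # synthetic SPP (psi)

# Quadratic multivariable regression, analogous in spirit to the paper's
# six-coefficient polynomial model (sklearn fits more cross terms here).
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), spp)
print("R^2 =", model.score(poly.transform(X), spp))
```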
221 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification
Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens
Abstract:
Aneurysmal subarachnoid hemorrhage (aSAH) can lead to significant morbidity and mortality, and its outcome has traditionally been difficult to predict. In this study, graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model. The study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients, and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median (IQR) of 7 (8.5) days after admission. The best-performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage
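A minimal sketch of the feature pipeline described above: graph-theoretic features are extracted from weighted connectomes and fed to a Random Forest scored by AUROC. The toy connectomes and outcome labels below are random stand-ins for the study's DTI data, so the score is only illustrative.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)

def graph_features(weights):
    """Node strength and betweenness centrality from a weighted connectome.
    (In practice edge weights are often inverted for betweenness so that
    stronger connections correspond to shorter paths.)"""
    G = nx.from_numpy_array(weights)
    strength = np.array([s for _, s in G.degree(weight="weight")])
    betweenness = np.array(list(nx.betweenness_centrality(G, weight="weight").values()))
    return np.concatenate([strength, betweenness])

# Hypothetical cohort: 56 patients, 20-node toy connectomes (the study used 176 ROIs).
X, y = [], rng.integers(0, 2, 56)          # y = dichotomized mRS outcome
for _ in y:
    W = rng.random((20, 20))
    W = (W + W.T) / 2                      # symmetric, undirected
    np.fill_diagonal(W, 0)
    X.append(graph_features(W))
X = np.array(X)

probs = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("AUROC =", roc_auc_score(y, probs))  # ~0.5 here, since labels are random
```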
220 GenAI Agents in Product Management: A Case Study from the Manufacturing Sector
Authors: Aron Witkowski, Andrzej Wodecki
Abstract:
Purpose: This study aims to explore the feasibility and effectiveness of utilizing Generative Artificial Intelligence (GenAI) agents as product managers within the manufacturing sector. It seeks to evaluate whether current GenAI capabilities can fulfill the complex requirements of product management and deliver comparable outcomes to human counterparts. Study Design/Methodology/Approach: This research involved the creation of a support application for product managers, utilizing high-quality sources on product management and generative AI technologies. The application was designed to assist in various aspects of product management tasks. To evaluate its effectiveness, a study was conducted involving 10 experienced product managers from the manufacturing sector. These professionals were tasked with using the application and providing feedback on the tool's responses to common questions and challenges they encounter in their daily work. The study employed a mixed-methods approach, combining quantitative assessments of the tool's performance with qualitative interviews to gather detailed insights into the user experience and perceived value of the application. Findings: The findings reveal that GenAI-based product management agents exhibit significant potential in handling routine tasks, data analysis, and predictive modeling. However, there are notable limitations in areas requiring nuanced decision-making, creativity, and complex stakeholder interactions. The case study demonstrates that while GenAI can augment human capabilities, it is not yet fully equipped to independently manage the holistic responsibilities of a product manager in the manufacturing sector. Originality/Value: This research provides an analysis of GenAI's role in product management within the manufacturing industry, contributing to the limited body of literature on the application of GenAI agents in this domain. It offers practical insights into the current capabilities and limitations of GenAI, helping organizations make informed decisions about integrating AI into their product management strategies. Implications for Academic and Practical Fields: For academia, the study suggests new avenues for research in AI-human collaboration and the development of advanced AI systems capable of higher-level managerial functions. Practically, it provides industry professionals with a nuanced understanding of how GenAI can be leveraged to enhance product management, guiding investments in AI technologies and training programs to bridge identified gaps.
Keywords: generative artificial intelligence, GenAI, NPD, new product development, product management, manufacturing
219 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and gets instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: (1) resource fragmentation due to the virtual machine boundary, and (2) poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine, m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
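A toy rendering of the embedding example above, pairing the instances of two communicating microservices onto shared machines so each pair talks over localhost. The instance and machine names come from the text; the pairing logic is only an illustration, not the paper's formal method.

```python
from itertools import zip_longest

def embed(service_a, service_b):
    """Pair instances of two communicating microservices onto shared machines."""
    placements = []
    for i, (a, b) in enumerate(zip_longest(service_a, service_b), start=1):
        machine = f"m{i}"
        # Each machine hosts one container per service; co-located instances
        # communicate over the localhost interface, with no load balancer.
        placements.append((machine, [x for x in (a, b) if x is not None]))
    return placements

# Example from the text: A has instances a1, a2; B has instances b1, b2.
for machine, procs in embed(["a1", "a2"], ["b1", "b2"]):
    print(machine, "->", procs)
# m1 -> ['a1', 'b1']
# m2 -> ['a2', 'b2']
```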
218 Separate Collection System of Recyclables and Biowaste Treatment and Utilization in Metropolitan Area Finland
Authors: Petri Kouvo, Aino Kainulainen, Kimmo Koivunen
Abstract:
The separate collection system for recyclable wastes in the Helsinki region was ranked second best among European capitals. The collection system includes paper, cardboard, glass, metals, and biowaste. Residual waste is collected and used in energy production. The collection system, excluding paper, is managed by the Helsinki Region Environmental Services (HSY), a public organization owned by four municipalities (Helsinki, Espoo, Kauniainen, and Vantaa). Paper collection is handled by the producer responsibility scheme. The efficiency of the collection system in the Helsinki region relies on good coverage of door-to-door collection. All properties with 10 or more dwelling units are required to source-separate biowaste and cardboard; this covers about 75% of the population of the area. The obligation is extended to glass and metal in properties with 20 or more dwelling units. Other success factors include public awareness campaigns and a fee system that encourages recycling. As a result of waste management regulations for source separation of recyclables and biowaste, a recycling rate of nearly 50 percent for household waste has been reached. For households and small and medium-sized enterprises, a fleet of five sorting stations is available. More than 50 percent of the waste received at sorting stations is utilized as material. The separate collection of plastic packaging in Finland will begin in 2016 within the producer responsibility scheme. HSY started supplementing the national bring-point system with door-to-door collection, and pilot operations will begin in spring 2016. The results of the plastic packaging pilot project have been encouraging: by the end of 2016, over 3,500 apartment buildings had joined the pilot, and more than 1,800 tons of plastic packaging had been collected separately. In the summer of 2015, a novel partial-flow digestion process combining digestion and tunnel composting was adopted for source-separated household and commercial biowaste management. The product gas from the digestion process is converted into heat and electricity in a piston engine and an organic Rankine cycle process with very high overall efficiency. This paper describes the efficient collection system and discusses key success factors, main obstacles, and lessons learned, as well as the partial-flow process for biowaste management.
Keywords: biowaste, HSY, MSW, plastic packages, recycling, separate collection
217 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges, such as enhancing the system's ability to operate in varied environmental conditions and further expanding the dataset for training, were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN
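A minimal PyTorch sketch of the architectural idea described above: an encoder-decoder that uses dilated convolutions to enlarge the receptive field and emits one heatmap per joint. The layer sizes and joint count are assumptions for illustration; the project's actual network is not specified here.

```python
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    """Illustrative encoder-decoder with dilated convolutions for joint heatmaps."""
    def __init__(self, n_joints=17):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            # Dilated convolutions grow the receptive field without further
            # downsampling, capturing long-range dependencies across the frame.
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_joints, 4, stride=2, padding=1),
        )  # one heatmap per joint; the argmax of each gives a joint location

    def forward(self, x):
        return self.decoder(self.encoder(x))

frames = torch.randn(1, 3, 256, 256)   # one RGB video frame
heatmaps = TinyPoseNet()(frames)
print(heatmaps.shape)                  # torch.Size([1, 17, 256, 256])
```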
216 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm for Max-Cut graphs at the time. While classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also points to the reality of a real quantum advantage which, if the hardware continues to improve, could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits constraining the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. While the final objective of the work is to create an algorithm that can consistently beat the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
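A gradient-free (mu + lambda) evolutionary strategy over the 2P QAOA angles can be sketched as below. The cost function here is a smooth classical stand-in; in real use it would be the Max-Cut expectation returned by a quantum simulator or hardware for the candidate angles, and those evaluations are what the parallelization targets.

```python
import numpy as np

rng = np.random.default_rng(4)
P = 2  # QAOA depth -> 2*P angles (gammas then betas)

def neg_qaoa_expectation(angles):
    """Stand-in for -<C>, the negated Max-Cut expectation a simulator would
    return for the given (gamma, beta) angles; the EA minimizes this."""
    return -np.sum(np.sin(angles[:P]) * np.cos(angles[P:]))

def evolve(pop_size=20, n_offspring=40, generations=50, sigma=0.3):
    pop = rng.uniform(0, np.pi, size=(pop_size, 2 * P))
    for _ in range(generations):
        parents = pop[rng.integers(0, pop_size, n_offspring)]
        children = parents + rng.normal(0, sigma, parents.shape)  # Gaussian mutation
        both = np.vstack([pop, children])
        # Fitness evaluations are independent, so this loop parallelizes well.
        fitness = np.array([neg_qaoa_expectation(a) for a in both])
        pop = both[np.argsort(fitness)[:pop_size]]  # (mu + lambda) selection
    return pop[0], -neg_qaoa_expectation(pop[0])

best_angles, best_expectation = evolve()
print(best_angles, best_expectation)
```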
215 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast
Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi
Abstract:
The presentation will report the activities managed during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. The campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness, and (2) to collect night sky brightness data and test a protocol for applications to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not a common case in Italy, and undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. Combining environmental monitoring and communication actions, this effort will contribute to promoting good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increasing citizen awareness through star gazing, night photography, and active participation in field campaign measurements.
Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education
214 Exploring the Entrepreneur-Function in Uncertainty: Towards a Revised Definition
Authors: Johan Esbach
Abstract:
The entrepreneur has traditionally been defined through various historical lenses, emphasising individual traits, risk-taking, speculation, innovation, and firm creation. However, these definitions often fail to address the dynamic nature of modern entrepreneurial functions, which respond to unpredictable uncertainties and transition to routine management as certainty is achieved. This paper proposes a revised definition, positioning the entrepreneur as a dynamic function rather than a human construct, one that emerges to address specific uncertainties in economic systems but fades once uncertainty is resolved. By examining historical definitions and their limitations, including the works of Cantillon, Say, Schumpeter, and Knight, this paper identifies a gap in the literature and develops a generalised definition of the entrepreneur. The revised definition challenges conventional thought by shifting focus from static attributes such as alertness, traits, and firm creation to a dynamic role that includes reliability, adaptation, scalability, and adaptability. The methodology employs a mixed approach, combining theoretical analysis and case study examination to explore the dynamic nature of the entrepreneurial function in relation to uncertainty. The selection of case studies includes companies like Airbnb, Uber, Netflix, and Tesla, as these firms demonstrate a clear transition from entrepreneurial uncertainty to routine certainty. The data from the case studies is analysed qualitatively, focusing on the patterns of the entrepreneurial function across the selected companies. These results are then validated using quantitative analysis derived from an independent survey. The primary finding of the paper validates the entrepreneur as a dynamic function rather than a static, human-centric role. In considering the transition from uncertainty to certainty in companies like Airbnb, Uber, Netflix, and Tesla, the study shows that the entrepreneurial function emerges explicitly to address market, technological, or social uncertainties. Once these uncertainties are resolved and certainty in the operating environment is established, the need for the entrepreneurial function ceases, giving way to routine management and business operations. The paper emphasises the need for a definitive model that responds to the temporal and contextualised nature of the entrepreneur. In adopting the revised definition, the entrepreneur is positioned to play a crucial role in the reduction of uncertainties within economic systems; once the uncertainties are addressed, certainty is manifested in new combinations or new firms. Finally, the paper outlines policy implications for fostering environments that enable the entrepreneurial function and its transition.
Keywords: dynamic function, uncertainty, revised definition, transition
213 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously
Authors: S. Mehrab Amiri, Nasser Talebbeydokhti
Abstract:
Modeling sediment transport processes by means of a numerical approach often poses severe challenges. A number of techniques have been suggested for solving flow and sediment equations in decoupled, semi-coupled, or fully coupled forms. Furthermore, in order to capture flow discontinuities, techniques such as artificial viscosity and shock fitting have been proposed, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centred (FORCE) scheme is applied to produce the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream-centered Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is employed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of the Feed-Forward Back-Propagation (FFBP) type estimates sediment flow discharge in the model, rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions, and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating a dam break on a dry movable bed and bed level variations at an alluvial junction. The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable-bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, to estimate sediment flow discharge leads to more accurate results.
Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations
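For reference, the First-Order Centred (FORCE) flux named above is the average of the Lax-Friedrichs and Lax-Wendroff fluxes at a cell interface. A 1D shallow water sketch follows; the paper's model is the coupled 2D shallow water + Exner system, so this only illustrates the flux formula itself, with hypothetical cell states.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(U):
    """Physical flux for 1D shallow water, with U = [h, h*u]."""
    h, hu = U
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

def force_flux(UL, UR, dx, dt):
    """FORCE flux: average of the Lax-Friedrichs and Lax-Wendroff fluxes."""
    FL, FR = swe_flux(UL), swe_flux(UR)
    lf = 0.5 * (FL + FR) - 0.5 * (dx / dt) * (UR - UL)    # Lax-Friedrichs flux
    U_lw = 0.5 * (UL + UR) - 0.5 * (dt / dx) * (FR - FL)  # Lax-Wendroff state
    return 0.5 * (lf + swe_flux(U_lw))

# Interface between a deep cell and a shallow cell (hypothetical values).
UL = np.array([2.0, 0.0])  # h = 2 m, still water
UR = np.array([1.0, 0.0])  # h = 1 m, still water
print(force_flux(UL, UR, dx=1.0, dt=0.01))
```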
212 Integrating Qualitative and Behavioural Insights to Increase the Take-Up of an Education Savings Program for Low Income Canadians
Authors: Mathieu Audet, Monica Soliman, Emilie Eve Gravel, Rebecca Friesdorf
Abstract:
Access to higher education is critical for reducing social inequalities. The Canada Learning Bond (CLB) is a government savings incentive aimed at increasing access to higher education for children of low income families by providing money toward a Registered Education Savings Plan. To better understand the educational and financial decision-making of low income families, Employment and Social Development Canada conducted qualitative fieldwork with eligible parents and children, teachers, and community organizations promoting the Bond. Insights from this fieldwork were then used to develop letters that better target the needs and experiences of eligible families. In the present study, we conducted a randomized controlled trial with children ages 12 to 13, the oldest cohort of eligible children, to test the effectiveness of the new letters. Parents or caregivers of 150,088 eligible children were assigned to one of five letter conditions promoting the Bond or to a control condition that did not receive a letter. The letter conditions were: (a) the standard letter from past outreach; (b) a letter presenting the exact amount the child was eligible to receive, enhancing the salience of benefits; (c) a letter with a social norm; (d) a letter with an image emphasizing the feasibility of higher education by presenting the diversity of options (i.e., college, trade schools, apprenticeships), since many interviewed participants viewed university as unfeasible; and (e) a letter minimizing references to 'saving' (i.e., not framing the Bond explicitly as a savings incentive), a concept that did not resonate with low income families who felt they could not afford to save. The exact amount was also presented in letters (c) through (e). The letter minimizing references to 'saving' and presenting the exact amount had the highest net take-up rate at 6.6%, compared to 3.5% for the standard letter group. Furthermore, this trial's BI-informed letters showed the largest impact on take-up so far, with a net take-up of 5.7% compared to 3.0% and 3.9% in the first two trials. This research highlights the value of mixed-methods approaches combining qualitative and behavioural insights methods for developing context-sensitive interventions for social programs. By gaining a deeper understanding of the needs and experiences of program users through qualitative fieldwork, and then integrating these insights into behaviourally informed communications, we were able to increase take-up of an education savings program, which may ultimately improve access to higher education for children of low income families.
Keywords: access to higher education, behavioral insights, government, innovation, mixed-methods, social programs
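As a quick plausibility check of the reported lift (6.6% vs. 3.5% take-up), a two-proportion z-test can be run; the abstract does not give per-arm counts, so the arm size below is a hypothetical assumption.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical arm sizes: the abstract reports take-up rates but not
# per-condition counts, so assume roughly 25,000 children per arm here.
n = 25_000
successes = [int(0.066 * n), int(0.035 * n)]  # best letter vs. standard letter
stat, pval = proportions_ztest(successes, [n, n])
print(f"z = {stat:.1f}, p = {pval:.2g}")      # a large, highly significant gap
```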
Procedia PDF Downloads 124
211 From Servicescape to Servicespace: Qualitative Research in a Post-Cartesian Retail Context
Authors: Chris Houliez
Abstract:
This study addresses the complex dynamics of the modern retail environment, focusing on how the ubiquitous nature of mobile communication technologies has reshaped the shopper experience and tested the limits of the conventional “servicescape” concept commonly used to describe retail experiences. The objective is to redefine the conceptualization of retail space by introducing an approach to space that aligns with a retail context where physical and digital interactions are increasingly intertwined. To offer a more shopper-centric understanding of the retail experience, this study draws from phenomenology, particularly Henri Lefebvre’s work on the production of space. The presented protocol differs from traditional methodologies by not making assumptions about what constitutes a retail space. Instead, it adopts a perspective based on Lefebvre’s seminal work, which posits that space is not a three-dimensional container, as the term “servicescape” commonly implies, but is actively produced through shoppers’ spatial practices. This approach allows for an in-depth exploration of the retail experience by capturing the everyday spatial practices of shoppers without preconceived notions of what constitutes a retail space. The protocol was tested with eight participants during 209 hours of day-long field trips, immersing the researcher in the shopper’s lived experience by combining multiple data collection methods, including participant observation, videography, photography, and both pre-fieldwork and post-fieldwork interviews. By giving equal importance to both locations and connections, this study unpacked various spatial practices that contribute to the production of retail space. The findings highlight the relative inadequacy of some traditional retail space conceptualizations, which often fail to capture the fluid nature of contemporary shopping experiences. The study’s emphasis on the customization process, through which shoppers optimize their retail experience by producing a “fully lived retail space,” offers a more comprehensive understanding of consumer shopping behavior in the digital age. In conclusion, this research presents a significant shift in the conceptualization of retail space. By employing a phenomenological approach rooted in Lefebvre’s theory, the study provides a more effective framework for understanding the retail experience in the age of mobile communication technologies. Although this research is limited by its small sample size and the demographic profile of participants, it offers valuable insights into the spatial practices of modern shoppers and their implications for retail researchers and retailers alike.
Keywords: shopper behavior, mobile telecommunication technologies, qualitative research, servicescape, servicespace
Procedia PDF Downloads 25
210 Changing MBA Identities: Using Critical Reflection inside and out in Finding a New Narrative
Authors: Keith Schofield, Leigh Morland
Abstract:
Storytelling is an established means of leadership and management development and is also considered a form of leadership of self and others in its own right. This study focuses on the utility of storytelling in the development of management narratives in an MBA programme; sources include programme participants as well as international recruiters, whose voices are often heard only in terms of economic contribution and globalisation. For many MBA candidates, the return to study requires the development of a new identity which complements their professional identity; each candidate has their own journey and expectations, and the use of story can enable candidates to explore their aspirations and assumptions and give voice to previously unspoken ideas. For international recruitment, the story of market development and change must be captured if MBAs are to remain fit for purpose. If used effectively, story acts as a form of critical reflection that can inform the learning journeys of individuals and their emerging identities, as well as the ongoing design and development of programmes. The landscape of management education is shifting; the MBA is beginning to attract a different kind of candidate: some are younger than before, others are seeking validation for their existing work practices, and yet more are entrepreneurial and wish to capitalise on an institutional experience to further their careers. This shift in context creates uncertainty and ambiguity for programme managers and recruiters, thus requiring institutions to create a new MBA narrative. This study utilises Lego SeriousPlay as the means of engaging programme participants and international agents in telling the story of their MBA. We asked MBA participants to tell the story of their leadership and management aspirations and compare these to stories of their development journeys, allowing for critical reflection on their respective development gaps. We asked international recruiters, who act as university agents and promote courses in students’ countries of origin, to explore their mental models of MBA candidates and their learning agenda. The purpose of this process was to explore the agents’ perception of the MBA programme and to articulate the student journey from a recruitment perspective. The paper’s unique contribution is in combining these stories in order to explore the assumptions that determine programme design. Data drawn from reflective statements, together with images of Lego ‘builds’, created the opportunity to compare the mental models of these groups. Findings will inform the design of the MBA journey and experience; we review the extent to which the changing identities of learners are congruent with programme design. Data from international recruiters also determine the extent to which marketing and recruitment strategies resonate with would-be candidates.
Keywords: critical reflection, programme management, recruitment, storytelling
Procedia PDF Downloads 226
209 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with newer features of the MPI standard. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
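The sketch below illustrates the general DMP pattern described above, assuming mpi4py: agents partitioned across ranks, non-blocking sends and receives posted so that communication overlaps with local computation, and a final reduction. All names and the toy dynamics are our illustrative assumptions, not the authors' implementation.

```python
# A sketch under stated assumptions; not the authors' implementation.
# Run with e.g.: mpiexec -n 4 python abem_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1_000_000 // size                     # agents owned by this rank
wealth = np.random.default_rng(rank).gamma(2.0, 1.0, n_local)

# Aggregate demand routed through a "sales outlet" on a neighbouring rank,
# a stand-in for the local-branch trick that cuts message counts.
send_buf = np.array([0.1 * wealth.sum()])
recv_buf = np.empty(1)
right, left = (rank + 1) % size, (rank - 1) % size
requests = [
    comm.Isend(send_buf, dest=right, tag=0),
    comm.Irecv(recv_buf, source=left, tag=0),
]

wealth *= 0.98                                  # local step runs while messages are in flight
MPI.Request.Waitall(requests)                   # complete the overlapped exchange
wealth += recv_buf[0] / n_local                 # apply the incoming demand

total = comm.reduce(wealth.sum(), op=MPI.SUM, root=0)
if rank == 0:
    print(f"aggregate wealth after one step: {total:.1f}")
```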
Procedia PDF Downloads 130
208 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies
Authors: Praniil Nagaraj
Abstract:
This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. The paper's contributions fall into three main areas. First, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution is the description of the data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration, and two case studies are presented as examples. The first explores improving the efficiency of the distribution of food and necessities, as well as medical assistance, driven by charitable organizations. The second examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocations and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life of the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.
Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis
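A minimal sketch of the kind of regression analysis described, assuming synthetic data: an unsheltered-count outcome regressed on unemployment, a housing-affordability proxy, and a labor-demand proxy. Variable names and coefficients are illustrative assumptions only.

```python
# A sketch under stated assumptions; data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                         # hypothetical tract-month observations
unemployment = rng.uniform(3, 12, n)            # percent
rent_to_income = rng.uniform(0.2, 0.6, n)       # housing-affordability proxy
job_openings = rng.uniform(50, 500, n)          # labor-demand proxy
unsheltered = (5.0 * unemployment + 80.0 * rent_to_income
               - 0.02 * job_openings + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([unemployment, rent_to_income, job_openings]))
result = sm.OLS(unsheltered, X).fit()
print(result.params)                            # fitted coefficients per covariate
```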
Procedia PDF Downloads 48
207 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area
Authors: Sara Saboonian, Pierre Filion
Abstract:
There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first, the recentralization approach, proposes intensification, multi-functionality and more reliance on public transit and walking. It thus offers an alternative to the prevailing low-density, spatial specialization and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than on highly engineered solutions to deal with the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space-consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice, despite rising awareness of the benefits of green infrastructure. Methods include reviewing the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, surveying residents’ preferences regarding alternative suburban development models, and interviewing officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focussed on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres. Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been made of the wide range of advantages of green infrastructure, which explains limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings and reliance on renewable energy. The paper will advance a planning framework that fuses green infrastructure with recentralization, thus ensuring the achievement of higher density and reduced reliance on the car along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.
Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization
Procedia PDF Downloads 133
206 Estimating Age in Deceased Persons from the North Indian Population Using Ossification of the Sternoclavicular Joint
Authors: Balaji Devanathan, Gokul G., Raveena Divya, Abhishek Yadav, Sudhir K. Gupta
Abstract:
Background: Age estimation is a common problem in administrative settings, medico-legal cases, and among athletes competing in different sports. Medico-legal questions of age arise in hospitals in cases of criminal abortion, consent to surgery or a general physical examination, infanticide, impotence, sterility, etc. Progress in medical imaging has benefited forensic anthropology in various ways, most notably in the area of determining bone age. Multi-slice computed tomography is an efficient method for studying the epiphyseal union and other differences in the body's bones and joints. No substantial database is available on the Indian population, so the authors performed this original study to obtain an Indian-based database. Methodologies: The appearance and fusion of the ossification centres of the sternoclavicular joint were evaluated, and grades were assigned accordingly. Using MSCT scans, we examined the relationship between the age of the deceased and changes in the sternoclavicular joint during appearance and union in 500 cases (327 males and 173 females) in the age range of 0 to 25 years. Results: According to our research, the ossification centre for the medial end of the clavicle first appeared at 18.5 years in males and 17.1 years in females. Partial union was observed at 20.4 years in males and 20.2 years in females. The earliest age of complete fusion was 23 years for males and 22 years for females. The age range for fusion of the sternebrae into one is 11–24 years for females and 17–24 years for males. The fusion of the third and fourth sternebrae was completed by 11 years, and the fusions of the first and second and the second and third sternebrae occur by the age of 17 years. Furthermore, correlation and reliability analyses were carried out, which yielded significant results. Conclusion: With numerous exceptions, the projected values are consistent with a large number of previously developed age charts. These variations may be caused by ethnic or regional heterogeneity in the ossification pattern of the population under study. The pattern of bone maturation did not differ significantly between the sexes. The study's age range was 0 to 25 years, and, for obvious reasons, the majority of the cases fell in the last five years of this range, between 20 and 25 years of age. This resulted in a comparatively smaller study population for the 12–18 age group, where age estimation is crucial because of current legal requirements. Specialized PMCT research in this age range will be required to produce population-standard charts for age estimation. The medial end of the clavicle is one of several ossification foci being thoroughly investigated, since they are challenging to assess with a traditional X-ray examination. Combining the two (the medial clavicle and the sternum) has been shown to yield valid results when establishing an age above eighteen.
Keywords: age estimation, sternoclavicular joint, medial clavicle, computed tomography
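As an illustration of the grade-versus-age analysis, the sketch below scores ossification on an ordinal scale and computes a rank correlation with age. The data and grading thresholds are synthetic stand-ins loosely echoing the reported ages, not the study's measurements.

```python
# A sketch under stated assumptions; data are synthetic stand-ins.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
age = rng.uniform(10, 25, 300)
# Ordinal grade: 0 = centre not appeared, 1 = appeared, 2 = partial union,
# 3 = complete union; thresholds loosely echo the reported ages.
grade = np.digitize(age + rng.normal(0, 1.5, 300), bins=[18.0, 20.3, 23.0])

rho, p = spearmanr(grade, age)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```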
Procedia PDF Downloads 46
205 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards protein-coding regions alone; therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. A logistic regression algorithm with a sigmoid activation function was used, and a parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718, for the first, second and third iterations, respectively. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI achieved an area under the ROC curve of 0.97, indicating improved predictive ability, and identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
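The sketch below illustrates the core of such a pipeline: a six-feature logistic regression with a sigmoid activation, fitted by gradient steps on the log-likelihood, then scored with an ROC metric and a simple dynamic threshold. The synthetic data, learning rate, and thresholding rule are our assumptions; only the feature count, sample count, and epoch count echo the abstract.

```python
# A sketch under stated assumptions; not the PNRI implementation itself.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, d = 37_503, 6                  # sample and feature counts echo the abstract
X = rng.normal(size=(n, d))
w_true = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])   # synthetic ground truth
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
learning_rate = 0.5
for epoch in range(390):          # the abstract reports termination at epoch 390
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / n      # gradient of the mean log-likelihood
    w += learning_rate * grad

scores = sigmoid(X @ w)
threshold = np.quantile(scores, 1.0 - y.mean())   # a simple dynamic threshold
pred = (scores >= threshold).astype(float)
print("AUC:", round(roc_auc_score(y, scores), 3),
      "| fraction classified protein-coding:", round(pred.mean(), 3))
```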
Procedia PDF Downloads 68
204 A Review of the Future of Sustainable Urban Water Supply in South Africa
Authors: Jeremiah Mutamba
Abstract:
Water is a critical resource for sustainable economic growth and social development. It enables societies to thrive and influences every urban center's future. Thus, water must always be available in the right quantity and quality. However, in South Africa, a known physically water-scarce nation, the future of sustainable urban water supply may be in jeopardy. The country faces a water crisis driven by insufficient infrastructure investment and maintenance, recurrent droughts and climate variation, human-induced water quality deterioration, and a growing lack of technical capacity in water institutions, particularly local municipalities. Aside from the country's eight metropolitan municipalities, most municipalities struggle to provide reliable water to their citizens. These municipalities contend with a lack of capable engineers and aging infrastructure with concomitant high system water losses (of 30% and upwards), coupled with growing water demand from expanding industries and population growth. Also, a significant portion (44%) of national water treatment plants are in critically poor condition, requiring urgent rehabilitation. Municipalities also struggle to raise funding to initiate projects. All these factors militate against sustainable urban water supply in the country, and urgent mitigation measures are required. This paper reviews the extent of the current water supply challenges in South Africa's urban centers and searches for practical and cost-effective measures. The study followed a qualitative approach, combining desktop literature research, interviews with key sector stakeholders, and a workshop. A phenomenological data analysis technique was used to examine the interview data and secondary desktop data. Preliminary findings identified the building of technical and engineering capacity, reversal of high physical water losses, rehabilitation of poor-condition and dysfunctional water treatment works, diversification of the water resource mix, and water scarcity awareness programs as possible practical solutions. Other proposed solutions include the use of performance-based or value-based contracting to fund initiatives to reduce high system water losses; outcome-based arrangements for revenue-increasing water loss reduction projects were considered more practical in funding-stressed local municipalities. If proactively implemented in an integrated manner, these proposed solutions are likely to ensure sustainable urban water supply in South African urban centers in the future.
Keywords: sustainable, water scarcity, water supply, South Africa
Procedia PDF Downloads 123
203 Quantitative Analysis of the High-Value Bioactive Components of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)
Authors: Lara Marie Pangan Lo, Soo Im Chung, Yao Cheng Zhang, Xingyue Jin, Mi Young Kang
Abstract:
As the world's most consumed grain crop, rice (Oryza sativa L.) has seen increasing demand, which has prompted the development of new rice cultivars with higher bio-functional properties than the commonly used white rice. Ordinary rice varieties are already known to be a potential source of a number of nutritional as well as bioactive compounds. To further enhance rice's nutritive value, germination is carried out, which also makes the grain tastier and more palatable when cooked. Pigmented rice, on the other hand, has become increasingly popular in recent years for its greater antioxidant potential and other nutraceutical properties, which can help counter the increasing incidence of metabolic diseases. Combining these two parameters, this study sought to quantitatively determine the major bioactive compounds, before and after germination, of South Korea's newly developed purple-pigmented rice cultivar Superjami (SJ) and red-pigmented rice cultivar Superhongmi (SH), and to compare them against the non-pigmented Normal Brown (NB) rice variety. Powdered rice grain samples were subjected to a 72-hour germination period, and the quantities of GABA, γ-oryzanol, ferulic acid, and the tocopherol and tocotrienol homologues were compared against their pre-germinated condition using γ-amino butyric acid (GABA) analysis and High Performance Liquid Chromatography (HPLC). The results revealed the effectiveness of germination in enhancing the bioactive components in all rice samples. GABA contents in germinated rice cultivars increased by more than 10-fold, following the order SJ > SH > NB. In addition, the purple variety (SJ) had the highest total γ-oryzanol and ferulic acid contents, which increased by more than 2-fold after germination, followed by the red cultivar SH and then the control, NB. Germinated varieties also possessed higher total tocotrienol content than their pre-germinated state. As for total tocopherol content, SJ had a higher quantity, but the red-pigmented SH (0.16 mg/kg) showed a lower total tocopherol content than the control rice NB (0.86 mg/kg). However, all tocopherol and tocotrienol homologues were present only in small amounts (< 3.0 mg/kg) in all pre-germinated and germinated samples. In general, all of the analyzed pigmented rice cultivars were found to possess higher levels of bioactive compounds than the control NB rice variety, and, regardless of strain, germinated rice samples had higher levels of bioactive compounds than their pre-germinated counterparts. This demonstrates the effectiveness of germination in enhancing bioactive constituents. Overall, these results suggest the potential of pigmented rice varieties as a natural source of nutraceuticals in bio-functional food development.
Keywords: bioactive compounds, germinated rice, superhongmi, superjami
Procedia PDF Downloads 401
202 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; on the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with endogeneity and heterogeneity together by combining a spatial fixed-effect model with geographically weighted regression (GWR). A body of literature applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, because the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
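A minimal sketch of the GWR mechanics at the heart of the approach, assuming a Gaussian distance kernel: each location gets its own weighted least squares fit, so the flood coefficient is free to vary over space. The coordinates, bandwidth, and covariates are illustrative assumptions, not the Taipei dataset.

```python
# A sketch under stated assumptions; not the study's estimation code.
import numpy as np

rng = np.random.default_rng(7)
n = 500
coords = rng.uniform(0, 10, size=(n, 2))        # hypothetical locations
flood = rng.integers(0, 2, n).astype(float)     # flood-potential dummy
area = rng.uniform(20, 200, n)                  # floor area in m^2
beta_local = -0.5 - 0.05 * coords[:, 0]         # flood effect varies over space
price = 10 + 0.3 * area + beta_local * flood + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), area, flood])
bandwidth = 2.0                                 # assumed Gaussian kernel bandwidth

def local_fit(i):
    """Weighted least squares centred on observation i."""
    dist = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-(dist ** 2) / (2 * bandwidth ** 2))
    Xw = X * w[:, None]                         # forms X'WX and X'Wy without a dense W
    return np.linalg.solve(X.T @ Xw, Xw.T @ price)

betas = np.array([local_fit(i) for i in range(n)])
print("local flood coefficient: mean %.2f, range [%.2f, %.2f]"
      % (betas[:, 2].mean(), betas[:, 2].min(), betas[:, 2].max()))
```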
Procedia PDF Downloads 290