Search results for: combining flow
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 985

205 The Coexistence of Creativity and Information in Convergence Journalism: Pakistan's Evolving Media Landscape

Authors: Misha Mirza

Abstract:

In recent years, the definition of journalism in Pakistan has changed, and so has the mindset of the audience and its approach to a news story. For viewers, news has become more engaging than a drama or a film. This research provides an insight into Pakistan's evolving media landscape. It not only brings forth the outcomes of cross-platform cooperation between print and broadcast journalism but also examines the interactive data visualization techniques now in use. Storytelling in Pakistani journalism has evolved from depicting merely the truth to tweaking, fabricating, and producing docu-dramas. The study looks into how news is translated into visuals. Pakistan possesses a diverse cultural heritage, and by engaging audiences through media, this history translates into today's storytelling platforms. The paper explains how journalists are thriving in a converging media environment and analyzes the narratives of current television talk shows. 'Jack of all, master of none' is being challenged by journalists today: one has to be a quality information gatherer and an effective storyteller at the same time. Are journalists really looking more into what sells than into what matters? Express Tribune is a very popular news platform among the youth; not only is its newspaper more attractive than its competitors' but its narrative style and interactive web stories also lead to well-rounded news. Interviews are the basic methodology used to gain insight into how data visualization is accomplished. The quest to distinguish the visualization of information from the visualization of knowledge led the author to the work of David McCandless in his book 'Knowledge Is Beautiful'. Journalism in Pakistan has evolved from conveying information to combining knowledge, infotainment, and comedy. What society criticizes most often becomes the breaking news. Circulation in today's world is carried out through cultural and social networks. In recent times, many people have gained overnight popularity by releasing songs with substandard lyrics or senseless videos, perhaps because creativity has taken over information. This paper therefore discusses the various platforms of convergence journalism from Pakistan's perspective. The study concludes by showing how Pakistani truck art pop culture coexists with all the platforms of convergent journalism. The changing media landscape thus challenges the basic rules of journalism. The slapstick humor and 'jhatka' in Pakistani talk shows evolved from truck art poetry. Mobile journalism has overtaken the other mediums of journalism; the Pakistani culture, however, coexists with the converging landscape.

Keywords: convergence journalism in Pakistan, data visualization, interactive narrative in Pakistani news, mobile journalism, Pakistan's truck art culture

Procedia PDF Downloads 259
204 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer

Authors: Ke Zhang, Yongli Zhu

Abstract:

Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system with a complex transmission-line topology. Moreover, it is located in complex mountain and grassland terrain, which increases the likelihood of transmission line faults, makes fault locations difficult to find, and results in serious wind curtailment. To solve these problems, a fault location method for multi-terminal transmission lines is proposed, based on wind farm characteristics and an improved single-ended traveling wave positioning method. By studying the zero-sequence current characteristics of the grounding transformers (GT) in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point; therefore, the interval containing the fault point is identified by determining the path of the zero-sequence current. Once the fault interval is determined, the location of the short-circuit fault point is calculated by the traveling wave method. This article, however, uses an improved traveling wave method that achieves higher positioning accuracy by combining the single-ended traveling wave method with double-ended electrical data. In addition, a method of calculating the traveling wave velocity is derived from these improvements (in theory, it yields the actual wave velocity), which further improves the positioning accuracy. Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. The method overcomes the shortcomings of traditional approaches to fault location on wind farm transmission lines and is more accurate than the traditional fixed-wave-velocity approach: it can compute the wave velocity in real time according to on-site conditions, resolving the problem that a fixed wave velocity cannot be updated with the environment in real time. The method is verified in PSCAD/EMTDC.
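
To make the combination of single-ended and double-ended data concrete, below is a minimal Python sketch of one standard way to solve jointly for the wave velocity and the fault distance from three wavefront arrival times. The variable names and this specific derivation are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: equating the single-ended estimate d = v*dt_single/2 with
# the double-ended estimate d = (L + v*dt_double)/2 lets the wave velocity
# v be solved rather than assumed constant.

def locate_fault(L, tM1, tM2, tN1):
    """
    L   : line length between terminals M and N (km)
    tM1 : arrival time of the initial wavefront at M (s)
    tM2 : arrival time of the fault-reflected wave at M (s)
    tN1 : arrival time of the initial wavefront at N (s)
    Returns (distance_from_M_km, wave_velocity_km_per_s).
    """
    dt_single = tM2 - tM1            # single-ended time difference
    dt_double = tM1 - tN1            # double-ended time difference
    v = L / (dt_single - dt_double)  # velocity drops out of the two relations
    d = v * dt_single / 2.0
    return d, v

# Example: 100 km line, fault 30 km from M, true v = 2.9e5 km/s
v_true, d_true, L = 2.9e5, 30.0, 100.0
tM1, tM2, tN1 = d_true / v_true, 3 * d_true / v_true, (L - d_true) / v_true
print(locate_fault(L, tM1, tM2, tN1))  # approximately (30.0, 290000.0)
```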

Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm

Procedia PDF Downloads 229
203 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shields during CT scans and to investigate the resulting image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned together with a water-based marker by a multislice CT scanner (SOMATOM Definition Flash) under either a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose response curve had previously been fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes showing the regions of interest on the zygomatic, orbital, and nasal bones of the head phantom, as well as the water-based marker, were used to calculate signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR). The barium sulfate and bismuth-antimony shields decreased the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the SNR and CNR of the dual-source CT (DSCT) images were decreased separately by the barium sulfate and bismuth-antimony shields, resulting in an overall reduction of the CNR. In contrast, integrating topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and the eyeballs while preferentially decreasing the SNR when the barium sulfate shield was used. These results indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues close to the shield; the application of both shields for eye protection is therefore not recommended when seeking intraorbital lesions.
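
For reference, the ROI-based SNR and CNR figures of merit discussed above are conventionally computed as in the sketch below. These are the common generic definitions, not necessarily the study's exact protocol (which may differ, e.g., in the choice of background region).

```python
import numpy as np

# Hedged sketch: common ROI-based image-quality metrics.

def snr(roi):
    """Signal-to-noise ratio: mean ROI signal over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio: absolute signal difference over background noise."""
    roi_a, roi_b = np.asarray(roi_a, float), np.asarray(roi_b, float)
    noise = np.asarray(background, float).std(ddof=1)
    return abs(roi_a.mean() - roi_b.mean()) / noise

# Toy usage with synthetic pixel values (HU):
rng = np.random.default_rng(0)
bone, eye, bg = rng.normal(400, 20, 500), rng.normal(40, 20, 500), rng.normal(0, 20, 500)
print(round(snr(bone), 1), round(cnr(bone, eye, bg), 1))
```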

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 246
202 Integrating Computer-Aided Manufacturing and Computer-Aided Design for Streamlined Carpentry Production in Ghana

Authors: Benson Tette, Thomas Mensah

Abstract:

As a developing country, Ghana has high potential to harness the economic value of every industry. Two industries that produce below capacity are handicrafts (for instance, carpentry) and information technology (i.e., computer science). To boost production and maintain competitiveness, the carpentry sector in Ghana needs manufacturing procedures that are both more effective and more affordable. This issue can be addressed using computer-aided manufacturing (CAM) technology, which automates the fabrication process and decreases the time and labor needed to make wood goods. Yet the integration of CAM in carpentry-related production is rarely explored. To streamline the manufacturing process, this research investigates the equipment and technology currently used in the Ghanaian carpentry sector for automated fabrication. It examines the CAM technologies accessible to Ghanaian carpenters yet still unexplored, such as Computer Numerical Control (CNC) routers, laser cutters, and plasma cutters, and investigates their potential to enhance the production process. To achieve this objective, 150 carpenters, 15 software engineers, and 10 policymakers were interviewed using structured questionnaires. The responses of the 175 respondents were processed to eliminate outliers, omissions were corrected using multiple imputation techniques, and the processed responses were analyzed through thematic analysis. The findings showed that adapting and integrating CAD software with CAM technologies would speed up the design-to-manufacturing process for carpenters. Achieving such results entails first examining the capabilities of current CAD software and then determining what new functions and resources are required to improve the software's suitability for carpentry tasks. Responses from both carpenters and computer scientists showed that streamlining the design-to-manufacturing process by modifying and combining CAD software with CAM technology is highly practical and achievable. Making the carpentry-software integration program more useful for carpentry projects would necessitate investigating the capabilities of current CAD software and identifying the additional features and tools required in the Ghanaian ecosystem. In conclusion, the Ghanaian carpentry sector has a chance to increase productivity and competitiveness through the integration of CAM technology with CAD software. Carpentry companies may lower labor costs and boost production capacity by automating the fabrication process, giving them a competitive advantage. This study offers implementation-ready, representative recommendations as well as important insights into the equipment and technologies available for automated fabrication in the Ghanaian carpentry sector.

Keywords: carpentry, computer-aided manufacturing (CAM), Ghana, information technology (IT)

Procedia PDF Downloads 67
201 Bacteriocin-Antibiotic Synergetic Consortia: Augmenting Antimicrobial Activity and Expanding the Inhibition Spectrum of Vancomycin Resistant and Methicillin Resistant Staphylococcus aureus

Authors: Asma Bashir, Neha Farid, Kashif Ali, Kiran Fatima

Abstract:

Background: Bacteriocins are a subclass of antimicrobial peptides that are becoming extremely important in treatment. They can be utilised in place of, or in addition to, traditional antibiotics, and their targeted spectrum of activity makes it possible to treat a variety of infections, including those caused by Vancomycin-Resistant Staphylococcus aureus (VRSA) and Methicillin-Resistant Staphylococcus aureus (MRSA). Method: This study aimed to examine the efficiency of antibiotics and bacteriocins against VRSA and MRSA. The effects of the bacteriocins enterocin KAE01, enterocin KAE03, enterocin KAE05, and enterocin KAE06, isolated from Enterococcus faecium strains, were examined alone and in combination with the antibiotics vancomycin and methicillin. The selection technique utilized the minimum inhibitory concentrations (MICs) against the Gram-positive MRSA indicator strain ATCC 6538 and the VRSA indicator strain KSA 02. Results: We report the isolation and identification of enterocins KAE01, KAE03, KAE05, and KAE06 from food isolates of Enterococcus faecium (KAE01, KAE03, KAE05, and KAE06). After isolation, the protein was partially purified by ammonium sulphate precipitation and then purified by fast protein liquid chromatography (FPLC). The combinations of enterocin KAE01 + citric acid + lactic acid, microcin J25 + reuterin + citric acid, and microcin J25 + reuterin + lactic acid showed synergistic effects (FIC index = 0.5) against VRSA. In addition, a moderately synergistic interaction (FIC index = 0.75) was seen between pediocin PA-1 + citric acid + lactic acid and reuterin + citric acid + lactic acid against L. ivanovii HPB28. In the presence of acids, nisin Z exhibited a modestly synergistic effect (FIC index = 0.625-0.75); however, it exhibited additive effects (FIC index = 1) when combined with reuterin or pediocin PA-1 against L. ivanovii HPB28. The efficacy of the synergistic consortia against Gram-positive bacteria was examined. Conclusion: According to our research, combining antimicrobials with different modes of action boosted efficacy and expanded the spectrum of inhibition, particularly against multidrug-resistant pathogens.
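
The FIC index quoted in these results is the standard checkerboard-assay metric; a minimal sketch of the usual calculation follows. The interpretation cut-offs vary between studies and are aligned here with the usage in this abstract.

```python
# Hedged sketch: fractional inhibitory concentration (FIC) index for a
# two-agent combination, as used in checkerboard assays.

def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    if fici <= 0.5:
        return "synergistic"
    if fici <= 1.0:
        return "partially synergistic / additive"
    if fici <= 4.0:
        return "indifferent"
    return "antagonistic"

# Example: each agent's MIC drops to a quarter in combination -> FICI = 0.5
print(interpret(fic_index(0.25, 1.0, 0.25, 1.0)))  # synergistic
```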

Keywords: Enterococcus faecium, bacteriocin, antimicrobial resistance, antagonistic activity, vancomycin-resistant Staphylococcus aureus, methicillin-resistant Staphylococcus aureus

Procedia PDF Downloads 128
200 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values; fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), the applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression in PTC Mathcad Prime 4.0 is used to analyze and determine the exact relationships between the dependent parameter, SPP, and the three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable quadratic polynomial regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (four coefficients) is more complex and contains an additional term compared with linear regression model 1 (three coefficients), the former did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The quadratic polynomial regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, and the quadratic polynomial model gave the best and most accurate results. These models are useful not only for monitoring and predicting SPP values with accuracy but also for early control and checking of the integrity of the well hydraulics, and for taking corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
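
As an illustration of the regression step, the following sketch fits a quadratic multivariable model in two dimensionless groups by least squares on synthetic data. The coefficients and data are invented for the example; the paper fits field data from nine wells, also includes the dogleg group, and uses PTC Mathcad Prime 4.0 rather than Python.

```python
import numpy as np

# Hedged sketch: least-squares fit of
# SPP = b0 + b1*RPMd + b2*TRQd + b3*RPMd^2 + b4*TRQd^2 + b5*RPMd*TRQd
# on synthetic data, mirroring a six-coefficient quadratic model.

rng = np.random.default_rng(0)
rpm_d = rng.uniform(0.5, 2.0, 200)
trq_d = rng.uniform(0.1, 1.0, 200)
spp = (50 + 12 * rpm_d + 30 * trq_d + 4 * rpm_d**2 + 8 * trq_d**2
       + 5 * rpm_d * trq_d + rng.normal(0, 0.5, 200))   # synthetic "field" data

X = np.column_stack([np.ones_like(rpm_d), rpm_d, trq_d,
                     rpm_d**2, trq_d**2, rpm_d * trq_d])
coef, *_ = np.linalg.lstsq(X, spp, rcond=None)
print(coef.round(2))  # recovers approximately [50, 12, 30, 4, 8, 5]
```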

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 62
199 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Aneurysmal subarachnoid hemorrhage (aSAH) can lead to significant morbidity and mortality and has traditionally been fraught with poor methods of outcome prediction. In this study, graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model. The study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median (IQR) of 7 (8.5) days after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
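
A minimal sketch of the feature-extraction and classification pipeline described above, using NetworkX and scikit-learn. The connectomes and outcome labels below are random stand-ins; the study derives one weighted 176-node connectome per patient from CSA tractography.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch: graph-theoretic features named in the abstract, computed
# per connectome and fed to a Random Forest.

rng = np.random.default_rng(1)

def graph_features(G):
    strength = dict(G.degree(weight="weight"))          # Node Strength (NS)
    betweenness = nx.betweenness_centrality(G, weight="weight")  # BC
    degree = dict(G.degree())                           # Network Degree (ND)
    connected = nx.is_connected(G)                      # Connectedness (C)
    return [np.mean(list(strength.values())),
            np.mean(list(betweenness.values())),
            np.mean(list(degree.values())),
            float(connected)]

X, y = [], []
for _ in range(56):                                     # 56 patients in the cohort
    G = nx.gnp_random_graph(176, 0.1, seed=int(rng.integers(1e6)))
    nx.set_edge_attributes(G, {e: rng.random() for e in G.edges}, "weight")
    X.append(graph_features(G))
    y.append(int(rng.random() < 0.5))                   # placeholder dichotomized mRS

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.feature_importances_.round(3))
```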

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 164
198 GenAI Agents in Product Management: A Case Study from the Manufacturing Sector

Authors: Aron Witkowski, Andrzej Wodecki

Abstract:

Purpose: This study explores the feasibility and effectiveness of utilizing Generative Artificial Intelligence (GenAI) agents as product managers within the manufacturing sector. It seeks to evaluate whether current GenAI capabilities can fulfill the complex requirements of product management and deliver outcomes comparable to those of human counterparts. Study Design/Methodology/Approach: This research involved the creation of a support application for product managers, built on high-quality sources on product management and generative AI technologies. The application was designed to assist in various aspects of product management tasks. To evaluate its effectiveness, a study was conducted involving 10 experienced product managers from the manufacturing sector. These professionals were tasked with using the application and providing feedback on the tool's responses to common questions and challenges they encounter in their daily work. The study employed a mixed-methods approach, combining quantitative assessments of the tool's performance with qualitative interviews to gather detailed insights into the user experience and the perceived value of the application. Findings: The findings reveal that GenAI-based product management agents exhibit significant potential in handling routine tasks, data analysis, and predictive modeling. However, there are notable limitations in areas requiring nuanced decision-making, creativity, and complex stakeholder interactions. The case study demonstrates that while GenAI can augment human capabilities, it is not yet fully equipped to independently manage the holistic responsibilities of a product manager in the manufacturing sector. Originality/Value: This research provides an analysis of GenAI's role in product management within the manufacturing industry, contributing to the limited body of literature on the application of GenAI agents in this domain. It offers practical insights into the current capabilities and limitations of GenAI, helping organizations make informed decisions about integrating AI into their product management strategies. Implications for Academic and Practical Fields: For academia, the study suggests new avenues for research in AI-human collaboration and in the development of advanced AI systems capable of higher-level managerial functions. Practically, it provides industry professionals with a nuanced understanding of how GenAI can be leveraged to enhance product management, guiding investments in AI technologies and training programs to bridge the identified gaps.

Keywords: generative artificial intelligence, GenAI, NPD, new product development, product management, manufacturing

Procedia PDF Downloads 17
197 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding deploys a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1, a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Fortunately, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the security concern above, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
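
A toy sketch of the placement logic described above: given a declarative architecture, embed services that share a heavy communication channel and aggregate leftover services with the same scalability behavior. The input format is invented for illustration and is not i2kit's actual syntax.

```python
# Hedged sketch: a simplistic placement pass combining embedding and
# aggregation. Real deployment tools must also check runtime-dependency
# compatibility, capacity, and failure-tolerance constraints.

arch = {
    "services": {
        "A": {"scale_group": "web", "instances": 2},
        "B": {"scale_group": "web", "instances": 2},
        "C": {"scale_group": "batch", "instances": 1},
    },
    "channels": [("A", "B")],   # A talks to B heavily -> embed pairwise
}

def place(arch):
    machines = []
    embedded = {s for ch in arch["channels"] for s in ch}
    # Embedding: co-locate one instance of each channel member per machine,
    # so each pair communicates over localhost without a load balancer.
    for a, b in arch["channels"]:
        n = min(arch["services"][a]["instances"],
                arch["services"][b]["instances"])
        machines += [{f"{a}-{i}", f"{b}-{i}"} for i in range(n)]
    # Aggregation: remaining services with the same scalability behavior
    # share a machine behind a proxy.
    by_group = {}
    for s in arch["services"]:
        if s not in embedded:
            by_group.setdefault(arch["services"][s]["scale_group"], set()).add(s)
    machines += list(by_group.values())
    return machines

print(place(arch))  # e.g. [{'A-0', 'B-0'}, {'A-1', 'B-1'}, {'C'}]
```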

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 178
196 Separate Collection System of Recyclables and Biowaste Treatment and Utilization in Metropolitan Area Finland

Authors: Petri Kouvo, Aino Kainulainen, Kimmo Koivunen

Abstract:

The separate collection system for recyclable waste in the Helsinki region was ranked second best among European capitals. The collection system includes paper, cardboard, glass, metals, and biowaste; residual waste is collected and used in energy production. The collection system, excluding paper, is managed by the Helsinki Region Environmental Services HSY, a public organization owned by four municipalities (Helsinki, Espoo, Kauniainen, and Vantaa). Paper collection is handled by the producer responsibility scheme. The efficiency of the collection system in the Helsinki region relies on good coverage of door-to-door collection. All properties with 10 or more dwelling units are required to source-separate biowaste and cardboard; this covers about 75% of the population of the area. The obligation is extended to glass and metal in properties with 20 or more dwelling units. Other success factors include public awareness campaigns and a fee system that encourages recycling. As a result of waste management regulations for source separation of recyclables and biowaste, a household waste recycling rate of nearly 50 percent has been reached. For households and small and medium-sized enterprises, a fleet of five sorting stations is available, and more than 50 percent of the waste received at sorting stations is utilized as material. The separate collection of plastic packaging in Finland will begin in 2016 within the producer responsibility scheme. HSY started supplementing the national bring-point system with door-to-door collection, and pilot operations will begin in spring 2016. The results of the plastic packaging pilot project have been encouraging: by the end of 2016, over 3,500 apartment buildings had joined the pilot, and more than 1,800 tons of plastic packaging had been collected separately. In the summer of 2015, a novel partial flow digestion process combining digestion and tunnel composting was adopted for source-separated household and commercial biowaste management. The product gas from the digestion process is converted into heat and electricity in a piston engine and an organic Rankine cycle process with very high overall efficiency. This paper describes the efficient collection system and discusses key success factors, main obstacles, and lessons learned, as well as the partial flow process for biowaste management.

Keywords: biowaste, HSY, MSW, plastic packages, recycling, separate collection

Procedia PDF Downloads 190
195 Enhancing Athlete Training Using Real-Time Pose Estimation with Neural Networks

Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba

Abstract:

Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
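
A minimal PyTorch sketch of the kind of architecture described above: dilated convolutions inside an encoder-decoder that emits one heatmap per joint. Layer sizes and the 17-keypoint output (the COCO convention) are illustrative assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch: dilated-convolution encoder-decoder for pose heatmaps.

class DilatedPoseNet(nn.Module):
    def __init__(self, keypoints=17):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            # Dilated convs widen the receptive field without further
            # downsampling, capturing long-range dependencies in the frame.
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, keypoints, 4, stride=2, padding=1),
        )  # one heatmap per joint at input resolution

    def forward(self, x):
        return self.decoder(self.encoder(x))

heatmaps = DilatedPoseNet()(torch.randn(1, 3, 256, 256))
print(heatmaps.shape)  # torch.Size([1, 17, 256, 256])
```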

Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN

Procedia PDF Downloads 8
194 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm for Max-Cut graphs. While classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. Alongside other famous algorithms like Grover's or Shor's, it highlights the potential that quantum computing holds and points to a real quantum advantage that, if the hardware continues to improve, could open a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space; because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear approximation based method, and in some instances it can even produce a better Max-Cut. While the final objective of the work is to create an algorithm that consistently beats the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
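
A minimal sketch of the hybrid idea: a gradient-free (mu + lambda) evolution strategy over the 2p QAOA angles. The cost function below is a cheap stand-in for the quantum circuit evaluation (e.g., via Qiskit), which is the expensive, embarrassingly parallelizable step the text refers to; population size, depth, and mutation scale are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: evolution strategy replacing the gradient-based optimizer
# in the QAOA outer loop.

rng = np.random.default_rng(0)
p = 2                                     # QAOA depth -> 2p angles (gammas, betas)

def expected_cut(angles):
    # Placeholder cost landscape; in practice this runs the QAOA circuit
    # and measures the Max-Cut expectation value to be minimized.
    return -np.sum(np.sin(angles) ** 2)

def evolve(mu=8, lam=32, generations=100, sigma=0.3):
    pop = rng.uniform(0, np.pi, size=(mu, 2 * p))
    for _ in range(generations):
        parents = pop[rng.integers(mu, size=lam)]
        children = parents + rng.normal(0, sigma, parents.shape)  # mutation
        pool = np.vstack([pop, children])
        # Each evaluation is independent, so this loop parallelizes trivially.
        scores = np.array([expected_cut(a) for a in pool])
        pop = pool[np.argsort(scores)[:mu]]                       # (mu + lambda) selection
    return pop[0]

best = evolve()
print(best.round(3), expected_cut(best))  # approaches the minimum of -2p
```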

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 36
193 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast

Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi

Abstract:

The presentation will report the activities carried out during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. The campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with a core team drawn from environmental engineering and free/open source GIS; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution on coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness, and (2) to collect night sky brightness data and test a protocol for applications to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not common in Italy, and undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. By combining environmental monitoring and communication actions, the campaign will contribute to promoting good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increasing citizen awareness through stargazing, night photography, and active participation in field measurements.

Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education

Procedia PDF Downloads 153
192 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously

Authors: S. Mehrab Amiri, Nasser Talebbeydokhti

Abstract:

Modeling sediment transport processes numerically often poses severe challenges. A number of techniques have been suggested for solving the flow and sediment equations in decoupled, semi-coupled, or fully coupled forms. Furthermore, in order to capture flow discontinuities, techniques such as artificial viscosity and shock fitting have been proposed for solving these equations, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. A First-Order Centered scheme is applied to produce the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is employed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of the Feed Forward Back Propagation (FFBP) type estimates the sediment flow discharge in the model instead of the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions, and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating a dam break on a dry movable bed and bed level variations in an alluvial junction. The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, for estimating sediment flow discharge leads to more accurate results.
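
As a building-block illustration, the First-Order Centered flux (commonly abbreviated FORCE) for the 1D shallow water equations can be written as the average of the Lax-Friedrichs and Lax-Wendroff fluxes; a minimal sketch follows under that assumption. The paper's actual scheme is the 2D coupled shallow water-Exner system with MUSCL reconstruction, the Surface Gradient Method, and RK4 layered on top of this kind of flux.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

# Hedged sketch: FORCE flux for U = (h, hu), F(U) = (hu, hu^2/h + g*h^2/2).

def physical_flux(U):
    h, hu = U
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

def force_flux(UL, UR, dx, dt):
    FL, FR = physical_flux(UL), physical_flux(UR)
    # Lax-Friedrichs flux
    lf = 0.5 * (FL + FR) - 0.5 * (dx / dt) * (UR - UL)
    # Lax-Wendroff flux, evaluated at the intermediate state
    U_lw = 0.5 * (UL + UR) - 0.5 * (dt / dx) * (FR - FL)
    lw = physical_flux(U_lw)
    return 0.5 * (lf + lw)

# Example interface: still water, depths 2 m (left) and 1 m (right)
print(force_flux(np.array([2.0, 0.0]), np.array([1.0, 0.0]), dx=0.1, dt=0.01))
```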

Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations

Procedia PDF Downloads 168
191 Integrating Qualitative and Behavioural Insights to Increase the Take-Up of an Education Savings Program for Low Income Canadians

Authors: Mathieu Audet, Monica Soliman, Emilie Eve Gravel, Rebecca Friesdorf

Abstract:

Access to higher education is critical for reducing social inequalities. The Canada Learning Bond (CLB) is a government savings incentive aimed at increasing access to higher education for children of low income families by providing money toward a Registered Education Savings Plan. To better understand the educational and financial decision-making of low income families, Employment and Social Development Canada conducted qualitative fieldwork with eligible parents and children, teachers, and community organizations promoting the Bond. Insights from this fieldwork were then used to develop letters that better target the needs and experiences of eligible families. In the present study, we conducted a randomized controlled trial with children ages 12 to 13, the oldest cohort of eligible children, to test the effectiveness of the new letters. Parents or caregivers of 150,088 eligible children were assigned to one of five letter conditions promoting the Bond or to a control condition that did not receive a letter. The letter conditions were: (a) the standard letter from past outreach; (b) a letter presenting the exact amount the child was eligible to receive, enhancing the salience of benefits; (c) a letter with a social norm; (d) a letter with an image emphasizing the feasibility of higher education by presenting the diversity of options (i.e., college, trade schools, apprenticeships), since many interviewed participants viewed university as unfeasible; and (e) a letter minimizing references to 'saving' (i.e., not framing the Bond explicitly as a savings incentive), a concept that did not resonate with low income families who felt they could not afford to save. The exact amount was also presented in letters (c) through (e). The letter minimizing references to 'saving' and presenting the exact amount had the highest net take-up rate at 6.6%, compared to 3.5% for the standard letter group. Furthermore, this trial's BI-informed letters showed the largest impact on take-up so far, with a net take-up of 5.7% compared to 3.0% and 3.9% in the first two trials. This research highlights the value of mixed-methods approaches combining qualitative and behavioural insights methods to develop context-sensitive interventions for social programs. By gaining a deeper understanding of the needs and experiences of program users through qualitative fieldwork, and then integrating these insights into behaviourally informed communications, we were able to increase take-up of an education savings program, which may ultimately improve access to higher education for children of low income families.

Keywords: access to higher education, behavioral insights, government, innovation, mixed-methods, social programs

Procedia PDF Downloads 101
190 Changing MBA Identities: Using Critical Reflection inside and out in Finding a New Narrative

Authors: Keith Schofield, Leigh Morland

Abstract:

Storytelling is an established means of leadership and management development and is also considered a form of leadership of self and others in its own right. This study focuses on the utility of storytelling in the development of management narratives in an MBA programme; sources include programme participants as well as international recruiters, whose voices are often heard only in terms of economic contribution and globalisation. For many MBA candidates, the return to study requires the development of a new identity that complements their professional identity; each candidate has their own journey and expectations, and the use of story can enable candidates to explore their aspirations and assumptions and give voice to previously unspoken ideas. For international recruitment, the story of market development and change must be captured if MBAs are to remain fit for purpose. If used effectively, story acts as a form of critical reflection that can inform the learning journeys of individuals and their emerging identities, as well as the ongoing design and development of programmes. The landscape of management education is shifting; the MBA is beginning to attract a different kind of candidate: some are younger than before, others seek validation of their existing work practices, and still others are entrepreneurial and wish to capitalise on an institutional experience to further their careers. This shift in context creates uncertainty and ambiguity for programme managers and recruiters, thus requiring institutions to create a new MBA narrative. This study utilises Lego SeriousPlay as the means of engaging programme participants and international agents in telling the story of their MBA. We asked MBA participants to tell the story of their leadership and management aspirations and to compare these to stories of their development journeys, allowing for critical reflection on their respective development gaps. We asked international recruiters, who act as university agents and promote courses in the students' countries of origin, to explore their mental models of MBA candidates and their learning agenda; the purpose of this process was to explore the agents' perception of the MBA programme and to articulate the student journey from a recruitment perspective. The paper's unique contribution is in combining these stories in order to explore the assumptions that determine programme design. Data drawn from reflective statements, together with images of Lego 'builds', created the opportunity for reflection on the mental models of these groups. Findings will inform the design of the MBA journey and experience; we review the extent to which the changing identities of learners are congruent with programme design. Data from international recruiters also determine the extent to which marketing and recruitment strategies identify with would-be candidates.

Keywords: critical reflection, programme management, recruitment, storytelling

Procedia PDF Downloads 201
189 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have increasingly been applied to problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents).
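
A minimal mpi4py sketch of the overlap pattern described above: post non-blocking exchanges for the agents that interact across partitions, compute on the local partition while messages are in flight, then complete the requests. The economic update is reduced to a placeholder, and the buffer sizes and ring topology are illustrative assumptions, not the paper's partitioning scheme.

```python
# Run with, e.g.: mpirun -n 4 python abem_overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_agents = np.random.rand(1_000_000)       # this rank's agent partition
send_buf = local_agents[:100].copy()           # agents with cross-rank interactions
recv_buf = np.empty_like(send_buf)

left, right = (rank - 1) % size, (rank + 1) % size
reqs = [comm.Isend(send_buf, dest=right),      # non-blocking exchange on a ring
        comm.Irecv(recv_buf, source=left)]

local_agents *= 0.99                           # bulk local computation overlaps comms

MPI.Request.Waitall(reqs)                      # complete the exchange
local_agents[:100] += 0.01 * recv_buf          # apply cross-rank interactions
print(f"rank {rank}: mean {local_agents.mean():.4f}")
```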

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 106
188 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area

Authors: Sara Saboonian, Pierre Filion

Abstract:

There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first, the recentralization approach, proposes intensification, multi-functionality, and more reliance on public transit and walking; it thus offers an alternative to the prevailing low density, spatial specialization, and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than on highly engineered solutions to meet the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice despite rising awareness of the benefits of green infrastructure. Methods include a review of the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, a survey of residents' preferences regarding alternative suburban development models, and interviews with officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focused on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres. Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been given to the wide range of advantages of green infrastructure, which explains the limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings, and reliance on renewable energy. The paper will advance a planning framework that fuses green infrastructure with recentralization, ensuring higher density and reduced reliance on the car along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.

Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization

Procedia PDF Downloads 114
187 Estimating Age in Deceased Persons from the North Indian Population Using Ossification of the Sternoclavicular Joint

Authors: Balaji Devanathan, Gokul G, Raveena Divya, Abhishek Yadav, Sudhir K.Gupta

Abstract:

Background: Age estimation is a common requirement in administrative settings, in medico-legal cases, and among athletes competing in different sports. In hospital practice, medico-legal age questions arise in cases such as criminal abortion, consent to surgery or general physical examination, infanticide, impotence, and sterility. Advances in medical imaging have benefited forensic anthropology in various ways, most notably in the determination of bone age. Multi-slice computed tomography (MSCT) is an efficient method for studying the epiphyseal union and other developmental changes in the body's bones and joints. However, no substantial database is available for the Indian population, so the authors performed this study to obtain an India-based database. Methodology: The appearance and fusion of the ossification centres of the sternoclavicular joint were evaluated, and grades were assigned accordingly. Using MSCT scans, we examined the relationship between the age of the deceased and changes in the sternoclavicular joint during appearance and union in 500 cases (327 males and 173 females) aged 0 to 25 years. Results: In the male and female groups, the ossification centre for the medial end of the clavicle first appeared at 18.5 and 17.1 years, respectively. The ages of partial union were 20.4 and 20.2 years, and the earliest ages of complete fusion were 23 years for males and 22 years for females. The age range for fusion of the sternebrae into one is 11-24 years for females and 17-24 years for males. Fusion of the third and fourth sternebrae was completed by 11 years, and fusion of the first and second and of the second and third sternebrae occurs by 17 years. Furthermore, correlation and reliability analyses were carried out and yielded significant results. Conclusion: With numerous exceptions, the estimated values are consistent with many previously developed age charts; the variations may be caused by ethnic or regional heterogeneity in the ossification pattern of the study population. The pattern of bone maturation did not differ significantly between the sexes. The study covered ages 0 to 25 years, and, for obvious reasons, the majority of cases fell in the last five years, between 20 and 25 years of age. This resulted in a comparatively smaller study population for the 12-18 age group, where age estimation is crucial because of current legal requirements; dedicated PMCT research in this age range will be required to produce population-standard charts for age estimation. The medial end of the clavicle is one of several ossification foci under close investigation because it is difficult to assess with a conventional X-ray examination. Combining the two sites has been shown to give valid results when establishing that an individual is older than eighteen.

Keywords: age estimation, sternoclavicular joint, medial clavicle, computed tomography

Procedia PDF Downloads 21
186 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have been targeted towards protein-coding regions alone, which creates challenges in gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions; alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample in the 37,503 data points in order to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was measured in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over the first, second, and third iterations; the cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC AUC of 0.97, indicating improved predictive ability, and identified protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
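
A minimal sketch of the core mechanics as described: logistic regression over six features trained by gradient descent on the negative log-likelihood, followed by a dynamic threshold chosen from the ROC curve (here via Youden's J) rather than a fixed 0.5 cut-off. The data are synthetic; PNRI's actual feature definitions come from the transcriptome sequences, and the true weight vector below merely echoes the reported parameter values for illustration.

```python
import numpy as np

# Hedged sketch: six-feature logistic regression with a dynamic threshold.

rng = np.random.default_rng(0)
n, d = 5000, 6
X = rng.normal(size=(n, d))
true_w = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])    # cf. reported vector
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(n)).astype(float)

w = np.zeros(d)
for _ in range(390):                      # iterations terminated at epoch 390
    p = 1 / (1 + np.exp(-(X @ w)))        # sigmoid activation
    w -= 0.1 * X.T @ (p - y) / n          # gradient of the negative log-likelihood

scores = 1 / (1 + np.exp(-(X @ w)))
thresholds = np.unique(scores)
tpr = np.array([(scores[y == 1] >= t).mean() for t in thresholds])
fpr = np.array([(scores[y == 0] >= t).mean() for t in thresholds])
best_t = thresholds[np.argmax(tpr - fpr)]  # dynamic threshold via Youden's J
pred = (scores >= best_t).astype(float)
print("threshold:", round(best_t, 3), "accuracy:", (pred == y).mean())
```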

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 39
185 A Review of the Future of Sustainable Urban Water Supply in South Africa

Authors: Jeremiah Mutamba

Abstract:

Water is a critical resource for sustainable economic growth and social development. It enables societies to thrive and influences every urban center's future; thus, water must always be available in the right quantity and quality. However, in South Africa - a known physically water-scarce nation - the future of sustainable urban water supply may be in jeopardy. The country faces a water crisis driven by insufficient infrastructure investment and maintenance, recurrent droughts and climate variation, human-induced deterioration of water quality, and a growing lack of technical capacity in water institutions, particularly local municipalities. Aside from the country's eight metropolitan municipalities, most municipalities struggle to provide reliable water to their citizens. These municipalities contend with a shortage of capable engineers and with aging infrastructure and concomitantly high system water losses (of 30% and upwards), coupled with growing water demand from expanding industries and population growth. A significant portion (44%) of national water treatment plants is also in critically poor condition and requires urgent rehabilitation, and municipalities struggle to raise funding to initiate projects. All these factors militate against sustainable urban water supply in the country, and urgent mitigation measures are required. This paper reviews the extent of the current water supply challenges in South Africa's urban centers and searches for practical, cost-effective measures. The study followed a qualitative approach, combining desktop literature research, interviews with key sector stakeholders, and a workshop. A phenomenological data analysis technique was used to examine the interview data and secondary desktop data. Preliminary findings identified the building of technical and engineering capacity, reversal of the high physical water losses, rehabilitation of poor-condition and dysfunctional water treatment works, diversification of the water resource mix, and water scarcity awareness programs as possible practical solutions. Other proposed solutions include the use of performance-based or value-based contracting to fund initiatives that reduce high system water losses; outcome-based arrangements for revenue-increasing water loss reduction projects were considered more practical in funding-stressed local municipalities. If proactively implemented in an integrated manner, these proposed solutions are likely to ensure sustainable urban water supply in South African urban centers in the future.

Keywords: sustainable, water scarcity, water supply, South Africa

Procedia PDF Downloads 104
184 Quantitative Analysis of the High-Value Bioactive Components of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)

Authors: Lara Marie Pangan Lo, Soo Im Chung, Yao Cheng Zhang, Xingyue Jin, Mi Young Kang

Abstract:

Being the world's most consumed grain crop, rice (Oryza sativa L.) has seen its demand increase, prompting the development of new rice cultivars with higher bio-functional properties than the commonly used white rice. Ordinary rice varieties are already known to be a potential source of a number of nutritional and bioactive compounds. To further enhance rice's nutritive value, germination is applied, which also makes the grain tastier and more palatable when cooked. Pigmented rice, on the other hand, has become increasingly popular in recent years for its greater antioxidant potential and other nutraceutical properties, which can help counter the increasing incidence of metabolic diseases. Combining these two parameters, this study quantitatively determined the major bioactive compounds, before and after germination, of South Korea's newly developed purple pigmented rice cultivar Superjami (SJ) and red pigmented rice cultivar Superhongmi (SH) and compared them against a non-pigmented normal brown (NB) rice variety. Powdered rice grains were subjected to a 72-hour germination period, and the quantities of GABA, γ-oryzanol, ferulic acid, and the tocopherol and tocotrienol homologues were compared against their pre-germinated condition using γ-aminobutyric acid (GABA) analysis and high-performance liquid chromatography (HPLC). The results revealed the effectiveness of germination in enhancing the bioactive components in all rice samples. GABA contents in the germinated cultivars increased by more than 10-fold, in the order SJ > SH > NB. In addition, the purple variety (SJ) had the highest total γ-oryzanol and ferulic acid contents, which increased by more than 2-fold after germination, followed by the red cultivar SH and then the control NB. Germinated varieties also possessed higher total tocotrienol content than their pre-germinated state. As for total tocopherol, SJ had a higher quantity, but the red-pigmented SH (0.16 mg/kg) showed a lower total tocopherol content than the control NB (0.86 mg/kg). However, all tocopherol and tocotrienol homologues were present only in small amounts (< 3.0 mg/kg) in all pre-germinated and germinated samples. In general, all the analyzed pigmented cultivars possessed higher levels of bioactive compounds than the control NB variety, and, regardless of strain, germinated samples had higher levels than their pre-germinated counterparts, demonstrating the effectiveness of germination in enhancing bioactive constituents. Overall, these results suggest the potential of pigmented rice varieties as a natural source of nutraceuticals in bio-functional food development.
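
For readers who want to reproduce the kind of fold-change comparison reported above, a minimal sketch follows; the GABA values are placeholders, not the study's measurements.

```python
# Simple illustration of the pre- vs. post-germination fold-change
# comparison per cultivar. Numbers are invented stand-ins.
import pandas as pd

gaba = pd.DataFrame(
    {"pre_germination": [1.2, 1.0, 0.8],     # mg/100 g, illustrative
     "germinated":      [15.6, 12.4, 8.9]},
    index=["Superjami", "Superhongmi", "Normal Brown"])
gaba["fold_change"] = gaba["germinated"] / gaba["pre_germination"]
print(gaba.round(2))
```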

Keywords: bioactive compounds, germinated rice, superhongmi, superjami

Procedia PDF Downloads 366
183 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of the flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; on the other hand, opponents argue that official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure by comparing the hedonic price of flood potential before and after it. The flood potential map used here was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model with geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially; since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias their results. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, as the effect of flood prevention may vary dramatically by location.
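
To make the econometric idea concrete, here is a minimal, self-contained GWR sketch: a weighted least-squares fit at every location with a Gaussian distance kernel, so the flood-potential coefficient can vary in space. The variables, the bandwidth, and the toy data are assumptions; the paper's spatial fixed-effect extension is not reproduced here.

```python
# Illustrative geographically weighted regression (GWR) for a hedonic
# price model: one weighted least-squares fit per location.
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Return an (n_points, n_features + 1) array of local coefficients."""
    n = len(y)
    Xd = np.hstack([np.ones((n, 1)), X])          # add intercept
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
        W = np.diag(w)
        # Weighted least squares: beta_i = (X'WX)^-1 X'Wy
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas

# Toy example: price explained by floor area and a flood-potential dummy.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
X = np.column_stack([rng.uniform(20, 120, 200),    # floor area
                     rng.integers(0, 2, 200)])     # in flood-potential zone
# The flood discount grows from west to east, so its coefficient varies.
y = 300 + 2.0 * X[:, 0] - (5 + coords[:, 0]) * X[:, 1] + rng.normal(0, 5, 200)
local_beta = gwr_coefficients(coords, X, y, bandwidth=2.0)
print("local flood coefficient range:",
      local_beta[:, 2].min().round(1), "to", local_beta[:, 2].max().round(1))
```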

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 269
182 The Application of Raman Spectroscopy in Olive Oil Analysis

Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli

Abstract:

Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in the oil's biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and the amount of precipitation can have a strong influence on the EVOO content of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed using a dual approach combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOO samples according to their regional geographical origin was obtained. Raman spectra were acquired with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO: in combination with statistical analysis, it makes possible the assessment of specific sample contents and allows oils to be classified according to their geographical and varietal origin.
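
The chemometric pipeline (dimension reduction followed by discriminant analysis) can be sketched in a few lines. The snippet below runs PCA and then LDA on synthetic stand-in spectra; with real baseline-corrected Raman spectra in place of the random matrix, the same pipeline would yield the regional classification described above.

```python
# Hedged sketch of the chemometric step: PCA + linear discriminant
# analysis to classify EVOO Raman spectra by region of origin.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_regions, n_per_region, n_wavenumbers = 7, 20, 600
# Synthetic spectra: each region's mean shifted slightly, as a stand-in.
X = np.vstack([rng.normal(loc=i * 0.05, scale=1.0,
                          size=(n_per_region, n_wavenumbers))
               for i in range(n_regions)])
y = np.repeat(np.arange(n_regions), n_per_region)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),    # compress collinear bands
                      LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```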

Keywords: authentication, chemometrics, olive oil, raman spectroscopy

Procedia PDF Downloads 304
181 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (polysaccharides contribute up to 75% of the composition) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose, and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which releases reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups, and crystalline cellulose, contribute to its recalcitrance and lead to poor sugar yields upon enzymatic hydrolysis. A pre-treatment step is therefore generally applied before enzymatic treatment to remove the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical, and physico-chemical pre-treatments, and a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments was also applied to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugar, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. The generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was also monitored. Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results were used to design a sequential lignocellulose pre-treatment protocol that decreases the formation of enzyme inhibitors and increases the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment proved better in terms of total reducing sugar yield and produced a lower content of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatments rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 344
180 The Social Aspects of Code-Switching in Online Interaction: The Case of Saudi Bilinguals

Authors: Shirin Alabdulqader

Abstract:

This research aims to investigate the concept of code-switching (CS) between English and Arabic and the CS practices of Saudi online users through a Translanguaging (TL) lens, for a more inclusive view of the nature of the data from the study. It employs Digitally Mediated Communication (DMC), specifically the WhatsApp and Twitter platforms, in order to understand how users employ online resources to communicate with others on a daily basis. The project looks beyond language and considers the multimodal affordances (visual and audio means) that interlocutors utilise in their online communicative practices to shape their online social existence. This exploratory study is based on a data-driven interpretivist epistemology, as it aims to understand how meaning (reality) is created by individuals within different contexts. It used a mixed-method approach combining qualitative and quantitative strands: in the former, data were collected from online chats and interview responses, while in the latter a questionnaire was employed to understand the frequency of, and relations between, the participants' linguistic and non-linguistic practices and their social behaviours. The participants were eight bilingual Saudi nationals (men and women, aged between 20 and 50 years old) who interacted with others online. These participants provided their online interactions, participated in an interview, and responded to a questionnaire. The study data were gathered from 194 WhatsApp chats and 122 tweets and were analysed and interpreted on three levels: conversational turn-taking and CS; the linguistic description of the data; and CS and persona. This project contributes to the emerging field of systematically analysing online Arabic data and to the fields of multimodality and bilingual sociolinguistics. The findings are reported for each of the three levels. For conversational turn-taking, the CS analysis revealed that CS was used to accomplish negotiation and develop meaning in the conversation. With regard to the linguistic practices in the CS data, the majority of code-switched words were content morphemes. The third level of interpretation concerns CS and its relationship with identity, where two types of identity were indexed: absolute identity and contextual identity. This study contributes to the DMC literature and bridges some of the existing gaps. Most, if not all, of the findings support the notion of TL: multiliteracy is one's ability to decode multimodal communication, and this multimodality contributes to meaning. Whether these online affordances are used by monolinguals or multilinguals, and whether they are perceived by specific generations or by any online multiliterate, the study documents the linguistic features of CS utilised by Saudi bilinguals and determines the relationship between these features and the contexts in which they appear.
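
One low-level step in analysing such bilingual chat data, flagging the points where a message switches between Arabic and Latin script, can be sketched as follows. The Unicode-range heuristic and the sample message are illustrative; a full analysis would also need to handle Arabizi, emoji, and other affordances.

```python
# Hedged sketch: detecting script switch points in a chat message by
# checking each token for Arabic-block characters.
import re

ARABIC = re.compile(r'[\u0600-\u06FF]')

def script_of(token):
    return "arabic" if ARABIC.search(token) else "latin"

def switch_points(message):
    tokens = message.split()
    scripts = [script_of(t) for t in tokens]
    # Return each pair of adjacent tokens where the script changes.
    return [(tokens[i], tokens[i + 1])
            for i in range(len(tokens) - 1)
            if scripts[i] != scripts[i + 1]]

msg = "yalla نلتقي tomorrow إن شاء الله ok?"
print(switch_points(msg))
```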

Keywords: social media, code-switching, translanguaging, online interaction, saudi bilinguals

Procedia PDF Downloads 106
179 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses

Authors: Orit Baruth, Anat Cohen

Abstract:

Online courses have become common in many learning programs and various learning environments, particularly in higher education, and the social distancing forced in response to the COVID-19 pandemic has increased the demand for them. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today's technology allows tailored learning platforms to be produced, namely, personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs and assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students' satisfaction. The satisfaction level and personality traits of 108 students who participated in a fully online course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify clusters of learners according to their personality traits, and correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. The findings suggest that learners in the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners in the 'Non-neurotic' cluster. Learners in the 'Conscientious' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extrovert' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path; for the neurotic cluster, however, an extended selection may be more suitable, or alternatively they could be offered the TPLS they dislike least. The study findings clearly indicate that personality plays a significant role in a learner's satisfaction level; consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area: establishing that different personalities need different learning solutions may contribute to a better design of online courses, leaving no learner behind, whether or not they like online learning.
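
A minimal sketch of the clustering step follows: k-means on standardized Big Five scores, then mean satisfaction with each TPLS per cluster. The trait and satisfaction scores are randomly generated stand-ins; only the procedure mirrors the study.

```python
# Hedged sketch: k-means clustering of Big Five trait scores, then mean
# TPLS satisfaction per cluster. All scores below are synthetic.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
traits = pd.DataFrame(rng.uniform(1, 5, size=(108, 5)),
                      columns=["openness", "conscientiousness",
                               "extraversion", "agreeableness",
                               "neuroticism"])
satisfaction = pd.DataFrame(rng.uniform(1, 5, size=(108, 5)),
                            columns=["discussion", "digital_books",
                                     "assignments", "surveys", "media"])

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(StandardScaler().fit_transform(traits))

# One row of mean TPLS satisfaction per personality cluster.
print(satisfaction.groupby(labels).mean().round(2))
```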

Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions

Procedia PDF Downloads 80
177 Working From Home: On the Relationship of Place Attachment to the Workplace, Extraversion, and Segmentation Preference to Burnout

Authors: Diamant Irene, Shklarnik Batya

Abstract:

In addition to its widespread effects on health and the economy, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the trend toward complete or partial work from home as part of social distancing. In fact, these changes accelerated an existing tendency toward work flexibility already underway before the pandemic. Technology and advanced means of communication have led to a re-assessment of the 'place of work' as a physical space in which work takes place. Today, workers can remotely hold meetings, manage projects, and work in groups, and various studies indicate that this type of work has no adverse effect on productivity. However, from the worker's perspective, despite the numerous advantages associated with working from home, such as convenience, flexibility, and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment, and erosion of the home-work boundary, all of which are risk factors for quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll's Conservation of Resources theory, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, examining psychological characteristics and their connection to the formation of burnout. Based on the conceptualization of place attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by place attachment to the workplace would suffer more from burnout when working from home. We also assumed that extraverted individuals, characterized by a need for social interaction at the workplace, and individuals with a segmentation preference - a need for separation between different life domains - would suffer more from burnout, especially among fully remote workers relative to partially remote workers. 194 workers aged 19-53 from different sectors, of whom 111 worked fully from home and 83 worked partially from home, were tested using an online questionnaire distributed through social media. The results supported our assumptions. The repercussions of these findings for the future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.
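
The group comparison described above can be framed as a moderation test: does a trait such as extraversion predict burnout more strongly among fully remote workers? The OLS sketch below, with an interaction term on synthetic data, shows one way to run it; all variable names and values are assumptions, not the study's data.

```python
# Illustrative moderation test: burnout ~ extraversion * fully_remote,
# fitted on synthetic data shaped like the sample described (n = 194).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 194
df = pd.DataFrame({
    "fully_remote": np.r_[np.ones(111), np.zeros(83)],  # 111 full, 83 hybrid
    "extraversion": rng.uniform(1, 5, n),
})
# Built-in interaction effect, purely for illustration.
df["burnout"] = (2 + 0.4 * df.extraversion * df.fully_remote
                 + rng.normal(0, 0.5, n))

model = smf.ols("burnout ~ extraversion * fully_remote", data=df).fit()
print(model.params.round(2))   # interaction term carries the moderation
```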

Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19

Procedia PDF Downloads 167
177 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures

Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar

Abstract:

In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young's modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic design methodology for 4D-printed structures is developed, in which a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of considering geometric nonlinearity in the finite element modeling. The efficacy of topology optimization for designing 4D-printed structures is explored by applying the methodology to a variety of design problems, in both 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of fused deposition modeling (FDM). The acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while the polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating were performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs were generated through topology optimization, and samples were printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
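
Two ingredients named above, the density-based material interpolation and the temperature-dependent stiffness across the glass transition, can be sketched compactly. In the snippet below, all moduli, the transition temperature, and the smoothing width are assumed values chosen only to illustrate the shape of the relations, not the paper's calibrated model.

```python
# Hedged sketch: SIMP-style interpolation between two polymers, with a
# smooth sigmoid drop in stiffness across the glass transition.
import numpy as np

def glassy_modulus(T, E_glassy=2000.0, E_rubbery=20.0, Tg=105.0, width=10.0):
    """Young's modulus (MPa) of the ABS-like phase across its glass
    transition, modelled as a smooth step centred at Tg (assumed values)."""
    frac = 1.0 / (1.0 + np.exp((T - Tg) / width))
    return E_rubbery + (E_glassy - E_rubbery) * frac

def simp_modulus(rho, T, E_tpu=26.0, penal=3.0):
    """Density-based interpolation: rho = 1 -> glass-transition polymer,
    rho = 0 -> prestressed TPU-like polymer (penalization exponent 3)."""
    return E_tpu + rho**penal * (glassy_modulus(T) - E_tpu)

for T in (20.0, 105.0, 140.0):
    print(f"T={T:5.1f} C  E(rho=1)={simp_modulus(1.0, T):8.1f} MPa")
```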

Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization

Procedia PDF Downloads 173
176 First-Trimester Screening of Preeclampsia in Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second half of pregnancy characterized by high morbidity and multiorgan damage; many complex pathogenic mechanisms are now implicated in this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide, and the statistics alone convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial or ethnic origin and geographical region), in 75% of cases in a mild form and in 25% in a severe form. In severe pre-eclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only definitive treatment is ending the pregnancy, timely diagnosis and prevention of the disease are essential: identifying pregnant women at high risk of PE and providing prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has proven effective, with screening performance superior to the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital "Pineo medical ecosystem" who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and having their questions answered. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia risk was performed between 11 and 13 weeks of pregnancy. The following were evaluated: anamnesis, uterine artery Doppler, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited, and 51 were diagnosed with preeclampsia (34.5% of the high-risk group versus 6.5% of the low-risk group; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. Larger patient studies are needed for final conclusions; the research is ongoing.
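
The Bayes-theorem combination at the heart of the FMF-style model can be illustrated with a toy odds update. The sketch below multiplies prior odds from maternal history by marker-derived likelihood ratios; every number is invented for illustration, and none of it reflects the FMF algorithm's actual parameters.

```python
# Illustrative Bayes update for screening risk: prior odds from maternal
# characteristics, multiplied by likelihood ratios from markers such as
# MAP, uterine artery PI, and PAPP-A MoM values (all values hypothetical).

def posterior_risk(prior_risk, likelihood_ratios):
    """Update a prior probability with independent likelihood ratios via
    odds: posterior_odds = prior_odds * product(LRs)."""
    odds = prior_risk / (1.0 - prior_risk)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical patient: 1-in-100 prior risk from history; elevated MAP and
# uterine artery PI, and low PAPP-A MoM, each contributing an LR > 1.
risk = posterior_risk(0.01, likelihood_ratios=[2.5, 3.0, 1.8])
print(f"posterior risk of PE: 1 in {round(1 / risk)}")
```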

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 52