Search results for: Src SH2 domain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1781

41 Rural-To-Urban Migrants' Experiences with Primary Care in Four Types of Medical Institutions in Guangzhou, China

Authors: Jiazhi Zeng, Leiyu Shi, Xia Zou, Wen Chen, Li Ling

Abstract:

Background: China is facing the unprecedented challenge of rapidly increasing rural-to-urban migration. Due to the household registration system, migrants are in a vulnerable position when they attempt to access primary care services. A strong primary care system can reduce health inequities and mitigate socioeconomic disparities in healthcare utilization. The literature indicates that migrants are more reliant on the primary care system than local residents. Although the Chinese government has attached great importance to creating an efficient health system, primary care services are still underutilized. The referral system between primary care institutions and hospitals has not yet been completely established in China, and the general population often goes directly to hospitals instead of primary care institutions for primary care. Primary care institutions generally consist of community health centers (CHCs) and community health stations (CHSs) in urban areas, and township health centers (THCs) and rural health stations (RHSs) in rural areas. In addition, primary care services are also provided by the outpatient departments of municipal hospitals and tertiary hospitals. A better understanding of migrants’ experiences with primary care in these medical institutions is critical for improving the performance of primary care institutions and for identifying the attributes that require further attention. The purpose of this pioneering study is to explore rural-to-urban migrants’ experiences in primary care, compare their primary care experiences across four types of medical institutions in Guangzhou, China, and suggest implications for targeted interventions to improve primary care for migrants. Methods: This was a cross-sectional study conducted with 736 rural-to-urban migrants in Guangzhou, China, in 2014. A multistage sampling method was employed. A validated Chinese version of the Primary Care Assessment Tool - Adult Short Version (PCAT-AS), which consists of 10 domains, was used to collect information on migrants’ primary care experiences. Analysis of covariance was conducted to compare PCAT domain scores and total scores among migrants accessing the four types of medical institutions. Multiple linear regression models were used to explore factors associated with PCAT total scores. Results: After controlling for socio-demographic characteristics, migrant characteristics, health status and health insurance status, migrants accessing primary care in tertiary hospitals had the highest PCAT total scores when compared with those accessing primary care in THCs/RHSs (25.49 vs. 24.18, P=0.007) and CHCs/CHSs (25.49 vs. 24.24, P=0.006). There was no statistically significant difference in PCAT total scores between migrants accessing primary care in CHCs/CHSs and those in municipal hospitals (24.24 vs. 25.02, P=0.436). Factors positively associated with higher PCAT total scores also included insurance covering part of the healthcare payment (P < 0.001). Conclusions: This study highlights the need for improvement in the primary care provided by primary care institutions to rural-to-urban migrants. Migrants receiving primary care from THCs, RHSs, CHCs and CHSs reported worse primary care experiences than those receiving primary care from tertiary hospitals. Relevant medical insurance policies should be implemented to provide affordable healthcare services for migrants accessing primary care. Further research is needed to explore the specific reasons for the poorer PCAT scores of primary care institution users.
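
As an illustration of the analysis described in the Methods, the sketch below runs an ANCOVA-style comparison of PCAT totals across institution types with covariate adjustment, using statsmodels on synthetic stand-in data; the column names and values are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Synthetic stand-in for the survey data (hypothetical columns)
df = pd.DataFrame({
    "institution": rng.choice(["CHC_CHS", "THC_RHS", "municipal", "tertiary"], n),
    "age": rng.integers(18, 60, n),
    "insured": rng.choice([0, 1], n),
    "pcat_total": rng.normal(25, 2, n),
})

# ANCOVA: institution type as the factor, covariates adjusted for
model = smf.ols("pcat_total ~ C(institution) + age + insured", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # adjusted F-test per term
print(model.params)                      # adjusted group effects, cf. multiple linear regression
```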

Keywords: China, PCAT, primary care, rural-to-urban migrants

Procedia PDF Downloads 357
40 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the failure causes of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has grown over the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can lead to instability or crashes. Specific hardware and software hardening techniques have been developed to make such systems more reliable. The SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profile limits. Understanding where errors occur is another challenge because of the complexity of operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we carried out a fault injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V by probing some of the processor registers. Probes are recorded in a ring buffer. Associated hardware watchpoints provide control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; a typical usage is recording operating system context switches. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the failure cause. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domains are automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.
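
The recording mechanism described above can be pictured with a small software model: a probe ring buffer whose capture is gated by watchpoints. This is a minimal behavioral sketch with illustrative names and sizes, not the actual RTL interface.

```python
from collections import deque

class ProbeMonitor:
    """Toy model of the probe recorder: ring buffer + watchpoint control."""

    def __init__(self, depth=256):
        self.ring = deque(maxlen=depth)  # oldest samples overwritten, as in a HW ring buffer
        self.recording = False
        self.watchpoints = {}            # address -> action ("start", "stop", "halt")

    def set_watchpoint(self, addr, action):
        self.watchpoints[addr] = action

    def on_cycle(self, pc, regs, bus_addr):
        action = self.watchpoints.get(pc)
        if action == "start":
            self.recording = True
        elif action == "stop":
            self.recording = False
        elif action == "halt":
            raise SystemExit(f"CPU halted at watchpoint {pc:#x}")
        if self.recording:
            self.ring.append((pc, dict(regs), bus_addr))

    def dump(self):
        # In the real design the remote controller reads this over the
        # dedicated debug bus, even if the CPU itself is unresponsive.
        return list(self.ring)

mon = ProbeMonitor(depth=8)
mon.set_watchpoint(0x1000, "start")
for pc in range(0x0FF0, 0x1010):
    mon.on_cycle(pc, {"x1": pc & 0xFF}, bus_addr=pc + 0x100)
print(len(mon.dump()), "samples captured")
```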

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 230
39 Solid Polymer Electrolyte Membranes Based on Siloxane Matrix

Authors: Natia Jalagonia, Tinatin Kuchukhidze

Abstract:

Polymer electrolytes (PE) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, a PE must maintain high ionic conductivity and mechanical stability at both high and low relative humidity. The polymer electrolyte also needs excellent chemical stability for long-term operation and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and occurs primarily in the amorphous regions of the polymer electrolyte. Crystallinity restricts polymer backbone segmental motion and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought through polymers that have highly flexible backbones and a largely amorphous morphology. Interest in polymer electrolytes has also been increased by potential applications of solid polymer electrolytes in high energy density solid-state batteries, gas sensors and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as the necessary minimum for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are the most thoroughly investigated, reaching room-temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and high segmental mobility are important prerequisites for high ionic conductivities. Another necessary condition for high ionic conductivity is high salt solubility in the polymer, which is most often achieved by donors such as ether oxygen or imide groups on the main chain or on the side groups of the PE. It is also well established that lithium ion coordination takes place predominantly in the amorphous domain, and that the segmental mobility of the polymer is an important factor in determining the ionic mobility. Great attention has been paid to PEO-based amorphous electrolytes obtained by the synthesis of comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the present work is to obtain solid polymer electrolyte membranes using PMHS as a matrix. For this purpose, the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene-glycol monomethyl ether and vinyltriethoxysilane at a 1:28:7 ratio of initial compounds, in the presence of Karstedt’s catalyst, platinum hydrochloric acid (0.1 M solution in THF) and platinum-on-carbon catalyst in 50% solution of anhydrous toluene, have been studied. The synthesized oligomers are vitreous liquid products, which are well soluble in organic solvents, with specific viscosity ηsp ≈ 0.05 - 0.06. The synthesized oligomers were analysed with FTIR and ¹H, ¹³C, ²⁹Si NMR spectroscopy. The synthesized polysiloxanes were investigated with wide-angle X-ray, gel-permeation chromatography, and DSC analyses. Solid polymer electrolyte membranes were obtained via sol-gel processing of polymer systems doped with lithium trifluoromethylsulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide. The dependence of ionic conductivity on temperature and salt concentration was investigated, and the activation energies of conductivity for all obtained compounds were calculated.
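
As a minimal illustration of the final step, an Arrhenius fit of temperature-dependent conductivity, sigma(T) = sigma0 exp(-Ea/(kB T)), yields the activation energy; the data points below are placeholders, not the paper's measurements.

```python
import numpy as np

kB = 8.617333e-5                                     # Boltzmann constant, eV/K
T = np.array([293.0, 313.0, 333.0, 353.0])           # temperatures, K (illustrative)
sigma = np.array([1e-7, 4e-7, 1.2e-6, 3e-6])         # conductivities, S/cm (illustrative)

# Linear fit of ln(sigma) vs 1/T; the slope gives -Ea/kB
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea = -slope * kB
print(f"Ea = {Ea:.3f} eV, sigma0 = {np.exp(intercept):.2e} S/cm")
```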

Keywords: synthesis, PMHS, membrane, electrolyte

Procedia PDF Downloads 258
38 Impact of School Environment on Socio-Affective Development: A Quasi-Experimental Longitudinal Study of Urban and Suburban Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Richard B. Speaker, Pat Austin

Abstract:

This study used two psychological scales to examine the levels of social and emotional intelligence and moral judgment of over 500 gifted and talented high school students in various academic and creative arts programs in a large metropolitan area in the southeastern United States. For decades, numerous models and programs purporting to encourage socio-affective characteristics of adolescent development have been explored in curriculum theory and design. The socio-affective domain merges the social, emotional, and moral domains. It encompasses interpersonal relations and social behaviors; the development and regulation of emotions; personal and gender identity construction; empathy development; and moral development, thinking, and judgment. Examining development in these socio-affective domains can provide insight into why some gifted and talented adolescents are not successful in adulthood despite advanced IQ scores, in particular whether the nonintellectual characteristics of gifted and talented individuals, such as emotional, social and moral capabilities, are as advanced as their intellectual abilities, and how these are related to each other. Unique characteristics distinguish gifted and talented individuals; these may appear as strengths, but there is the potential for problems to accompany them. Although many thrive in their school environments, some gifted students struggle rather than flourish. In the socio-affective domain, these adolescents face special intrapersonal, interpersonal, and environmental problems. Gifted individuals’ cognitive, psychological, and emotional development occurs asynchronously, in multidimensional layers at different rates and unevenly across ability levels. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of gifted and talented adolescents. This quasi-experimental longitudinal study examined students in several gifted and talented education programs (a creative arts school, urban charter schools, and suburban public schools) for (1) their level of socio-affective development and (2) whether a particular gifted and talented program encourages developmental growth. The following research questions guided the study: (1) How do academically and artistically talented gifted 10th and 11th grade students perform on psychometric scales of social and emotional intelligence and moral judgment? Do they differ from their age or grade normative sample? Are there gender differences among gifted students? (2) Does the school environment impact 10th and 11th grade gifted and talented students’ socio-affective development? Do gifted adolescents who participate in a particular school gifted program differ in their developmental profiles of social and emotional intelligence and moral judgment? Students’ performances on psychometric instruments were compared over time and by type of program. Participants took pre-, mid-, and post-tests over the course of an academic school year, with the Defining Issues Test (DIT-2) assessing moral judgment and the BarOn EQ-i:YV assessing social and emotional intelligence. Based on these assessments, quantitative differences in growth on the psychological scales (individual and school) were examined, and change scores between schools were compared. If a school showed change, artifacts (culture, curricula, instructional methodology) provided insight into the environmental qualities that produced this difference.
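
A minimal sketch of the kind of change-score comparison this design implies, on synthetic stand-in data (column and group names are hypothetical): pre-to-post gains per student, compared across school types with a one-way ANOVA.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "school_type": rng.choice(["arts", "charter", "suburban"], 150),
    "dit2_pre": rng.normal(30, 5, 150),    # stand-in pre-test moral judgment scores
    "dit2_post": rng.normal(33, 5, 150),   # stand-in post-test scores
})
df["dit2_change"] = df["dit2_post"] - df["dit2_pre"]

# One-way ANOVA on change scores between school programs
groups = [g["dit2_change"].to_numpy() for _, g in df.groupby("school_type")]
f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.4f}")
```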

Keywords: gifted and talented education, moral development, socio-affective development, socio-affective education

Procedia PDF Downloads 163
37 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education

Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant

Abstract:

In 21st century higher education (HE), diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, instead of their social or cultural background, is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focuses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology). Computer-, video- and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship or occupation. Focusing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students’ skills and abilities at the beginning and the end of their studies, as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on these research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities for students can be created to support students’ learning development, promote their individual potential and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.

Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs

Procedia PDF Downloads 142
36 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing

Authors: Arjun Kumar Rath, Titus Dhanasingh

Abstract:

Embedded product development requires software to test hardware functionality during development and to find issues during volume manufacturing. As components become more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom built for every product and remain unusable for other variants, and a majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all the testing needs of embedded systems; hence the need for an extensible framework that integrates a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform, even when it belongs to the same domain with an identical hardware architecture. This approach has drawbacks such as non-reusability (platform-specific libraries cannot be reused), the need to maintain source infrastructure for individual hardware platforms, and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance. A desirable strategy is one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, across phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed that can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phases enables the development of well-tested systems before functional verification and improves time to market to a large extent.
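
A toy sketch of the plug-in structure such a framework might use, keeping test cases isolated from platform-specific bring-up; the decorator, class, and method names are illustrative assumptions, not the actual ETF API.

```python
TESTS = []

def etf_test(fn):
    """Register a test so a CI job (e.g. Jenkins) can discover and run it."""
    TESTS.append(fn)
    return fn

class DummyPlatform:
    """Stand-in for a board support object; real ones wrap HW init/config."""
    def init_board(self):
        self.ready = True
    def read_gpio(self, pin):
        return pin % 2          # fake loopback wiring for the demo

@etf_test
def test_gpio_loopback(platform):
    platform.init_board()
    assert platform.read_gpio(3) == platform.read_gpio(5)

def run_all(platform):
    # Test cases only see the platform interface, never board-specific code
    results = {}
    for test in TESTS:
        try:
            test(platform)
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    return results

print(run_all(DummyPlatform()))
```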

Keywords: board diagnostics software, embedded system, hardware testing, test frameworks

Procedia PDF Downloads 147
35 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate risk factors impacting the implementation of the audit risk model and to propose a complementary risk-based instrument, key risk indicators (KRIs), to form substantive risk judgments and mitigate a heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate a heightened RMM. Data were collected through two independent samples involving 767 participants and analyzed using exploratory and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit effort needed to reduce the RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments, and it helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the scale was developed in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk, but there could be additional risk factors that influence the assessment of the RMM; future research could therefore investigate other risk segments, such as operational and financial risks, to bring broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and suggest an extended risk model.
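
To illustrate the scale-development pipeline, the sketch below runs an exploratory factor analysis on synthetic responses to seventeen items with two latent risk components, then regresses audit effort on the factor scores; all names and data are illustrative, not the study's.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
latent = rng.normal(size=(400, 2))               # two underlying risk components
loadings = rng.normal(size=(2, 17))              # seventeen risk items
items = latent @ loadings + rng.normal(scale=0.5, size=(400, 17))
audit_effort = latent @ np.array([1.5, 0.8]) + rng.normal(scale=0.3, size=400)

# Exploratory factor analysis with varimax rotation (the pretest stage)
fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(items)                 # factor scores per respondent

# Predictive power of the extracted KRIs for audit effort
reg = LinearRegression().fit(scores, audit_effort)
print("R^2 =", round(reg.score(scores, audit_effort), 3))
```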

Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability

Procedia PDF Downloads 77
34 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
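
A compact sketch of the core idea, under illustrative assumptions (synthetic data, a one-hidden-layer network, plain global-best PSO): the network's flattened weight vector is optimized by PSO on mean squared error rather than by backpropagation. This is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                  # 5 project features (stand-in)
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)   # synthetic cost target

H = 8                                   # hidden units
n_w = 5 * H + H + H + 1                 # W1, b1, W2, b2 flattened into one vector

def mse(w):
    W1 = w[:5 * H].reshape(5, H); b1 = w[5 * H:5 * H + H]
    W2 = w[5 * H + H:5 * H + 2 * H]; b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# Plain PSO: inertia, cognitive and social velocity terms
n_p, iters, w_in, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.normal(size=(n_p, n_w)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_p, 1)), rng.random((n_p, 1))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best MSE:", pbest_f.min())
```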

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 61
33 Analysis of Minimizing Investment Risks in Power and Energy Business Development by Combining Total Quality Management and International Financing Institutions Project Management Tools

Authors: M. Radunovic

Abstract:

The region of Southeastern Europe has a substantial energy resource potential and is witnessing an increasing rate of power and energy project investments. This comes as a result of countries harmonizing their legal frameworks and market regulations to conform to those of the European Union, enabling direct private investments. Funding in the power and energy market in this region originates from various sources and investment entities, including commercial and institutional ones. Risk anticipation and assessment are crucial to project success, especially given the long exploitation period of projects in the power and energy domain, as well as the wide range of stakeholders involved. This paper analyzes the possibility of the combined application of tools used in total quality management and by international financing institutions for project planning, execution and evaluation, with the goal of anticipating, assessing and minimizing the risks that might occur in the development and execution phases of a power and energy project in the market of Southeastern Europe. The history of successful project management and investment in both the industrial and institutional sectors provides sufficient experience, guidance and internationally adopted tools for proper project assessment of investments in power and energy. The business environment of Southeastern Europe offers immense potential for developing power and energy projects of various magnitudes, depending on stakeholders’ interest, and the diversification of investment sources provides assurance that there is interest in, and commitment to invest in, this market. Global economic and political developments will intensify the pace of investments in the upcoming period. The proposed approach accounts for the key parameters that contribute to the sustainability and profitability of a project, which include the technological, educational, social and economic gaps between the Southeastern European region and Western Europe, market trends in equipment design and production on a global level, an environmentally friendly approach to renewable energy sources as well as conventional power generation systems, and finally the long-term effect of the One Belt One Road Initiative, led by the People’s Republic of China, on the power and energy market of this region in the upcoming period. The analysis will outline the key benefits of the approach as well as the accompanying constraints; in parallel, it will provide an overview of the dominant threats and opportunities in the present and future business environment and their influence on the proposed application. Through concrete examples, the full potential of this approach will be presented along with necessary improvements that need to be implemented. The number of power and energy projects being developed in Southeastern Europe will increase in the upcoming period, and proper risk analysis will lead to minimizing project failures. The proposed combination of reliable project planning tools from different investment areas can prove beneficial in future power and energy investments and help guarantee their sustainability and profitability.

Keywords: capital investments, lean six sigma, logical framework approach, logical framework matrix, one belt one road initiative, project management tools, quality function deployment, Southeastern Europe, total quality management

Procedia PDF Downloads 109
32 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world and the main hot sand desert of the country, occupies nearly 385,000 km² (about 9% of the country's area) and harbours a flora of 682 species (63 introduced) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, relatively higher than that of the Sahara desert, which is very significant for conservationists to consider. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world, and the efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario, and the digitization process usually aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility, for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As part of this activity, laminar characterization (leaves being among the most important characters for assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling the process to be expanded and sped up. The digitization workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality-control elements, and finally place specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics (BI) and to present the ongoing effort to develop a database of the existing botanical collection in the institute repository. This effort is expected to become part of various global initiatives providing an effective biodiversity information facility, enabling access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
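
As a small illustration of the diversity-index step, the sketch below computes Shannon and Gini-Simpson indices over per-family specimen counts; the counts are placeholders, not the repository's actual tallies.

```python
import numpy as np

# Illustrative specimen counts per family (placeholders)
family_counts = {"Acanthaceae": 120, "Asteraceae": 310, "Amaranthaceae": 95,
                 "Apocynaceae": 60, "Bignoniaceae": 40}
n = np.array(list(family_counts.values()), dtype=float)
p = n / n.sum()                            # relative abundances

shannon_H = -(p * np.log(p)).sum()         # Shannon-Wiener index
simpson_D = 1.0 - (p ** 2).sum()           # Gini-Simpson index
print(f"H' = {shannon_H:.3f}, 1-D = {simpson_D:.3f}")
```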

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 231
31 Assessing Flexural Damage Mechanisms Induced by Mesoscopic Buckle Defects in Textile-Reinforced Polymer Matrix Composites Using Acoustic Emission Analysis

Authors: Christopher Okechukwu Ndukwe

Abstract:

This paper investigates and categorizes the flexural damage mechanisms in composite materials caused by mesoscopic out-of-plane buckle defects that occur during the initial stage of the resin transfer molding (RTM) process. The findings of this study have significant practical implications for the manufacturing and use of composite materials, as they provide a deeper understanding of these damage mechanisms and their analysis. During the initial stage of shaping a preform, alterations and distortions in the reinforcement sample can lead to significant defects, such as buckling, especially when forming double-curvature geometries. These recurring mesoscopic defects have been investigated using a specialized laboratory bench designed to reproduce buckle defects like those found in complex geometric shapes, such as tetrahedrons. The study examined two sample configurations with buckle defects in the longitudinal and transverse directions, alongside a reference sample for comparison. An acoustic emission (AE) system, a well-regarded non-contact method for monitoring structural health, was used to analyze the mechanical behavior of the material samples in detail. An unsupervised K-means algorithm was employed to classify the damage mechanisms (matrix cracking, interface damage, and fiber breakage) linked to the samples' failure. A standard was established based on three AE parameters: absolute energy, amplitude, and the number of AE events. This standard helped identify the origin and sequence of damage propagation. Initially, the AE parameter results were superimposed on the flexural loading curves to pinpoint the loading phases during which damage began and the specific points at which the samples ultimately failed. The normalized density of AE events related to the different damage mechanisms was evaluated by analyzing the number of AE events within the amplitude domain of the AE signals. The amplitude ranges of the identified damage mechanisms illustrate the progression and order of load transfer among the elements of the composite material. In the reference sample, the AE event signals corresponding to the three classes of damage mechanisms partially overlap with adjacent signals. In contrast, in the two defective sample configurations, the overlapping AE event signals for the respective damage mechanisms converged within the intermediate damage mode area at specific points, depending on the sample configuration. The convergence points in the samples with transverse defects were identified relatively earlier than in the other samples. The matrix cracking and fiber breakage damage mechanisms are characterized by low and high amplitude ranges, respectively. The low-amplitude damage occurred over a more extended period, while the high-amplitude damage began much earlier, so the signals from both damage mechanisms converge at the center of the interface damage zone. This convergence suggests that all individual composite components fail concurrently at specific points in the defective samples, resulting in rapid fragmentation and ultimately contributing to failure. Overall, the results show that mesoscopic out-of-plane buckling in all directions affects the composite's flexural response, with more severe effects observed when the load is applied transversely.
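
A minimal sketch of the unsupervised classification step, assuming synthetic AE hits with hypothetical feature names: K-means groups events by amplitude and energy, and the per-cluster amplitude ranges suggest which damage mechanism each cluster represents.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Synthetic AE hits: three populations with increasing amplitude/energy
hits = pd.DataFrame({
    "amplitude_dB": np.concatenate([rng.normal(45, 3, 200),    # matrix cracking (low)
                                    rng.normal(60, 4, 120),    # interface damage (mid)
                                    rng.normal(80, 5, 60)]),   # fiber breakage (high)
    "abs_energy":   np.concatenate([rng.gamma(2, 1, 200),
                                    rng.gamma(4, 2, 120),
                                    rng.gamma(8, 4, 60)]),
})
X = StandardScaler().fit_transform(hits)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
hits["damage_class"] = km.labels_
# Amplitude range per cluster hints at which mechanism each cluster represents
print(hits.groupby("damage_class")["amplitude_dB"].agg(["min", "max", "count"]))
```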

Keywords: acoustic emission, composite reinforcement, damage mechanisms, mesoscopic buckle defects

Procedia PDF Downloads 11
30 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form

Authors: Claudia Trillo

Abstract:

Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation the cornerstone of any action aimed at lifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral and hence should be constantly assessed against equity and social inclusion. This paper draws from a robust qualitative empirical dataset, gathered through four years of research conducted in Boston, to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the rationale of the Habitat III Sustainable Development Goals. The concept of inclusive growth has been considered essential to assess the social innovation domain in each of the chosen cases. The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach emerging from the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization and co-creation, which collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of the collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based and social innovation-driven strategies. On the contrary, the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions that enable places to thrive and local economic activities to grow in a sustainable way.

Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies

Procedia PDF Downloads 106
29 Challenges, Practices, and Opportunities of Knowledge Management in Industrial Research Institutes: Lessons Learned from Flanders Make

Authors: Zhenmin Tao, Jasper De Smet, Koen Laurijssen, Jeroen Stuyts, Sonja Sioncke

Abstract:

Today, the quality of knowledge management (KM) has become one of the underpinning factors in the success of an organization, as it determines the effectiveness of capitalizing on the organization’s knowledge. Overall, KM in an organization consists of five aspects: (knowledge) creation, validation, presentation, distribution, and application. KM in research institutes is considered a cornerstone, as their activities cover all five aspects. Furthermore, KM in a research institute enables the steering committee to envision the future roadmap, identify knowledge gaps, and make decisions on future research directions. KM is even more challenging in industrial research institutes. From a technical perspective, technology advancement in the past decades calls for combinations of breadth and depth in expertise, which poses challenges for talent acquisition and, therefore, knowledge creation. From a regulatory perspective, strict intellectual property protection from industry collaborators and/or the contractual agreements made by possible funding authorities form extra barriers to knowledge validation, presentation, and distribution. From a management perspective, seamless KM activities are only guaranteed by interdisciplinary talents that combine technical background knowledge, management skills, and leadership, let alone international vision. From a financial perspective, the long feedback period of new knowledge, together with massive upfront investment costs and the low reusability of fixed assets, leads to a low return on research capital (RORC) that jeopardizes KM practice. In this study, we aim to address the challenges, practices, and opportunities of KM at Flanders Make, a leading European research institute specialized in the manufacturing industry. In particular, the analyses encompass an internal KM project which involves functionalities ranging from management to technical domain experts. This wide range of functionalities provides comprehensive empirical evidence on the challenges and practices with respect to the abovementioned KM aspects. We then ground our analysis in the critical dimensions of KM: individuals, socio-organizational processes, and technology. The analyses have three steps. First, we lay the foundation and define the environment of this study by briefing the KM roles played by different functionalities at Flanders Make. Second, we zoom in on the CoreLab MotionS, where the KM project is located; in this step, given the technical domains covered by MotionS products, the challenges in KM are addressed with respect to the five KM aspects and the three critical dimensions. Third, by detailing the objectives, practices, results, and limitations of the MotionS KM project, we justify the practices and opportunities derived from the execution of KM with respect to the challenges addressed in the second step. The results of this study are twofold. First, a KM framework that consolidates past knowledge is developed. A library based on this framework can therefore 1) provide an overview of past research output, 2) accelerate ongoing research activities, and 3) help envision future research projects. Second, the challenges in KM on both the individual level (actions) and the socio-organizational level (e.g., interactions between individuals) are identified. By doing so, suggestions and guidelines are provided for KM in the context of industrial research institutes. To this end, the results of this study are reflected against the findings in the existing literature.

Keywords: technical knowledge management framework, industrial research institutes, individual knowledge management, socio-organizational knowledge management

Procedia PDF Downloads 116
28 Implementation of a Web-Based Clinical Outcomes Monitoring and Reporting Platform across the Fortis Network

Authors: Narottam Puri, Bishnu Panigrahi, Narayan Pendse

Abstract:

Background: Clinical outcomes are the globally agreed upon, evidence-based measurable changes in health or quality of life resulting from patient care. Reporting of outcomes and their continuous monitoring provides an opportunity for both assessing and improving the quality of patient care. In 2012, the International Consortium for Health Outcomes Measurement (ICHOM) was founded, which has defined global Standard Sets for measuring the outcomes of various treatments. Method: Monitoring of clinical outcomes was identified as a pillar of Fortis’ core value of patient centricity. The project was started as an in-house Clinical Outcomes Reporting Portal developed by the Fortis Medical IT team, using the standard sets of outcome measurement developed by ICHOM. A pilot was run at Fortis Escorts Heart Institute from August to December 2013. Starting January 2014, it was implemented across 11 hospitals of the group. The scope was hospital-wide, covering the major clinical specialties: Cardiac Sciences and Orthopedics & Joint Replacement. The internally developed portal had its limitations in report generation, and the capturing of patient-reported outcomes was restricted. A year later, the company provisioned an ICHOM-certified software product which could provide a platform for data capturing and reporting to ensure compliance with all ICHOM requirements. A year after the launch of the software, Fortis Healthcare became the first healthcare provider in Asia to publish clinical outcomes data for the Coronary Artery Disease Standard Set (comprising Coronary Artery Bypass Grafts and Percutaneous Coronary Interventions) in the public domain (January 2016). Results: This project has helped in firmly establishing a culture of monitoring and reporting clinical outcomes across Fortis hospitals. Given the diverse nature of the healthcare delivery model at the Fortis network, which comprises hospitals of varying size and specialty mix and practically covers the entire span of the country, standardization of the data collection and reporting methodology is a huge achievement in itself. 95% case reporting was achieved, with more than 90% data completion, at the end of Phase 1 (March 2016). Post implementation, the group now has one year of data from its own hospitals. This has helped identify the gaps, plan ways to bridge them, and establish internal benchmarks for continual improvement. The value created for the group also includes: 1. The entire Fortis community has been sensitized to the importance of clinical outcomes monitoring for patient-centric care; initial skepticism and cynicism have been countered by effective stakeholder engagement and automation of processes. 2. Measuring quality is the first step in improving quality; data analysis has helped compare clinical results with best-in-class hospitals and identify improvement opportunities. 3. The clinical fraternity is extremely pleased to be part of this initiative and has taken ownership of the project. Conclusion: Fortis Healthcare is a pioneer in the monitoring of clinical outcomes. Implementation of the ICHOM standards has helped the Fortis Clinical Excellence Program improve patient engagement and strengthen its commitment to its core value of patient centricity. Validation and certification of the clinical outcomes data by an ICHOM-certified supplier adds confidence to its claim of being a leader in this space.

Keywords: clinical outcomes, healthcare delivery, patient centricity, ICHOM

Procedia PDF Downloads 239
27 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching, both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
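
A compact sketch of the pipeline's shape, on synthetic data: model-based imputation of missing observations followed by a stochastic gradient boosting classifier scored by AUC on a held-out "year". scikit-learn's IterativeImputer stands in for the multiple-imputation methods named above; it is an illustration, not the authors' code.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 6))                        # stand-in terrain/density/patrol features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan                 # 20% missing values

train = np.arange(1000) < 800                         # four "years" train, one "year" test
imp = IterativeImputer(random_state=0)                # model-based filling of missing values
X_tr, X_te = imp.fit_transform(X[train]), imp.transform(X[~train])

clf = GradientBoostingClassifier(subsample=0.8, random_state=0)  # subsample<1 => stochastic
clf.fit(X_tr, y[train])
print("AUC:", round(roc_auc_score(y[~train], clf.predict_proba(X_te)[:, 1]), 3))
```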

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 294
26 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces the damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage in the region; the most notable of these were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region; the calculation of wind damage to structures was also incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and to a synthetic storm, and in both storm cases the effect of natural dunes on coastal risk was investigated. The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017); the results showed good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), and a submerged breakwater as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.
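
A small sketch of the transect-level skill metric behind the 16% figure, under illustrative assumptions about the profiles: eroded volume per unit beach width is integrated along each cross-shore transect for observed and modeled post-storm profiles and compared.

```python
import numpy as np

def eroded_volume(x, z_pre, z_post):
    """Volume lost per unit beach width (m^3/m) along one cross-shore transect."""
    loss = np.clip(z_pre - z_post, 0.0, None)   # count only lowered (eroded) cells
    return np.trapz(loss, x)

x = np.linspace(0, 100, 101)                    # cross-shore distance, m (toy transect)
z_pre = 3.0 - 0.03 * x                          # pre-storm profile (illustrative)
z_obs = z_pre - 0.4 * np.exp(-((x - 30) / 10) ** 2)    # "surveyed" post-storm profile
z_mod = z_pre - 0.35 * np.exp(-((x - 32) / 11) ** 2)   # "modeled" post-storm profile

v_obs = eroded_volume(x, z_pre, z_obs)
v_mod = eroded_volume(x, z_pre, z_mod)
print(f"relative volume error: {abs(v_mod - v_obs) / v_obs:.1%}")
```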

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 117
25 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution

Authors: Abderrazak Bannari

Abstract:

Acid Mine Drainage (AMD) from mine wastes, and the contamination of soils and water with metals, are considered major environmental problems in mining areas. AMD is produced by interactions of water, air, and sulphidic mine wastes, resulting from a series of chemical and biochemical oxidation reactions of sulfide minerals, e.g., pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings, waste rock piles, and open pits. Soil and aquatic ecosystems can be contaminated and, consequently, human health and wildlife affected. Furthermore, secondary minerals, typically formed during the weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxides (goethite, etc.) and hydrated iron sulfates (jarosite, etc.). The objectives of this study focus on the detection and mapping of mine iron oxide pollution (MIOP) in the soil using Hyperion EO-1 (Earth Observing-1) hyperspectral data and the constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech (Morocco), was chosen as the study area. For 44 years (from 1938 to 1981) this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that the soils surrounding Kettara are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes were collected and located using high-accuracy GPS (≤ ±30 cm). Endmember spectra were then acquired over each sample using an Analytical Spectral Devices (ASD) spectrometer covering the spectral domain from 350 to 2500 nm. For each soil sample, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and band centers of the Hyperion sensor. Moreover, the MIOP content in each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple kriging in a GIS environment for validation purposes. The acquired Hyperion data were corrected for the spatial shift between the VNIR and SWIR detectors, striping, dead columns, noise, and gain and offset errors. They were then atmospherically corrected using the MODTRAN 4.2 radiative transfer code and transformed to surface reflectance, corrected for sensor smile (a 1-3 nm shift in the VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA over the entire spectral range (427-2355 nm) and validated against the ground truth map generated by kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.
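
For readers unfamiliar with constrained unmixing, the following Python sketch solves the core CLSMA problem for a single pixel: estimate endmember abundances that are non-negative and sum to one by constrained least squares. The random spectra are synthetic placeholders for real ASD/Hyperion endmembers, and the exact constraint handling in the study's implementation may differ.

```python
# Minimal sketch of fully constrained linear spectral unmixing: given an
# endmember matrix E (n_bands x n_endmembers) and a pixel spectrum y,
# find abundances a >= 0 with sum(a) = 1 minimizing ||E a - y||^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_bands, n_end = 150, 3
E = rng.random((n_bands, n_end))            # placeholder endmember spectra
a_true = np.array([0.6, 0.3, 0.1])          # synthetic "true" fractions
y = E @ a_true + 0.01 * rng.standard_normal(n_bands)

def residual(a):
    return float(np.sum((E @ a - y) ** 2))

cons = ({"type": "eq", "fun": lambda a: np.sum(a) - 1.0},)  # sum-to-one
bounds = [(0.0, 1.0)] * n_end                               # non-negativity
res = minimize(residual, x0=np.full(n_end, 1.0 / n_end),
               method="SLSQP", bounds=bounds, constraints=cons)
print("estimated abundances:", np.round(res.x, 3))  # ~ [0.6, 0.3, 0.1]
```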

Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing

Procedia PDF Downloads 229
24 Construction of an Assessment Tool for Early Childhood Development in the World of DiscoveryTM Curriculum

Authors: Divya Palaniappan

Abstract:

Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Many preschool assessment tools lack psychometric validation and were developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with the judgmental bias of observers. The World of Discovery TM curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, where concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft, and counting corner; hence children whose dominant intelligence is musical, bodily-kinesthetic, or logical-mathematical can grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas (physical, cognitive, language, social, and emotional development), which for the first time provides a multi-domain approach. The assessment tool is a 5-point rating scale measuring these developmental aspects: cognitive, language, physical, social, and emotional. Each activity strengthens one or more of the developmental aspects. During the cognitive corner, the child's perceptual reasoning, pre-math abilities, hand-eye co-ordination, and fine motor skills can be observed and evaluated. The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities, in real time and objectively. A pilot study of the tool was conducted with a sample of 100 children in the age group of 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. Norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development; a significant positive relationship between physical and cognitive development was likewise observed among children in a study conducted by Sibley and Etnier. 'Comprehension' ability was found to be greater than 'Reasoning' and pre-math abilities, consistent with the preoperational stage of Piaget's theory of cognitive development. The average scores of various parameters obtained through the tool corroborate psychological theories on child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
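
As a small numerical illustration of two analyses mentioned above (internal consistency and mean/SD norms), the Python sketch below computes Cronbach's alpha and z-score norms for a children-by-parameters score matrix. The random ratings are placeholders, not the pilot study's data, and the study's exact reliability statistic is an assumption on our part.

```python
# Minimal sketch, assuming scores are a (children x parameters) array of
# 5-point ratings: Cronbach's alpha for internal consistency, plus z-score
# norms from the group mean and standard deviation.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(100, 5)).astype(float)  # placeholder ratings

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = children, columns = rated developmental parameters."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

mu, sd = scores.mean(axis=0), scores.std(axis=0, ddof=1)
z_norms = (scores - mu) / sd  # each child's standing relative to the norm group
print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))
print("first child's z-scores:", np.round(z_norms[0], 2))
```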

Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum

Procedia PDF Downloads 363
23 Empowering Women Entrepreneurs in Rural India through Developing Online Communities of Purpose Using Social Technologies

Authors: Jayanta Basak, Somprakash Bandyopadhyay, Parama Bhaumik, Siuli Roy

Abstract:

To solve the life- and livelihood-related problems of socially and economically backward rural women in India, several Women's Self-Help Groups (WSHGs) have been formed in Indian villages. WSHGs are micro-communities (with 10 to 15 members) within a village community. They have been conceived not just to promote savings and provide credit, but also to act as a vehicle of change through the creation of women micro-entrepreneurs at the village level. However, in spite of the huge investment and the number of people involved in the whole process, success has been limited. Most of these entrepreneurial activities happen in small household workspaces where sales are limited to inconsistent and unpredictable local markets. As a result, these entrepreneurs are perennially trapped in a vicious cycle of low risk-taking ability, low investment capacity, low productivity, weak market linkages, and low revenue. Market separation, including customer-producer separation, is one of the key problems in this domain. Researchers suggest that there are four types of market separation: (i) spatial, (ii) financial, (iii) temporal, and (iv) informational; these in turn impact the nature of markets and marketing. In this context, a large group of intermediaries (the 'middlemen') plays an important role in reducing the factors that separate markets by utilizing the resources of rural entrepreneurs and their products, thus accelerating market development. The rural entrepreneurs are heavily dependent on these middlemen for marketing their products, and the middlemen exploit them by creating a huge informational separation between the rural producers and the end consumers, thereby hiding the profit margins. The objective of this study is to develop transparent, online communities of purpose among rural and urban entrepreneurs, using the internet and web 2.0 technologies, in order to decrease market separation and improve mutual awareness of available and potential products and market demands. Communities of purpose are groups of people who have an ability to influence, can share knowledge and learn from others, and are committed to achieving a common purpose. In this study, a cluster of SHG women located in the village of Kandi, West Bengal, India, was studied closely for six months. These women are primarily engaged in producing garments, soft toys, fabric painting on clothes, etc. They were equipped with internet-enabled smartphones on which they can use chat applications in the local language and common social networking websites like Facebook and Instagram. A few handicraft experts and micro-entrepreneurs from the city (the 'seed') were included in their mobile messaging app group, enabling the creation of a 'community of purpose' to share thoughts and ideas on product designs, market trends, and practices, and thus decrease the rural-urban market separation. After six months of regular group interaction among these rural-urban community members, it was observed that the SHG women are now empowered to share their product images and design ideas, and to showcase and promote their products in the global marketplace using common social networking websites, through which they can also enhance and augment their community of purpose.

Keywords: communities of purpose, market separation, self-help group, social technologies

Procedia PDF Downloads 257
22 Chain Networks on Internationalization of SMEs: Co-Opetition Strategies in Agrifood Sector

Authors: Emilio Galdeano-Gómez, Juan C. Pérez-Mesa, Laura Piedra-Muñoz, María C. García-Barranco, Jesús Hernández-Rubio

Abstract:

The situation in which firms engage in simultaneous cooperation and competition with each other is a phenomenon known as co-opetition, a scenario that has received increasing attention in business economics and management analyses. In the domain of supply chain networks, and for small and medium-sized enterprises (SMEs), these strategies are of particular relevance given the complex environment of globalization and competition in open markets. Such firms face greater challenges regarding technology and access to specific resources due to their limited capabilities and limited market presence. Consequently, alliances and collaborations with both buyers and suppliers prove to be key elements in overcoming these constraints. However, rivalry and competition are also major factors in successful internationalization processes, as they drive firms to attain a greater degree of specialization and to improve efficiency, for example by enabling them to allocate scarce resources optimally and by providing incentives for innovation and entrepreneurship. The present work aims to contribute to the literature on SMEs' internationalization strategies. The sample consists of panel data on marketing firms from the Andalusian food sector, and a multivariate regression analysis is developed, measuring variables of co-opetition and international activity. The hierarchical regression equations method is followed, resulting in three estimated models: the first excludes the variables indicative of channel type, while the latter two include the international retail chain and wholesaler variables. The findings show that the combination of several factors leads to a complex scenario of inter-organizational relationships of cooperation and competition. In supply chain management analyses, these relationships tend to be classified as either buyer-supplier relationships (the vertical level) or supplier-supplier relationships (the horizontal level). Several buyers and suppliers tend to participate in supply chain networks in which the form of governance (hierarchical or non-hierarchical) influences cooperation and competition strategies. For instance, due to their market power and/or their closeness to the end consumer, some buyers (e.g., large retailers in food markets) can exert an influence on the selection of, and interaction among, several of their intermediate suppliers, thus endowing certain networks in the supply chain with greater stability. This hierarchical influence may in turn allow these suppliers to develop their capabilities (e.g., specialization) to a greater extent. On the other hand, for suppliers outside these networks, this environment of hierarchy, characterized by a 'hub firm' or 'channel master', may provide an incentive for developing their own co-opetition relationships. The results show that the analyzed firms have experienced considerable growth in sales to new foreign markets, mainly in Europe, dealing with large retail chains and wholesalers as their main buyers. This supply industry is predominantly made up of numerous SMEs, which has implied a certain disadvantage when dealing with buyers, as negotiations have traditionally been held on an individual basis and in the face of high competition among suppliers. Over recent years, however, cooperation among these marketing firms has become more common, for example regarding R&D, promotion, and the scheduling of production and sales.
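
To illustrate the hierarchical (nested) regression design described above, here is a Python sketch with statsmodels: three OLS models are estimated, with channel-type indicators entered in the later steps and incremental fit compared via R². The variable names and synthetic data are hypothetical stand-ins for the study's co-opetition and internationalization measures.

```python
# Minimal sketch of hierarchical regression: compare nested models as
# blocks of predictors are added. All variables below are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "intl_sales": rng.normal(size=n),        # international activity measure
    "cooperation": rng.normal(size=n),       # co-opetition measures
    "competition": rng.normal(size=n),
    "retail_chain": rng.integers(0, 2, n),   # channel-type indicators
    "wholesaler": rng.integers(0, 2, n),
})

m1 = smf.ols("intl_sales ~ cooperation + competition", df).fit()
m2 = smf.ols("intl_sales ~ cooperation + competition + retail_chain", df).fit()
m3 = smf.ols("intl_sales ~ cooperation + competition + retail_chain"
             " + wholesaler", df).fit()

for name, m in [("M1", m1), ("M2", m2), ("M3", m3)]:
    print(name, "R2 =", round(m.rsquared, 3))  # incremental fit across blocks
```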

Keywords: co-opetition networks, international supply chain, marketing agrifood firms, SMEs strategies

Procedia PDF Downloads 80
21 Facilitating Primary Care Practitioners to Improve Outcomes for People With Oropharyngeal Dysphagia Living in the Community: An Ongoing Realist Review

Authors: Caroline Smith, Professor Debi Bhattacharya, Sion Scott

Abstract:

Introduction: Oropharyngeal dysphagia (OD) affects around 15% of older people; however, it often goes unrecognised and underdiagnosed until patients are hospitalised. There is a need for primary care healthcare practitioners (HCPs) to assume a proactive role in identifying and managing OD to prevent adverse outcomes such as aspiration pneumonia. Understanding the determinants of primary care HCPs undertaking this new behaviour provides the targets for intervention. This realist review, underpinned by the Theoretical Domains Framework (TDF), aims to synthesise the relevant literature and develop programme theories to understand what interventions work, how they work, and under what circumstances, to facilitate HCPs to prevent harm from OD. Combining realist methodology with behavioural science will permit the conceptualisation of intervention components as theoretical behavioural constructs, thus informing the design of a future behaviour change intervention. Furthermore, through the TDF's linkage to a taxonomy of behaviour change techniques, we will identify corresponding behaviour change techniques to include in this intervention. Methods and analysis: We are following the five steps for undertaking a realist review: 1) clarify the scope, 2) search the literature, 3) appraise and extract data, 4) synthesise evidence, and 5) evaluate. We have searched the Medline, Google Scholar, PubMed, EMBASE, CINAHL, AMED, Scopus, and PsycINFO databases. We are obtaining additional evidence through grey literature, snowball sampling, lateral searching, and consultation with the stakeholder group. Literature is being screened, evaluated, and synthesised in Excel and NVivo, and will be appraised in relation to its relevance and rigour. Data will be extracted and synthesised according to their relation to the initial programme theories (IPTs). The IPTs were constructed after the preliminary literature search, informed by the TDF and with input from a stakeholder group of patient and public involvement advisors, general practitioners, speech and language therapists, geriatricians, and pharmacists. We will follow the Realist and Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) quality and publication standards in reporting the study results. Results: In this ongoing review, our search has identified 1417 manuscripts, with approximately 20% progressing to full-text screening. We inductively generated 10 IPTs hypothesising that practitioners require: the knowledge to spot the signs and symptoms of OD; the skills to provide initial advice and support; and access to resources in their working environment to support them in conducting these new behaviours. We mapped the 10 IPTs to 8 TDF domains and then deductively generated a further 12 IPTs, using the domain definitions, to cover the remaining 6 TDF domains. The deductively generated IPTs broadened our thinking to consider domains such as 'Emotion', 'Optimism', and 'Social Influence', e.g., if practitioners perceive that patients, carers, and relatives expect initial advice and support, then they will be more likely to provide it, because they will feel obligated to do so. After prioritisation with stakeholders using a modified nominal group technique, a maximum of 10 IPTs will progress to testing against the literature.

Keywords: behaviour change, deglutition disorders, primary healthcare, realist review

Procedia PDF Downloads 86
20 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can easily be combined with electricity from different sources. Under the influence of an electric current, water molecules are split into oxygen and hydrogen. Producing hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittency: energy is stored when production exceeds demand and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable option for water electrolysis. Modeling and simulation are effective tools for saving time, money, and effort during the optimization of operating conditions and the investigation of cell design, and they become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models can be used to understand the underlying mechanisms and to control the system and detect flaws in it. Several modeling methods have been proposed; they can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former is less user-friendly and difficult to update, as it represents the system through ordinary or partial differential equations. The latter is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, making it suitable for multiphysics systems. Among graphical modeling methods, the bond graph is receiving increasing attention for being domain-independent and for relying on the energy exchange between the components of the system. At present, few studies have investigated the modeling of AEM systems; a mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and equation-based approaches, and the polarization curves obtained at different operating conditions by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both the HER and the OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various operating temperatures illustrated that voltage varies with temperature, owing to the membrane technology. Simulating the polarization curve allows designs to be tested virtually, reducing the cost and time of experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
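
To show what the equation-based approach looks like in miniature, the following Python sketch builds a polarization curve from a reversible voltage, Tafel-type activation overpotentials for the HER and OER, and an ohmic term. The parameter values are generic illustrative choices, not fitted AEM data from the study.

```python
# Minimal sketch of an equation-based polarization curve:
# V_cell = E_rev + eta_act(HER) + eta_act(OER) + j * R_ohm.
import numpy as np

F, R, T = 96485.0, 8.314, 333.15        # Faraday const., gas const., ~60 C
E_rev = 1.23                            # reversible voltage (V), approx.
j = np.linspace(0.01, 2.0, 50)          # current density (A/cm^2)

def tafel(j, j0, alpha):
    """Activation overpotential from the Tafel approximation."""
    return (R * T / (alpha * F)) * np.log(j / j0)

eta_oer = tafel(j, j0=1e-7, alpha=0.5)  # OER: sluggish kinetics dominate
eta_her = tafel(j, j0=1e-3, alpha=0.5)  # HER: faster kinetics
R_ohm = 0.15                            # area-specific resistance (ohm cm^2)

V_cell = E_rev + eta_oer + eta_her + j * R_ohm
print(f"V at 1 A/cm^2: {np.interp(1.0, j, V_cell):.2f} V")
```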

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 93
19 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator

Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra

Abstract:

We have developed a CFD-coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations are solved implicitly for mass, momentum, and energy transfer, along with aerosol dynamics, and the computationally efficient framework can simulate the temporal behavior of the total number concentration and the number size distribution. The formulation uniquely couples a standard k-epsilon scheme and a boundary layer model with detailed aerosol dynamics through the residence time. The model uses measured temperatures (of the wire surface and the axial/radial surroundings) and wire compositional data, apart from other usual inputs, for its simulations. The model predictions show that bulk fluid motion and the local heat distribution can significantly affect aerosol behavior when the buoyancy effect in momentum transfer is considered; buoyancy-generated turbulence was found to affect parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. A condensation particle counter (CPC) and a scanning mobility particle sizer (SMPS) were used to measure the total number concentration and the number size distribution at the outlet of the reactor cell during these experiments, and the model-predicted results were found to be in reasonable agreement with the observed values. The developed model is fast (fully implicit) and numerically stable; it can be used specifically for the behavior of aerosol particles generated by the glowing wire technique and, in general, for other similar large-scale domains. Incorporating CFD into an aerosol microphysics framework provides a realistic platform for studying natural-convection-driven systems and applications. Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) were coupled with the Navier-Stokes equations, modified to include the buoyancy-coupled k-epsilon turbulence model, and the coupled flow-aerosol dynamics equations were solved numerically in an implicit scheme. Wire composition and temperatures (wire surface and cell domain) were measured and used as inputs for the model simulations. The simulations showed a significant effect of fluid properties on the dynamics of aerosol particles, and the role of buoyancy was highlighted by the observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against the measured temporal evolution of the total number concentration and the size distribution at the outlet of the hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. The steady-state number size distribution matched very well for sub-10 nm particle diameters, while reasonable differences were noticed for larger size ranges. Although tuned specifically for the present context (i.e., aerosol generation from a hot wire generator), the model can also be used for diverse applications, e.g., the emission of particles from hot zones (chimneys, exhausts), fires, and atmospheric cloud dynamics.
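
As a toy version of one aerosol dynamics sub-module, the Python sketch below integrates the total number concentration of a well-mixed, monodisperse population losing particles to coagulation and wall deposition. The rate constants are generic order-of-magnitude values, not the paper's, and the full model resolves the size distribution and the flow field rather than a single ODE.

```python
# Minimal sketch: dN/dt = -K*N^2 - beta*N, i.e., coagulation plus wall
# deposition for a well-mixed monodisperse aerosol. Values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

K = 5e-16      # coagulation coefficient (m^3/s), Brownian-kernel order
beta = 1e-4    # wall deposition rate (1/s)
N0 = 1e14      # initial number concentration (1/m^3)

def dNdt(t, N):
    return [-K * N[0] ** 2 - beta * N[0]]

sol = solve_ivp(dNdt, (0.0, 600.0), [N0], method="LSODA",
                t_eval=np.linspace(0, 600, 7))
for t, N in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.0f} s, N = {N:.2e} m^-3")
```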

Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics

Procedia PDF Downloads 143
18 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program

Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner

Abstract:

Introduction: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, distribution, and application of infection testing. Despite the impressive mobilization of resources, however, individuals were severely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial to understanding patients' illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Methods: 18 patients in a COVID-19 remote patient monitoring program (Precision Recovery, within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the pandemic in NYC. Patients were asked about their experiences navigating COVID-19 diagnoses, the healthcare system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo 12. References for the domain 'testing' were then extracted and analyzed for themes and statistical patterns. Major findings: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. One participant shared that, because she never tested positive, she was shielded from her anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Conclusion: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients' illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect it had on the perception of illness by both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result, but who are still presumed to have had COVID-19 in the first wave, must continue to live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to a lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.

Keywords: accessibility, COVID-19, recovery, testing

Procedia PDF Downloads 196
17 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades and has recently been improved to run large cases on the order of billions of solution points. Running such big simulations requires a considerable amount of RAM, so the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O for Lustre, such that each MPI rank acquires its information from the file in parallel. For GPFS, on each computational node a single MPI rank reads data from the file, which is generated specifically for that node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, to calculate the solution at every time step; for these, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
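
To make the Lustre read pattern concrete, here is a minimal Python sketch of the parallel MPI I/O idea using mpi4py (the DSEM code itself is not Python): every rank opens the shared startup file collectively and reads its own slice at a computed byte offset. The file name, layout, and per-rank record size are assumptions for illustration.

```python
# Minimal sketch of each-rank-reads-its-own-slice parallel MPI I/O,
# assuming a flat binary pre-file of fixed-size int32 records per rank.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

INTS_PER_RANK = 1024                      # hypothetical per-rank record size
buf = np.empty(INTS_PER_RANK, dtype=np.int32)
offset = rank * buf.nbytes                # byte offset of this rank's slice

fh = MPI.File.Open(comm, "prefile.bin", MPI.MODE_RDONLY)
fh.Read_at_all(offset, buf)               # collective read, one slice per rank
fh.Close()

print(f"rank {rank}: first element = {buf[0]}")
# Run with, e.g.: mpiexec -n 8 python read_prefile.py
```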

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 135
16 Experimental Characterisation of Composite Panels for Railway Flooring

Authors: F. Pedro, S. Dias, A. Tadeu, J. António, Ó. López, A. Coelho

Abstract:

Railway transportation is considered the most economical and sustainable way to travel; however, future mobility brings important challenges to railway operators. The main target is to develop solutions that stimulate sustainable mobility. The research and innovation goals for this domain are efficient solutions ensuring an increased level of safety and reliability, improved resource efficiency, high availability of the means (the train), and passenger satisfaction with the level of travel comfort. These requirements are in line with the European strategic agenda for the 2020 rail sector, promoted by the European Rail Research Advisory Council (ERRAC). All these aspects involve redesigning current equipment and, in particular, the interior of the carriages. Recent studies have shown that two of the most important requirements for passengers are reasonable ticket prices and comfortable interiors. Passengers tend to use their travel time to rest or to work, so train interiors and their systems need to incorporate features that meet these requirements. Among the various systems that make up train interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort. It is also one of the systems that takes the longest to install on the train, contributes substantially to the weight (mass) of all interior systems, and has a strong impact on manufacturing costs. In the development phase, the design of a railway floor is usually carried out using design software that allows several solutions to be drawn and calculated in a short period of time. After the best solution is obtained, considering the previously defined goals, experimental data are always necessary and required. This experimental phase is so significant that its outcome can prompt a revision of the designed solution. This paper presents the methodology and some of the results of an experimental characterisation of composite panels for railway application. The mechanical tests were performed on unaged specimens and on specimens that had undergone some type of aging, i.e., heat, cold, and humidity cycles or freezing/thawing cycles. This conditioning aims to simulate not only the effect of time, but also the impact of severe environmental conditions. Both full solutions and separate components/materials were tested. For the full solution (the panel), the tests were: four-point bending tests, tensile shear strength, tensile strength perpendicular to the plane, determination of the spreading of water, and impact tests. For the individual characterisation of the components, more specifically the covering, the following tests were performed: determination of the tensile stress-strain properties, determination of flexibility, determination of tear strength, peel test, tensile shear strength test, adhesion resistance test, and dimensional stability. The main conclusion was that experimental characterisation contributes greatly to understanding the behaviour of the materials, both individually and assembled; this knowledge contributes to increasing the quality of, and improving, premium solutions. This research work was framed within the POCI-01-0247-FEDER-003474 (coMMUTe) Project, funded by Portugal 2020 through COMPETE 2020.
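
For the four-point bending test mentioned above, the following Python sketch shows the standard data reduction from peak load to flexural stress, under the assumption of the common quarter-point configuration (load span equal to half the support span), for which the maximum stress is 3FL/(4bh²). The specimen numbers are illustrative, not the tested panels' values.

```python
# Minimal sketch of four-point bending data reduction, assuming a
# quarter-point loading setup (load span = support span / 2).
def flexural_stress_quarter_point(F, L, b, h):
    """F: total load (N), L: support span (m), b: width (m), h: thickness (m).
    Returns the maximum flexural stress (Pa) between the loading noses."""
    return 3.0 * F * L / (4.0 * b * h ** 2)

F_max = 2500.0     # peak load at failure (N), illustrative
L = 0.40           # support span (m)
b, h = 0.10, 0.02  # specimen width and thickness (m)

sigma = flexural_stress_quarter_point(F_max, L, b, h)
print(f"flexural strength ~ {sigma / 1e6:.1f} MPa")
```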

Keywords: durability, experimental characterization, mechanical tests, railway flooring system

Procedia PDF Downloads 155
15 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 34
14 Effects of School Culture and Curriculum on Gifted Adolescent Moral, Social, and Emotional Development: A Longitudinal Study of Urban Charter Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Pat J. Austin, Marc P. Bonis, Richard B. Speaker, Jr.

Abstract:

Using two psychometric instruments, this study examined the social and emotional intelligence and moral judgment levels of more than 300 gifted and talented high school students enrolled in arts-integrated, academic acceleration, and creative arts charter schools in an ethnically diverse large city in the southeastern United States. Gifted and talented individuals possess distinguishable characteristics; these frequently appear as strengths, but serious problems often accompany them. Although many gifted adolescents thrive in their environments, some struggle in their school and community due to emotional intensity, motivation and achievement issues, lack of peers and isolation, identification problems, sensitivity to expectations and feelings, perfectionism, and other difficulties. These gifted students endure and survive in school rather than flourish. Gifted adolescents face special intrapersonal, interpersonal, and environmental problems, and they experience greater levels of stress, disaffection, and isolation than non-gifted individuals due to their advanced cognitive abilities. It is therefore important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of these adolescents. Numerous studies have researched moral, social, and emotional development from cognitive-developmental, psychoanalytic, and behavioral-learning perspectives; however, in almost all cases these three facets have been studied separately, leading to many divergent theories. Additionally, various frameworks and models purporting to encourage the different socio-affective branches of development have been debated in curriculum theory, yet research is inconclusive on the effectiveness of these programs. Most often studied is the socio-affective domain, which includes the development and regulation of emotions; empathy development; interpersonal relations and social behaviors; personal and gender identity construction; and moral development, thinking, and judgment. Examining development in these domains can provide insight into why some gifted and talented adolescents are not always successful in adulthood despite advanced IQ scores: in particular, whether the emotional, social, and moral capabilities of gifted and talented individuals are as advanced as their intellectual abilities, and how these capabilities relate to one another. This mixed-methods longitudinal study examined students in urban gifted and talented charter schools for (1) socio-affective development levels and (2) whether a particular environment encourages developmental growth. The research questions guiding the study were: (1) How do academically and artistically gifted 10th and 11th grade students perform on psychological scales of social and emotional intelligence and moral judgment? Do they differ from the normative sample? Do gender differences exist among gifted students? (2) Do adolescents who attend distinctive gifted charter schools differ in their developmental profiles? Students' performances on the psychometric instruments were compared over time and by program type. Assessing moral judgment (DIT-2) and socio-emotional intelligence (BarOn EQ-i:YV), participants took pre-, mid-, and post-tests during one academic school year. Quantitative differences in growth on these psychological scales (individual and school-wide) were examined. If a school showed change, qualitative artifacts (culture, curricula, instructional methodology, stakeholder interviews) provided insight into environmental correlations.

Keywords: gifted and talented programs, moral judgment, social and emotional intelligence, socio-affective education

Procedia PDF Downloads 195
13 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
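
As a small concrete companion to the data representation and sampling steps, the following Python sketch builds a transaction graph with networkx (transactions as nodes, edges encoding temporal order, as described above) from a uniformly sampled subset of synthetic transaction records. The records are placeholders, not chain data, and real pipelines would stream from an Ethereum node or archive.

```python
# Minimal sketch: probabilistic (uniform) sampling of transactions, then a
# graph with transactions as nodes and temporal relationships as edges.
import random
import networkx as nx

random.seed(0)
# Synthetic (block_number, from_addr, to_addr, value) records.
txs = [(n, f"0x{random.randrange(16**4):04x}",
        f"0x{random.randrange(16**4):04x}", random.random())
       for n in range(10_000)]

SAMPLE_RATE = 0.10  # keep a representative 10% subset
sampled = sorted((t for t in txs if random.random() < SAMPLE_RATE),
                 key=lambda t: t[0])

G = nx.DiGraph()
for i, (block, src, dst, value) in enumerate(sampled):
    G.add_node(i, block=block, sender=src, receiver=dst, value=value)
    if i > 0:
        G.add_edge(i - 1, i)  # temporal link between successive sampled txs

print(f"sampled {len(sampled)} txs -> {G.number_of_nodes()} nodes, "
      f"{G.number_of_edges()} edges")
```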

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 78
12 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competition between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak, an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide, population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating and proliferating cells of various DNA contents, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient. We performed whole-genome sequencing (WGS) of four surgical specimens collected during the first and second surgeries and used HATCHET to quantify the tumor's clonal composition and how it changed between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1 post-contrast and T2 FLAIR scans acquired after the first surgery informed tumor cell densities per voxel, while magnetic resonance elastography and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM's tumor cell density and ploidy composition before the second surgery. The results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49) but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell-fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
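
To convey the flavor of the voxelized competition model, the following Python sketch evolves two subpopulations with different glucose-dependent proliferation and death rates on a one-dimensional line of voxels while glucose diffuses and is consumed. The two rate pairs reuse the values reported above; every other number, and the simple functional forms, are illustrative assumptions (the full S3MB model tracks many more state variables in 3-D).

```python
# Minimal 1-D sketch of resource-limited competition between a low-ploidy
# and a high-ploidy clone. Rates (prolif, death) reuse the reported fits;
# diffusion, consumption, and coupling terms are illustrative placeholders.
import numpy as np

n_vox, dt, steps = 50, 0.1, 200
D_g = 0.2                                    # glucose diffusion (illustrative)
glucose = np.ones(n_vox)
low = np.zeros(n_vox);  low[20:30] = 0.05    # low-ploidy clone density
high = np.zeros(n_vox); high[20:30] = 0.05   # high-ploidy clone density

rates = {"low": (0.49, 1.42), "high": (0.70, 0.47)}  # (prolif, death)

for _ in range(steps):
    # Glucose: explicit diffusion (periodic Laplacian) minus consumption.
    lap = np.roll(glucose, 1) - 2 * glucose + np.roll(glucose, -1)
    glucose += dt * (D_g * lap - 0.5 * glucose * (low + high))
    glucose = np.clip(glucose, 0, None)
    for name, pop in (("low", low), ("high", high)):
        prolif, death = rates[name]
        cap = 1.0 - (low + high)             # shared carrying capacity
        pop += dt * pop * (prolif * glucose * cap - death * (1 - glucose))
        np.clip(pop, 0, None, out=pop)

print(f"final totals: low = {low.sum():.2f}, high = {high.sum():.2f}")
```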

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 75