Search results for: conceptual domain of law
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2906

26 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution

Authors: Abderrazak Bannari

Abstract:

Acid Mine Drainage (AMD) from mine wastes and the contamination of soils and water with metals are considered a major environmental problem in mining areas. AMD is produced by interactions of water, air, and sulphidic mine wastes. This environmental problem results from a series of chemical and biochemical oxidation reactions of sulfide minerals, e.g. pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings, waste rock piles, and open pits. Soil and aquatic ecosystems can be contaminated and, consequently, human health and wildlife affected. Furthermore, secondary minerals, typically formed during weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxides (goethite, etc.) and hydrated iron sulfates (jarosite, etc.). The objectives of this study focus on the detection and mapping of mine iron oxide pollution (MIOP) in the soil using Hyperion EO-1 (Earth Observing-1) hyperspectral data and a constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech city (Morocco), was chosen as the study area. For 44 years (from 1938 to 1981), this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that the soils surrounding Kettara are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes were collected and located using accurate GPS (≤ ±30 cm). Then, endmember spectra were acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm.
Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and band centers of the Hyperion sensor. Moreover, the MIOP content of each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple kriging in a GIS environment for validation purposes. The acquired Hyperion data were corrected for the spatial shift between the VNIR and SWIR detectors, striping, dead columns, noise, and gain and offset errors. They were then atmospherically corrected using the MODTRAN 4.2 radiative transfer code and transformed to surface reflectance, corrected for sensor smile (a 1-3 nm shift in the VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA over the entire spectral range (427-2355 nm) and validated by reference to the ground truth map generated by kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.
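The core of constrained linear spectral mixture analysis is solving, per pixel, for abundance fractions that obey the sum-to-one and non-negativity constraints. As a minimal sketch (not the authors' implementation), the two-endmember case has a closed-form solution; all spectra below are hypothetical reflectance values, not Hyperion or ASD data:

```python
# Illustrative sketch: fully constrained linear unmixing for the
# two-endmember case (target mineral vs. background soil).
# Endmember spectra and band values below are hypothetical.

def clsma_fraction(pixel, em_target, em_background):
    """Return the abundance fraction f of em_target such that
    pixel ~= f*em_target + (1-f)*em_background; the sum-to-one
    constraint is built in and f is clipped to [0, 1] to enforce
    non-negativity of both fractions."""
    d = [t - b for t, b in zip(em_target, em_background)]  # e1 - e2
    r = [p - b for p, b in zip(pixel, em_background)]      # x - e2
    num = sum(ri * di for ri, di in zip(r, d))
    den = sum(di * di for di in d)
    f = num / den                      # least-squares fraction
    return min(1.0, max(0.0, f))       # clip to the feasible range

# Hypothetical reflectance values in a few Hyperion-like bands
iron_oxide = [0.12, 0.30, 0.45, 0.50]
clean_soil = [0.25, 0.28, 0.30, 0.31]
mixed_pixel = [0.5 * a + 0.5 * b for a, b in zip(iron_oxide, clean_soil)]

print(round(clsma_fraction(mixed_pixel, iron_oxide, clean_soil), 3))  # prints 0.5
```

With more endmembers, the same constrained least-squares problem is typically solved iteratively rather than in closed form.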

Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing

Procedia PDF Downloads 228
25 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum

Authors: Divya Palaniappan

Abstract:

Early childhood assessment tools must measure the quality of a curriculum and its appropriateness to the culture and age of the children. Existing preschool assessment tools often lack psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Preschool assessment tools in India are predominantly informal and fraught with the judgmental bias of observers. The World of Discovery™ curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, in which concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft, and a counting corner, so children whose dominant intelligence is musical, bodily-kinesthetic, or logical-mathematical can grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas: physical, cognitive, language, social, and emotional development, which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale measuring these developmental aspects: cognitive, language, physical, social, and emotional. Each activity strengthens one or more of the developmental aspects. During the cognitive corner, the child's perceptual reasoning, pre-math abilities, hand-eye coordination, and fine motor skills can be observed and evaluated.
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities, in real time and objectively. A pilot study of the tool was conducted with a sample of 100 children in the age group 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. Norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development. A significant positive relationship between physical and cognitive development has also been observed among children in a study conducted by Sibley and Etnier. In children, 'comprehension' ability was found to be greater than 'reasoning' and pre-math abilities, as indicated by the preoperational stage of Piaget's theory of cognitive development. The average scores of various parameters obtained through the tool corroborate the psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
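Deriving norms from the mean and standard deviation of observed scores, as described above, can be sketched as follows; the ratings are hypothetical 5-point scores for a single developmental parameter, not the pilot data:

```python
# Illustrative sketch: norms from observed scores (mean and SD of the
# pilot data), then standardizing an individual child's score against
# them. The ratings below are hypothetical.
from math import sqrt

def norm_stats(scores):
    """Return (mean, sd) of the observed scores (population SD)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = sqrt(sum((s - mean) ** 2 for s in scores) / n)
    return mean, sd

def z_score(score, mean, sd):
    """Standardize a child's score against the group norms."""
    return (score - mean) / sd

cognitive = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]  # hypothetical ratings
mean, sd = norm_stats(cognitive)
print(round(mean, 2), round(sd, 2))  # prints 3.3 0.9
```

A child scoring well above the group mean (high z-score) would be flagged as a high performer on that parameter.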

Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum

Procedia PDF Downloads 362
24 Empowering Women Entrepreneurs in Rural India through Developing Online Communities of Purpose Using Social Technologies

Authors: Jayanta Basak, Somprakash Bandyopadhyay, Parama Bhaumik, Siuli Roy

Abstract:

To solve the life- and livelihood-related problems of socially and economically backward rural women in India, several Women's Self-Help Groups (WSHGs) have been formed in Indian villages. WSHGs are micro-communities (with 10 to 15 members) within a village community. They have been conceived not just to promote savings and provide credit, but also to act as a vehicle of change through the creation of women micro-entrepreneurs at the village level. However, in spite of the huge investment and the number of people involved in the whole process, success is still limited. Most of these entrepreneurial activities happen in small household workspaces where sales are limited to inconsistent and unpredictable local markets. As a result, these entrepreneurs are perennially trapped in a vicious cycle of low risk-taking ability, low investment capacity, low productivity, weak market linkages, and low revenue. Market separation, including customer-producer separation, is one of the key problems in this domain. Researchers suggest that there are four types of market separation: (i) spatial, (ii) financial, (iii) temporal, and (iv) informational, which in turn impact the nature of markets and marketing. In this context, a large group of intermediaries (the 'middlemen') plays an important role in reducing the factors that separate markets by utilizing the resources of rural entrepreneurs and their products, and thus accelerates market development. The rural entrepreneurs are heavily dependent on these middlemen for marketing their products, and the middlemen exploit them by creating a huge informational separation between the rural producers and the end consumers in the market, thus hiding the profit margins.
The objective of this study is to develop transparent online communities of purpose among rural and urban entrepreneurs, using the Internet and Web 2.0 technologies, in order to decrease market separation and improve mutual awareness of available and potential products and market demands. Communities of purpose are groups of people who have an ability to influence, can share knowledge and learn from others, and are committed to achieving a common purpose. In this study, a cluster of SHG women located in the village of Kandi, West Bengal, India, was studied closely for six months. These women are primarily engaged in producing garments, soft toys, fabric painting on clothes, etc. They were equipped with internet-enabled smartphones on which they can use chat applications in the local language and common social networking websites like Facebook, Instagram, etc. A few handicraft experts and micro-entrepreneurs from the city (the 'seed') were included in their mobile messaging app group, enabling the creation of a 'community of purpose' in order to share thoughts and ideas on product designs, market trends, and practices, and thus decrease the rural-urban market separation. After six months of regular group interaction among these rural-urban community members, it was observed that the SHG women are now empowered to share their product images and design ideas and to showcase and promote their products in the global marketplace using common social networking websites, through which they can also enhance and augment their community of purpose.

Keywords: communities of purpose, market separation, self-help group, social technologies

Procedia PDF Downloads 255
23 Chain Networks on Internationalization of SMEs: Co-Opetition Strategies in Agrifood Sector

Authors: Emilio Galdeano-Gómez, Juan C. Pérez-Mesa, Laura Piedra-Muñoz, María C. García-Barranco, Jesús Hernández-Rubio

Abstract:

The situation in which firms engage in simultaneous cooperation and competition with each other is a phenomenon known as co-opetition. This scenario has received increasing attention in business economics and management analyses. In the domain of supply chain networks, and for small and medium-sized enterprises (SMEs), these strategies are of particular relevance given the complex environment of globalization and competition in open markets. These firms face greater challenges regarding technology and access to specific resources due to their limited capabilities and limited market presence. Consequently, alliances and collaborations with both buyers and suppliers prove to be key elements in overcoming these constraints. However, rivalry and competition are also regarded as major factors in successful internationalization processes, as they drive firms to attain a greater degree of specialization and to improve efficiency, for example by enabling them to allocate scarce resources optimally and by providing incentives for innovation and entrepreneurship. The present work aims to contribute to the literature on SMEs' internationalization strategies. The sample consists of panel data on marketing firms from the Andalusian food sector, and a multivariate regression analysis is developed, measuring variables of co-opetition and international activity. The hierarchical regression equations method is followed, resulting in three estimated models: the first excludes the variables indicating channel type, while the latter two include the international retail chain and wholesaler variables. The findings show that the combination of several factors leads to a complex scenario of inter-organizational relationships of cooperation and competition. In supply chain management analyses, these relationships tend to be classified as either buyer-supplier (vertical) or supplier-supplier (horizontal) relationships.
Several buyers and suppliers participate in supply chain networks, in which the form of governance (hierarchical or non-hierarchical) influences cooperation and competition strategies. For instance, due to their market power and/or their closeness to the end consumer, some buyers (e.g. large retailers in food markets) can exert an influence on the selection and interaction of several of their intermediate suppliers, thus endowing certain networks in the supply chain with greater stability. This hierarchical influence may in turn allow these suppliers to develop their capabilities (e.g. specialization) to a greater extent. On the other hand, for those suppliers that are outside these networks, this environment of hierarchy, characterized by a “hub firm” or “channel master”, may provide an incentive for developing their co-opetition relationships. The results show that the analyzed firms have experienced considerable growth in sales to new foreign markets, mainly in Europe, dealing with large retail chains and wholesalers as their main buyers. This supply industry is predominantly made up of numerous SMEs, which has implied a certain disadvantage when dealing with the buyers, as negotiations have traditionally been held on an individual basis and in the face of high competition among suppliers. Over recent years, however, cooperation among these marketing firms has become more common, for example regarding R&D, promotion, and the scheduling of production and sales.
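The hierarchical regression equations method mentioned above enters predictors in blocks and compares the explained variance of the nested models. A minimal sketch of the idea, using synthetic data and hypothetical variable names (not the Andalusian panel data), is:

```python
# Illustrative sketch of hierarchical (blockwise) regression: fit a base
# model, then add a further explanatory variable (here a hypothetical
# "channel" dummy) and inspect the gain in R^2. All data are synthetic.

def ols_r2(y, columns):
    """Fit y on an intercept plus the given predictor columns by ordinary
    least squares (normal equations, Gaussian elimination) and return R^2."""
    n = len(y)
    X = [[1.0] + [col[i] for col in columns] for i in range(n)]
    k = len(X[0])
    # Normal equations X'X b = X'y.
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    # Gaussian elimination with partial pivoting.
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            m = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= m * A[p][c]
            b[r] -= m * b[p]
    coef = [0.0] * k
    for p in range(k - 1, -1, -1):
        coef[p] = (b[p] - sum(A[p][c] * coef[c]
                              for c in range(p + 1, k))) / A[p][p]
    yhat = [sum(X[i][c] * coef[c] for c in range(k)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic firm data: exports, a cooperation index, a retail-chain dummy.
coop = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
channel = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
exports = [1.2, 3.1, 3.0, 5.2, 4.9, 7.0, 6.8, 9.1]

r2_base = ols_r2(exports, [coop])            # model 1: co-opetition only
r2_full = ols_r2(exports, [coop, channel])   # model 2: adds channel type
print(r2_base, r2_full)
```

The increment in R^2 between the nested models indicates how much the channel-type block adds beyond the base predictors.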

Keywords: co-opetition networks, international supply chain, marketing agrifood firms, SMEs strategies

Procedia PDF Downloads 79
22 Facilitating Primary Care Practitioners to Improve Outcomes for People With Oropharyngeal Dysphagia Living in the Community: An Ongoing Realist Review

Authors: Caroline Smith, Professor Debi Bhattacharya, Sion Scott

Abstract:

Introduction: Oropharyngeal dysphagia (OD) affects around 15% of older people; however, it is often unrecognised and underdiagnosed until they are hospitalised. There is a need for primary care healthcare practitioners (HCPs) to assume a proactive role in identifying and managing OD to prevent adverse outcomes such as aspiration pneumonia. Understanding the determinants of primary care HCPs undertaking this new behaviour provides the targets for intervention. This realist review, underpinned by the Theoretical Domains Framework (TDF), aims to synthesise the relevant literature and develop programme theories to understand what interventions work, how they work, and under what circumstances, to facilitate HCPs to prevent harm from OD. Combining realist methodology with behavioural science will permit the conceptualisation of intervention components as theoretical behavioural constructs, thus informing the design of a future behaviour change intervention. Furthermore, through the TDF's linkage to a taxonomy of behaviour change techniques, we will identify corresponding behaviour change techniques to include in this intervention. Methods and analysis: We are following the five steps for undertaking a realist review: 1) clarify the scope, 2) search the literature, 3) appraise and extract data, 4) synthesise evidence, 5) evaluate. We have searched the Medline, Google Scholar, PubMed, EMBASE, CINAHL, AMED, Scopus and PsycINFO databases. We are obtaining additional evidence through grey literature, snowball sampling, lateral searching and consultation of the stakeholder group. Literature is being screened, evaluated and synthesised in Excel and NVivo. We will appraise evidence in relation to its relevance and rigour. Data will be extracted and synthesised according to their relation to the initial programme theories (IPTs).
IPTs were constructed after the preliminary literature search, informed by the TDF and with input from a stakeholder group of patient and public involvement advisors, general practitioners, speech and language therapists, geriatricians and pharmacists. We will follow the Realist and Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) quality and publication standards to report the study results. Results: In this ongoing review, our search has identified 1417 manuscripts, with approximately 20% progressing to full-text screening. We inductively generated 10 IPTs that hypothesise that practitioners require: the knowledge to spot the signs and symptoms of OD; the skills to provide initial advice and support; and access to resources in their working environment to support them in conducting these new behaviours. We mapped the 10 IPTs to 8 TDF domains and then generated a further 12 IPTs deductively, using domain definitions, to cover the remaining 6 TDF domains. Deductively generated IPTs broadened our thinking to consider domains such as ‘Emotion’, ‘Optimism’ and ‘Social influence’, e.g., if practitioners perceive that patients, carers and relatives expect initial advice and support, then they will be more likely to provide this, because they will feel obligated to do so. After prioritisation with stakeholders using a modified nominal group technique approach, a maximum of 10 IPTs will progress to testing against the literature.

Keywords: behaviour change, deglutition disorders, primary healthcare, realist review

Procedia PDF Downloads 85
21 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can easily be combined with electricity from different sources. Under the influence of an electric current, water molecules are split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittence: energy is stored when production exceeds demand and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools for saving time, money, and effort during the optimization of operating conditions and the investigation of designs. They become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be used to understand the underlying mechanisms, control the system, and detect flaws. Several modeling methods have been proposed; they can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former is less user-friendly and difficult to update, as it represents the system by ordinary or partial differential equations. The latter is more user-friendly and allows a clear representation of physical phenomena. In this case, the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, and is hence suitable for multiphysics systems. Among the graphical modeling methods, the bond graph is receiving increasing attention for being domain-independent and for relying on the energy exchange between the components of the system.
At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both the bond graph and the mathematical approach. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) were behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both the HER and the OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various operating temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. Simulating the polarization curve allows designs to be tested virtually, reducing the cost and time of experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
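The loss decomposition discussed above (reversible voltage plus HER/OER activation overpotentials plus the membrane's ohmic loss) can be sketched with a simple Tafel-form model. This is an illustrative sketch only; all parameter values below are hypothetical and not taken from the study:

```python
# Illustrative sketch: cell voltage as reversible voltage + activation
# overpotentials (Tafel form) + ohmic loss. Parameter values are
# hypothetical, chosen only to show the shape of a polarization curve.
from math import log10

E_REV = 1.23                  # reversible cell voltage, V
B_HER, J0_HER = 0.04, 1e-3    # Tafel slope (V/decade), exchange current density (A/cm2)
B_OER, J0_OER = 0.06, 1e-6
R_OHM = 0.15                  # area-specific membrane resistance, ohm*cm2

def cell_voltage(j):
    """Cell voltage (V) at current density j (A/cm2)."""
    eta_her = B_HER * log10(j / J0_HER)  # cathode activation loss
    eta_oer = B_OER * log10(j / J0_OER)  # anode activation loss
    eta_ohm = R_OHM * j                  # ohmic loss in the membrane
    return E_REV + eta_her + eta_oer + eta_ohm

curve = [(j, round(cell_voltage(j), 3)) for j in (0.1, 0.5, 1.0, 2.0)]
print(curve)
```

With the low exchange current density assumed for the OER, the anode activation term dominates at low current densities, while the ohmic term grows linearly and dominates at high current densities, mirroring the loss analysis above.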

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 91
20 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator

Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra

Abstract:

We have developed a CFD-coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations are solved implicitly for mass, momentum, and energy transfer along with the aerosol dynamics. The computationally efficient framework can simulate the temporal behavior of the total number concentration and the number size distribution. This formulation uniquely couples the standard K-Epsilon scheme and a boundary-layer model with detailed aerosol dynamics through the residence time. The model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data, apart from the other usual inputs, for simulations. The model predictions show that bulk fluid motion and the local heat distribution can significantly affect aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy-generated turbulence was found to affect parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results obtained from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. A condensation particle counter (CPC) and a scanning mobility particle sizer (SMPS) were used to measure the total number concentration and the number size distribution at the outlet of the reactor cell during these experiments. The model-predicted results were found to be in reasonable agreement with the observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for applications concerning the behavior of aerosol particles generated by the glowing wire technique, and in general for other similar large-scale domains. The incorporation of CFD in an aerosol microphysics framework provides a realistic platform to study natural-convection-driven systems and applications.
Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with the Navier-Stokes equations, modified to include the buoyancy-coupled K-Epsilon turbulence model. The coupled flow-aerosol dynamics equations were solved numerically using an implicit scheme. The wire composition and temperatures (wire surface and cell domain) were obtained or measured to be used as input for the model simulations. Model simulations showed a significant effect of the fluid properties on the dynamics of the aerosol particles. The role of buoyancy was highlighted by the observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against the measured temporal evolution, total number concentration, and size distribution at the outlet of the hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. The steady-state number size distribution matched very well for particle diameters below 10 nm, while reasonable differences were noticed for larger size ranges. Although tuned specifically for the present context (i.e., aerosol generation from a hot wire generator), the model can also be used for diverse applications, e.g., the emission of particles from hot zones (chimneys, exhaust), fires, and atmospheric cloud dynamics.
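One of the aerosol-dynamics sub-modules named above, coagulation, can be sketched in its simplest monodisperse form, dN/dt = -(1/2) K N², which has a known analytic solution against which a time-stepping scheme can be checked. The coefficient and initial concentration below are hypothetical, and the explicit Euler scheme here stands in for the implicit scheme used in the actual model:

```python
# Illustrative sketch: monodisperse coagulation, dN/dt = -(1/2) K N^2,
# integrated with explicit Euler and checked against the analytic
# solution N(t) = N0 / (1 + 0.5 K N0 t). K and N0 are hypothetical.

K = 1e-9    # coagulation coefficient, cm^3/s (hypothetical)
N0 = 1e7    # initial number concentration, cm^-3 (hypothetical)
DT = 0.01   # time step, s

def simulate(t_end, dt=DT):
    """March the number concentration forward with explicit Euler."""
    n = N0
    for _ in range(int(round(t_end / dt))):
        n -= dt * 0.5 * K * n * n
    return n

def analytic(t):
    """Closed-form solution of the monodisperse coagulation equation."""
    return N0 / (1.0 + 0.5 * K * N0 * t)

t = 100.0
print(simulate(t), analytic(t))
```

The full model evolves a size distribution rather than a single number concentration, with a size-dependent coagulation kernel, but the decay behavior follows the same balance.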

Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics

Procedia PDF Downloads 143
19 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program

Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner

Abstract:

Introductory statement: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, distribution, and application of infection testing. However, despite the impressive mobilization of resources, individuals were severely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients' illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct description of basic methodologies: 18 patients in a COVID-19 remote patient monitoring program (Precision Recovery, within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the healthcare system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo 12. References for the domain 'testing' were then extracted and analyzed for themes and statistical patterns. Clear indication of major findings of the study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery.
One participant shared that, because she never tested positive, she was shielded from anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients' illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect it had on the perception of their illness, both by themselves and by others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.

Keywords: accessibility, COVID-19, recovery, testing

Procedia PDF Downloads 193
18 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the direct numerical simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM; therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning the global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, in text files (pre-files). It packs integer-type information in a stream binary format in pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O for Lustre, in such a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node a single MPI rank reads data from the file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and messages do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is well suited for GPFS, and that parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution at every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
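The pre-file idea described above, packing integer startup information as a portable binary stream that any rank can read back quickly, can be sketched as follows. The field layout and names here are hypothetical illustrations, not the actual DSEM pre-file format:

```python
# Illustrative sketch: packing integer startup information (e.g., a
# partition map) as a little-endian binary stream and reading it back.
# The layout (count header + payload) is hypothetical, not the DSEM format.
import io
import struct

def write_prefile(stream, partition_ids):
    """Pack a count followed by 32-bit little-endian integers."""
    stream.write(struct.pack("<i", len(partition_ids)))
    stream.write(struct.pack(f"<{len(partition_ids)}i", *partition_ids))

def read_prefile(stream):
    """Read back the count header, then the integer payload."""
    (count,) = struct.unpack("<i", stream.read(4))
    return list(struct.unpack(f"<{count}i", stream.read(4 * count)))

buf = io.BytesIO()
write_prefile(buf, [3, 1, 4, 1, 5, 9, 2, 6])
buf.seek(0)
print(read_prefile(buf))  # prints [3, 1, 4, 1, 5, 9, 2, 6]
```

Fixing the byte order explicitly is what makes such files portable between machines; a raw stream with no record markers also maps naturally onto parallel reads at known offsets.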

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 133
17 Experimental Characterisation of Composite Panels for Railway Flooring

Authors: F. Pedro, S. Dias, A. Tadeu, J. António, Ó. López, A. Coelho

Abstract:

Railway transportation is considered the most economical and sustainable way to travel. However, future mobility brings important challenges to railway operators. The main target is to develop solutions that stimulate sustainable mobility. The research and innovation goals for this domain are efficient solutions, ensuring an increased level of safety and reliability, improved resource efficiency, high availability of the means (train), and passengers satisfied with the level of travel comfort. These requirements are in line with the European Strategic Agenda for the 2020 rail sector, promoted by the European Rail Research Advisory Council (ERRAC). All these aspects involve redesigning current equipment and, in particular, the interior of the carriages. Recent studies have shown that two of the most important requirements for passengers are reasonable ticket prices and comfortable interiors. Passengers tend to use their travel time to rest or to work, so train interiors and their systems need to incorporate features that meet these requirements. Among the various systems that make up train interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort. It is also one of the systems that takes the longest to install on the train, and it contributes significantly to the overall weight (mass) of the interior systems. Additionally, it has a strong impact on manufacturing costs. In the development phase, railway flooring is usually designed with software that allows several solutions to be drawn and calculated in a short period of time. After obtaining the best solution, considering the goals previously defined, experimental data are always necessary and required. This experimental phase is so significant that its outcome can prompt a revision of the designed solution.
This paper presents the methodology and some of the results of an experimental characterisation of composite panels for railway application. The mechanical tests were made for unaged specimens and for specimens that underwent some type of aging, i.e., heat, cold and humidity cycles or freezing/thawing cycles. This conditioning aims to simulate not only the effect of time, but also the impact of severe environmental conditions. Both full solutions and separate components/materials were tested. For the full solution (panel), these were: four-point bending tests, tensile shear strength, tensile strength perpendicular to the plane, determination of the spreading of water, and impact tests. For individual characterisation of the components, more specifically for the covering, the following tests were made: determination of the tensile stress-strain properties, determination of flexibility, determination of tear strength, peel test, tensile shear strength test, adhesion resistance test and dimensional stability. The main conclusion was that experimental characterisation contributes enormously to understanding the behaviour of the materials, both individually and assembled. This knowledge contributes to increasing the quality of premium solutions. This research work was framed within the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through the COMPETE 2020.
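As an example of how raw test readings become material properties, the four-point bending test converts the measured failure load into a flexural stress. The sketch below assumes the common configuration with the loading span equal to one half of the support span, for which sigma = 3FL / (4bd^2) (the formula ASTM D6272 prescribes for that geometry); the standard actually followed in the study is not stated.

```python
def flexural_stress_4pt(force_n, support_span_mm, width_mm, thickness_mm):
    """Peak flexural stress in MPa for four-point bending, assuming the
    loading span is one half of the support span (sigma = 3FL / (4bd^2)).
    Inputs: load in N, spans and section dimensions in mm."""
    return (3.0 * force_n * support_span_mm
            / (4.0 * width_mm * thickness_mm ** 2))
```

For instance, a 50 mm wide, 10 mm thick specimen failing at 1 kN over a 200 mm support span gives a flexural stress of 30 MPa.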

Keywords: durability, experimental characterization, mechanical tests, railway flooring system

Procedia PDF Downloads 155
16 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. 
AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 33
15 Effects of School Culture and Curriculum on Gifted Adolescent Moral, Social, and Emotional Development: A Longitudinal Study of Urban Charter Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Pat J. Austin, Marc P. Bonis, Richard B. Speaker, Jr.

Abstract:

Using two psychometric instruments, this study examined social and emotional intelligence and moral judgment levels of more than 300 gifted and talented high school students enrolled in arts-integrated, academic acceleration, and creative arts charter schools in an ethnically diverse large city in the southeastern United States. Gifted and talented individuals possess distinguishable characteristics; these frequently appear as strengths, but serious problems often accompany them. Although many gifted adolescents thrive in their environments, some struggle in their school and community due to emotional intensity, motivation and achievement issues, lack of peers and isolation, identification problems, sensitivity to expectations and feelings, perfectionism, and other difficulties. These gifted students endure and survive in school rather than flourish. Gifted adolescents face special intrapersonal, interpersonal, and environmental problems. Furthermore, they experience greater levels of stress, disaffection, and isolation than non-gifted individuals due to their advanced cognitive abilities. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of these adolescents. Numerous studies have researched moral, social, and emotional development in the areas of cognitive-developmental, psychoanalytic, and behavioral learning; however, in almost all cases, these three facets have been studied separately, leading to many divergent theories. Additionally, various frameworks and models purporting to encourage the different socio-affective branches of development have been debated in curriculum theory, yet research remains inconclusive on the effectiveness of these programs.
Most often studied is the socio-affective domain, which includes development and regulation of emotions; empathy development; interpersonal relations and social behaviors; personal and gender identity construction; and moral development, thinking, and judgment. Examining development in these domains can provide insight into why some gifted and talented adolescents are not always successful in adulthood despite advanced IQ scores; in particular, whether the emotional, social, and moral capabilities of gifted and talented individuals are as advanced as their intellectual abilities, and how these capabilities relate to each other. This mixed methods longitudinal study examined students in urban gifted and talented charter schools for (1) socio-affective development levels and (2) whether a particular environment encourages developmental growth. The research questions guiding the study were: (1) How do academically and artistically gifted 10th and 11th grade students perform on psychological scales of social and emotional intelligence and moral judgment? Do they differ from the normative sample? Do gender differences exist among gifted students? (2) Do adolescents who attend distinctive gifted charter schools differ in developmental profiles? Students’ performances on psychometric instruments were compared over time and by program type. To assess moral judgment (DIT-2) and socio-emotional intelligence (BarOn EQ-i:YV), participants took pre-, mid-, and post-tests during one academic school year. Quantitative differences in growth on these psychological scales (individual and school-wide) were examined. If a school showed change, qualitative artifacts (culture, curricula, instructional methodology, stakeholder interviews) provided insight into environmental correlations.

Keywords: gifted and talented programs, moral judgment, social and emotional intelligence, socio-affective education

Procedia PDF Downloads 192
14 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
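As a toy illustration of the first two steps (not the authors' implementation), transactions can be held as sender/receiver pairs, turned into an adjacency structure, and thinned by uniform edge sampling; a production version would weight the draw so that the network properties of interest are preserved with known statistical error before the sample is handed to the GCN.

```python
import random
from collections import defaultdict

def build_graph(transactions):
    """Adjacency sets: every transaction (sender, receiver) becomes a
    directed edge in the graph."""
    graph = defaultdict(set)
    for sender, receiver in transactions:
        graph[sender].add(receiver)
    return graph

def sample_transactions(transactions, fraction, seed=0):
    """Uniform random subset of the transaction list, a stand-in for the
    paper's probabilistic sampling step (which would weight the draw to
    preserve chosen network properties)."""
    rng = random.Random(seed)
    k = max(1, int(len(transactions) * fraction))
    return rng.sample(transactions, k)
```

The sampled edge list is a drop-in replacement for the full one: downstream analysis code sees the same (sender, receiver) structure, only smaller.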

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 76
13 Examining Language as a Crucial Factor in Determining Academic Performance: A Case of Business Education in Hong Kong

Authors: Chau So Ling

Abstract:

I. INTRODUCTION: Educators have always been interested in exploring the factors that contribute to students’ academic success. It is beyond question that language, as a medium of instruction, affects student learning. This paper investigates whether language is a crucial factor in determining students’ achievement in their studies. II. BACKGROUND AND SIGNIFICANCE OF STUDY: The use of English as a medium of instruction in Hong Kong is a special topic because Hong Kong, a former British colony, is a post-colonial and international city. In such a specific language environment, researchers in the education field have always been interested in investigating students’ language proficiency and its relation to academic achievement and other related educational indicators such as motivation to learn, self-esteem, learning effectiveness, self-efficacy, etc. Along this line of thought, this study focused specifically on business education. III. METHODOLOGY: The study involved two sequential stages, namely a focus group interview and a statistical data analysis, covering both qualitative and quantitative aspects. The subjects of the study were divided into two groups. For the first group, participating in the interview, a total of ten high school students were invited. They studied Business Studies, and their English standards varied. The theme of the discussion was “Does English affect your learning and examination results of Business Studies?” The students were guided to discuss the extent to which their English standard affected their learning of Business subjects and were asked to rate the correlation between English and performance in Business Studies on a five-point scale. The second stage of the study involved another group of students. They were high school graduates who had taken the public examination for entering universities.
A database containing their public examination results for different subjects was obtained for the purpose of statistical analysis. Hypotheses were tested, and evidence was obtained from the focus group interview to triangulate the findings. V. MAJOR FINDINGS AND CONCLUSION: Through the sharing of personal experience, the focus group discussion indicated that a higher English standard could help students achieve better learning and examination performance. To conclude the interview, the students were asked to rate the correlation between English proficiency and performance in Business Studies on a five-point scale. With point one meaning the least correlated, ninety percent of the students rated the correlation at point four. These preliminary results indicated that English plays an important role in students’ learning of Business Studies, or at least this was what the students perceived, and this set the hypotheses for the study. After the focus group interview, further evidence had to be gathered to support the hypotheses. The data analysis stage examined the relationship by correlating the students’ public examination results in Business Studies with their levels of English. The results indicated a positive correlation between English standard and Business Studies examination performance. To highlight the importance of the English language to the study of Business Studies, the correlations with the public examination results of other non-business subjects were also tested. Statistical results showed that language plays a greater role in students’ performance in Business subjects than in the other subjects. The explanation includes the dynamic nature of the subject, the examination format and study requirements, the specialist language used, etc.
Unlike in Science and Geography, students might find it more difficult to relate business concepts or terminologies to their own experience, and there are not many obvious physical or practical activities or visual aids to serve as evidence or experiments. It is well established in Hong Kong research that English proficiency is a determinant of academic success, and other research studies have verified this notion. For example, research revealed that the more enriched the language experience, the better the cognitive performance in conceptual tasks. The ability to perform this kind of task is particularly important to students taking Business subjects. Another study, carried out in the UK, was geared towards identifying and analyzing the reasons for underachievement across a cohort of GCSE students taking Business Studies. Results showed that weak language ability was the main barrier to raising students’ performance levels. The interview results thus appear to have been successfully triangulated with the data findings. Although educational failure cannot be reduced to linguistic failure, and language is just one of the variables at play in determining academic achievement, it is generally accepted that language does affect students’ academic performance; it is just a matter of extent. This paper provides recommendations for business educators on students’ language training and sheds light on further research possibilities in this area.
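The statistical step described above amounts to computing a Pearson correlation between students' English scores and their Business Studies scores. A self-contained sketch (the score vectors here would come from the examination database; the implementation below is the generic textbook formula, not the study's actual analysis code):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

A value near +1 would support the hypothesis that English standard and Business Studies performance move together; in practice one would also test significance and compare against the correlations for non-business subjects, as the study does.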

Keywords: academic performance, language, learning, medium of instruction

Procedia PDF Downloads 121
12 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competitions between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is in the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak – an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient.
We performed whole genome sequencing (WGS) of four surgical specimens collected during the first and second surgeries of the GBM and used HATCHet to quantify its clonal composition and how it changed between the two surgeries. HATCHet identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1-post and T2-FLAIR scans acquired after the first surgery informed tumor cell densities per voxel. Magnetic resonance elastography scans and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM’s tumor cell density and ploidy composition before the second surgery. The results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49) but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
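As a rough illustration of how such voxel-level rules might look, the toy update below evolves two subpopulations under glucose-dependent proliferation and death. The rate constants echo the fitted values quoted above (0.70/0.47 for the high-ploidy clone, 0.49/1.42 for the low-ploidy clone), but the functional forms and the consumption term are assumptions for illustration, not the S3MB equations.

```python
def update_voxel(n_lo, n_hi, g, dt=0.01):
    """One explicit Euler step for two subpopulations sharing a voxel.

    Proliferation scales with local glucose g (normalized to [0, 1]);
    death scales with glucose scarcity (1 - g). Rate constants echo the
    values quoted in the abstract, but the functional forms and the
    glucose consumption term are illustrative assumptions.
    """
    p_lo, d_lo = 0.49, 1.42      # low-ploidy: proliferation, death
    p_hi, d_hi = 0.70, 0.47      # high-ploidy: proliferation, death
    n_lo += dt * (p_lo * g - d_lo * (1.0 - g)) * n_lo
    n_hi += dt * (p_hi * g - d_hi * (1.0 - g)) * n_hi
    g -= dt * 0.05 * (n_lo + n_hi) * g   # cells consume glucose
    return n_lo, n_hi, max(g, 0.0)
```

Iterating this step across a voxel grid, with a diffusion term moving glucose between neighboring voxels, gives the flavor of how differential glucose-dependent rates translate into spatially distinct clonal distributions.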

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 73
11 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the solution. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better alternative to uniform random numbers is deterministic, quasi-random sequences. The Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. A supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of randomly sampled photon bundles were recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. Machine learning models offer great scope for further reduction of computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the ANN model can fully address the environment of interest. Better results can be achieved in this as yet unexplored domain.
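The advantage of low-discrepancy points can be demonstrated on a one-dimensional toy integral (a smooth stand-in integrand, not the radiative-transfer kernel). The base-2 Halton sequence, identical to the van der Corput sequence in one dimension, fills [0, 1] far more evenly than pseudo-random draws, so the estimator converges faster than the O(n^-1/2) Monte Carlo rate:

```python
import random

def halton(i, base):
    """i-th point of the Halton sequence in the given base (radical inverse)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(f, points):
    """Plain Monte Carlo estimator of the integral of f over [0, 1]."""
    return sum(f(x) for x in points) / len(points)

n = 1024
integrand = lambda x: x * x              # true integral over [0, 1] is 1/3

qmc_points = [halton(i, 2) for i in range(1, n + 1)]   # deterministic, low discrepancy
rng = random.Random(7)
pmc_points = [rng.random() for _ in range(n)]          # pseudo-random reference

qmc_err = abs(estimate(integrand, qmc_points) - 1.0 / 3.0)
pmc_err = abs(estimate(integrand, pmc_points) - 1.0 / 3.0)
```

With n = 1024 points the QMC error on this integrand is already well below 10⁻³, whereas the pseudo-random estimator's expected error is on the order of the sample standard deviation divided by √n.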

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
10 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts

Authors: William Michael Short

Abstract:

‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable.
But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage. The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.

Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics

Procedia PDF Downloads 132
9 Seismic Stratigraphy of the First Deposits of the Kribi-Campo Offshore Sub-basin (Gulf of Guinea): Pre-cretaceous Early Marine Incursion and Source Rocks Modeling

Authors: Mike-Franck Mienlam Essi, Joseph Quentin Yene Atangana, Mbida Yem

Abstract:

The Kribi-Campo sub-basin belongs to the southern domain of the Cameroon Atlantic Margin in the Gulf of Guinea. It is the African homologue of the Sergipe-Alagoas Basin, located on the northeast side of the Brazilian margin. The onset of the seafloor-spreading period in the Southwest African Margin in general, and in the study area in particular, remains controversial. Various studies place this event in Cretaceous times (Early Aptian to Late Albian), while others suggest that it occurred during the Pre-Cretaceous period (Palaeozoic or Jurassic). This work analyses two Cameroon Span seismic lines to re-examine the early marine incursion period of the study area for a better understanding of the margin's evolution. The methodology of this study is based on the delineation of the first seismic sequence, using the tracking of reflector terminations and the analysis of its internal reflections associated with the external configuration of the package. The results obtained indicate, from the bottom upwards, that the first deposits overlie a first seismic horizon (H1) associated with "onlap" terminations at its top and underlie a second horizon (H2) showing "downlap" terminations at its top. The external configuration of this package features a prograded fill pattern, and it is observed within the depocenter area with discontinuous reflections that pinch out against the basement. From east to west, this sequence shows two seismic facies (SF1 and SF2). SF1 has parallel to subparallel reflections characterized by high amplitude, and SF2 shows parallel and stratified reflections characterized by low amplitude. The distribution of these seismic facies reveals a lateral facies variation. According to the fundamental works on seismic stratigraphy and a literature review of the geological context of the study area, the stratigraphical natures of the identified horizons and seismic facies have been established.
The seismic horizons H1 and H2 correspond to the top of the basement and the "downlap surface", respectively. SF1 indicates continental sediments (sands/sandstone) and SF2 marine deposits (shales, clays). The prograding configuration observed suggests a marine regression. The correlation of these results with the lithochronostratigraphic chart of the Sergipe-Alagoas Basin reveals that the first marine deposits in the study area date from Pre-Cretaceous times (Palaeozoic or Jurassic). The first deposits onto the basement represent the end of a cycle of sedimentation, and the hypothesis of Cretaceous seafloor spreading through the study area marks the onset of another cycle of sedimentation. Furthermore, the presence of marine sediments in the first deposits implies that this package could contain marine source rocks. The spatial tracking of these deposits reveals that they could be found in some onshore parts of the Kribi-Campo area or even on its northern side.

Keywords: Cameroon Span seismic, early marine incursion, Kribi-Campo sub-basin, Pre-Cretaceous period, Sergipe-Alagoas Basin

8 Amphiphilic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Algae

Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres

Abstract:

Biofilm is a predominant lifestyle chosen by bacteria. Whether developed on an immersed surface or as mobile flocs, bacteria within this form of life show properties different from their planktonic counterparts. Within the biofilm, the self-formed matrix of Extracellular Polymeric Substances (EPS) offers hydration, resource capture and enhanced resistance to antimicrobial agents, and allows cell communication. Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by bio-colonized vessels has been widely studied. Measurement drift in submerged sensors, obstructions in heat exchangers, and the deterioration of offshore structures are major difficulties that industries are dealing with. Therefore, surfaces that inhibit bio-colonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature; studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocidal antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in the context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocidal compounds, offering a cost-effective solution with no toxic effects on marine organisms.
Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent works, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid for preventing the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interact with bacteria, leading to a reduction of their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids featuring either a cationic charge (BSV36, KLN47) or a zwitterionic polar-head group (SL386, MB2871) in preventing micro-fouling by marine bacteria. We also study the toxicity of these compounds in order to identify the most promising compound, which must feature high anti-adhesive properties and low cytotoxicity towards two links representative of coastal marine food webs: phytoplankton and oyster larvae.

Keywords: amphiphilic phospholipids, bacterial biofilm, marine microfouling, non-toxic antifouling

7 Harnessing the Power of Artificial Intelligence: Advancements and Ethical Considerations in Psychological and Behavioral Sciences

Authors: Nayer Mofidtabatabaei

Abstract:

Advancements in artificial intelligence (AI) have transformed various fields, including psychology and behavioral sciences. This paper explores the diverse ways in which AI is applied to enhance research, diagnosis, therapy, and understanding of human behavior and mental health. We discuss the potential benefits and challenges associated with AI in these fields, emphasizing the ethical considerations and the need for collaboration between AI researchers and psychological and behavioral science experts. Artificial Intelligence (AI) has gained prominence in recent years, revolutionizing multiple industries, including healthcare, finance, and entertainment. One area where AI holds significant promise is the field of psychology and behavioral sciences. AI applications in this domain range from improving the accuracy of diagnosis and treatment to understanding complex human behavior patterns. This paper aims to provide an overview of the various AI applications in psychological and behavioral sciences, highlighting their potential impact, challenges, and ethical considerations. Mental Health Diagnosis: AI-driven tools, such as natural language processing and sentiment analysis, can analyze large datasets of text and speech to detect signs of mental health issues. For example, chatbots and virtual therapists can provide initial assessments and support to individuals suffering from anxiety or depression. Autism Spectrum Disorder (ASD) Diagnosis: AI algorithms can assist in early ASD diagnosis by analyzing video and audio recordings of children's behavior. These tools help identify subtle behavioral markers, enabling earlier intervention and treatment. Personalized Therapy: AI-based therapy platforms use personalized algorithms to adapt therapeutic interventions based on an individual's progress and needs. These platforms can provide continuous support and resources for patients, making therapy more accessible and effective.
Virtual Reality Therapy: Virtual reality (VR) combined with AI can create immersive therapeutic environments for treating phobias, PTSD, and social anxiety. AI algorithms can adapt VR scenarios in real time to suit the patient's progress and comfort level. Data Analysis: AI aids researchers in processing vast amounts of data, including survey responses, brain imaging, and genetic information. Privacy Concerns: Collecting and analyzing personal data for AI applications in psychology and behavioral sciences raises significant privacy concerns. Researchers must ensure the ethical use and protection of sensitive information. Bias and Fairness: AI algorithms can inherit biases present in training data, potentially leading to biased assessments or recommendations. Efforts to mitigate bias and ensure fairness in AI applications are crucial. Transparency and Accountability: AI-driven decisions in psychology and behavioral sciences should be transparent and subject to accountability. Patients and practitioners should understand how AI algorithms operate and make decisions. AI applications in psychological and behavioral sciences have the potential to transform the field by enhancing diagnosis, therapy, and research. However, these advancements come with ethical challenges that require careful consideration. Collaboration between AI researchers and psychological and behavioral science experts is essential to harness AI's full potential while upholding ethical standards and privacy protections. The future of AI in psychology and behavioral sciences holds great promise, but it must be navigated with caution and responsibility.

Keywords: artificial intelligence, psychological sciences, behavioral sciences, diagnosis and therapy, ethical considerations

6 Results Concerning the University-Industry Partnership for a Research Project Implementation (MUROS) in the Romanian Program STAR

Authors: Loretta Ichim, Dan Popescu, Grigore Stamatescu

Abstract:

The paper reports the collaboration between a top university from Romania and three companies for the implementation of a research project in a multidisciplinary domain, focusing on the impact and benefits for both education and industry. The joint activities were developed under the Space Technology and Advanced Research Program (STAR), funded by the Romanian Space Agency (ROSA) for a university-industry partnership. The context was defined by linking the European Space Agency optional programs and the development and promotion of national research with the educational and industrial capabilities in the aeronautics, security and related areas, by increasing the collaboration between academic and industrial entities and by realizing high-level scientific production. The project name is Multisensory Robotic System for Aerial Monitoring of Critical Infrastructure Systems (MUROS), which was carried out during 2013-2016. The project included the University POLITEHNICA of Bucharest (coordinator) and three companies, which manufacture and market unmanned aerial systems. The project had as its main objective the development of an integrated system combining ground wireless sensor networks and UAV monitoring in various application scenarios for critical infrastructure surveillance. This included specific activities related to fundamental and applied research, technology transfer, prototype implementation and result dissemination. The core of the contributions lay in distributed data processing and communication mechanisms, advanced image processing and embedded system development. Special focus is given by the paper to analyzing the impact of the project implementation on the educational process, directly or indirectly, through the faculty members (professors and students) involved in the research team.
Three main directions are discussed: a) enabling students to carry out internships at the partner companies, b) handling advanced topics and industry requirements at the master's level, and c) experiments and concept validation for doctoral theses. The impact of the research work (as the educational component) developed by the faculty members on improving the performance of the companies' products is highlighted. The collaboration between the university and the companies was well balanced in both contributions and results. The paper also presents the outcomes of the project, which reveal the efficient collaboration between higher education and industry: master's theses, doctoral theses, conference papers, journal papers, technical documentation for technology transfer, a prototype, and a patent. The experience can provide useful practices for blending research and education within an academia-industry cooperation framework, while the lessons learned represent a starting point in debating the new role of advanced research and development performing companies in association with higher education. This partnership, promoted at EU level, has a broad impact beyond the constrained scope of a single project and can develop into long-lasting collaboration benefiting all stakeholders: students, universities and the surrounding knowledge-based economic and industrial ecosystem. Due to the exchange of experiences between the university (UPB) and the manufacturing company (AFT Design), a new project, SIMUL, under the Bridge Grant Program (Romanian executive agency UEFISCDI), was started (2016 - 2017). This project will continue the educational research for innovation in master's and doctoral studies in the MUROS thematic area (collaborative multi-UAV application for flood detection).

Keywords: education process, multisensory robotic system, research and innovation project, technology transfer, university-industry partnership

5 A Study on the Use Intention of Smart Phones

Authors: Zhi-Zhong Chen, Jun-Hao Lu, Jr., Shih-Ying Chueh

Abstract:

Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), this study investigates people's intention to use smart phones. The study additionally incorporates two new variables: 'self-efficacy' and 'attitude toward using'. Samples were collected by questionnaire survey, of which 240 were valid. After correlation analysis, reliability testing, ANOVA, t-tests and multiple regression analysis, the study finds that social influence and self-efficacy have a positive effect on use intention, and that use intention in turn has a positive effect on use behavior.
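The multiple-regression step described above can be sketched as follows; the data, effect sizes and variable names here are synthetic illustrations under assumed values, not the study's actual sample:

```python
import numpy as np

# Regress use intention on UTAUT-style predictors plus self-efficacy.
# All data below are simulated; only the sample size (240 valid
# questionnaires) is taken from the abstract.
rng = np.random.default_rng(0)
n = 240

social_influence = rng.normal(size=n)
self_efficacy = rng.normal(size=n)
attitude = rng.normal(size=n)

# Simulate an outcome in which the first two predictors matter,
# mirroring the reported positive effects.
use_intention = (0.5 * social_influence + 0.4 * self_efficacy
                 + rng.normal(scale=0.3, size=n))

# Ordinary least squares via a design matrix with an intercept column.
X = np.column_stack([np.ones(n), social_influence, self_efficacy, attitude])
beta, *_ = np.linalg.lstsq(X, use_intention, rcond=None)
print(dict(zip(["intercept", "social_influence", "self_efficacy", "attitude"],
               np.round(beta, 2))))
```

With enough respondents, the estimated coefficients recover the simulated effects, which is the pattern of positive significant predictors the study reports.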

Keywords: attitude toward using, self-efficacy, smart phone, use behavior, use intention, UTAUT

4 Moths of Indian Himalayas: Data Digging for Climate Change Monitoring

Authors: Angshuman Raha, Abesh Kumar Sanyal, Uttaran Bandyopadhyay, Kaushik Mallick, Kamalika Bhattacharyya, Subrata Gayen, Gaurab Nandi Das, Mohd. Ali, Kailash Chandra

Abstract:

The Indian Himalayan Region (IHR), due to its sheer latitudinal and altitudinal expanse, acts as a mixing ground for different zoogeographic faunal elements. The innumerable unique and distributionally restricted rare species of the IHR are constantly being threatened with extinction by the ongoing climate change scenario; many might face extinction without even being noticed or discovered. Monitoring the community dynamics of a suitable taxon is indispensable to assess the effect of this global perturbation at the micro-habitat level. Lepidoptera, particularly moths, are suitable for this purpose due to their huge diversity and strictly herbivorous nature. The present study aimed to collate scattered historical records of moths from the IHR and spatially disseminate them in the Geographic Information System (GIS) domain. The study also intended to identify moth species with significant altitudinal shifts, which could be prioritised for a monitoring programme to assess the effect of climate change on biodiversity. A robust database of moths recorded from the IHR was prepared from voluminous secondary literature and museum collections. Historical sampling points were transformed into richness grids, which were spatially overlaid on altitude, annual precipitation and vegetation layers separately to show moth richness patterns along major environmental gradients. Primary samplings were done by setting standard light traps at 11 Protected Areas representing five Indian Himalayan biogeographic provinces. To identify significant altitudinal shifts, past and present altitudinal records of the species identified in the primary samplings were compared. A consolidated list of 4107 species belonging to 1726 genera in 62 families of moths was prepared from a total of 10,685 historical records from the IHR.
Family-wise assemblage revealed Erebidae to be the most speciose family, with 913 species under 348 genera, followed by Geometridae with 879 species under 309 genera and Noctuidae with 525 species under 207 genera. Among the biogeographic provinces, Central Himalaya represented the maximum records with 2248 species, followed by Western and North-western Himalaya with 1799 and 877 species, respectively. Spatial analysis revealed that species richness was more or less uniform (up to 150 species recorded per cell) across the IHR. Throughout the IHR, the middle elevation zones between 1000-2000 m encompassed high species richness, and temperate coniferous forest associated with the 1500-2000 mm rainfall zone showed the maximum species richness. A total of 752 species of moths representing 23 families were identified from the present sampling. Thirteen genera were identified that were restricted to specialized habitats of alpine meadows above 3500 m. Five historical localities with a high richness of >150 species were selected that could be considered for repeat sampling to assess the influence of climate change on moth assemblages. Of the 7 species exhibiting a significant altitudinal ascent of >2000 m, Trachea auriplena, Diphtherocome fasciata (Noctuidae) and Actias winbrechlini (Saturniidae) showed the maximum range shift of >2500 m, indicating that these species warrant intensive monitoring. Great Himalayan National Park harbours the most diverse assemblage of high-altitude restricted species and should be a priority site for habitat conservation. Among the 13 range-restricted genera, Arichanna, Opisthograptis, Photoscotosia (Geometridae), Phlogophora, Anaplectoides and Paraxestia (Noctuidae) were dominant and require rigorous monitoring, as they are the most susceptible to climatic perturbations.
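The altitudinal-shift screening described above amounts to comparing historical and present elevation records per species against a shift threshold; a minimal sketch follows, in which all elevation values are hypothetical illustrations, not the study's data:

```python
# Hypothetical per-species elevation records (metres above sea level).
# Species names are taken from the abstract; the numbers are invented
# solely to illustrate the comparison.
historical_max_m = {
    "Trachea auriplena": 1200,
    "Diphtherocome fasciata": 900,
    "Actias winbrechlini": 1100,
    "Arichanna sp.": 2800,
}
present_max_m = {
    "Trachea auriplena": 3900,
    "Diphtherocome fasciata": 3600,
    "Actias winbrechlini": 3700,
    "Arichanna sp.": 3000,
}

# Flag species whose upper elevation record rose by more than 2000 m,
# the threshold the study uses for a significant altitudinal ascent.
SHIFT_THRESHOLD_M = 2000
shifted = {
    species: present_max_m[species] - historical_max_m[species]
    for species in historical_max_m
    if present_max_m[species] - historical_max_m[species] > SHIFT_THRESHOLD_M
}
print(shifted)
```

Species exceeding the threshold become candidates for the intensive monitoring the study recommends; the rest are retained only in the background database.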

Keywords: altitudinal shifts, climate change, historical records, Indian Himalayan region, Lepidoptera

3 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District

Authors: Veenapani Rajeev Verma

Abstract:

Context: Assessing facility readiness is paramount, as it indicates the capacity of facilities to provide essential care and their resilience to health challenges. In the context of decentralization, estimation of supply-side readiness indices at the subnational level is imperative for effective evidence-based policy, but remains a colossal challenge due to the lack of dependable and representative data sources. Setting: District Poonch of Jammu and Kashmir was selected for this study. It is a remote, rural district with formidable topographical barriers and is identified as high priority by the government. It is also a fragile area, bounded by the Line of Control with Pakistan and bearing the brunt of ceasefire violations, military skirmishes and sporadic militant attacks. Hilly geographical terrain, a rudimentary or absent road network and impoverishment are quintessential to this area. Objectives: The objectives of the study are to a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators, and b) ascertain supply-side barriers in service provisioning via stakeholder analysis. The study also strives to expand the analytical domain, unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-method approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 subcentres, 44 primary health centres, 3 community health centres and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium of checklists was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory and infection control protocols, as proposed in WHO's Service Availability and Readiness Assessment (SARA).
Two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators. The OLS regression method was used to determine the factors explaining the composite index generated from the PCA. Stakeholder analysis was conducted to discern qualitative information. A myriad of techniques, such as observations, key informant interviews and focus group discussions using semi-structured questionnaires, was administered to both leaders and laggards for a critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for subcentres and PHCs (the first points of contact), with composite scores of 0.47 and 0.41, respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. Results revealed that availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47) for facilities providing secondary care. The presence of contractual staff, a walk of more than one hour to the facility, location in zone A (most vulnerable to cross-border shelling), and inaccessibility due to snowfall and thick jungle were negatively associated with the readiness index. A nonchalant staff attitude, unavailability of staff quarters, and leakages and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting. Complex dimensions such as geographic barriers and user and provider behavior are not within the purview of this methodology.
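The index-construction step can be sketched as follows. This sketch uses plain PCA on simulated binary tracer indicators as a simplified stand-in for the study's two-stage polychoric PCA, and every value in it is illustrative rather than drawn from the survey:

```python
import numpy as np

# Simulate binary tracer indicators (present/absent) for the surveyed
# facilities: 90 subcentres + 44 PHCs + 3 CHCs + 1 district hospital = 138.
# The indicator count (12) is an arbitrary illustration.
rng = np.random.default_rng(1)
n_facilities, n_indicators = 138, 12
indicators = (rng.random((n_facilities, n_indicators)) < 0.5).astype(float)

# Centre the indicator matrix and take the first principal component
# (leading right-singular vector) as the weight vector.
X = indicators - indicators.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
weights = vt[0]
scores = X @ weights

# Rescale scores to [0, 1] so facilities are comparable on one index.
readiness = (scores - scores.min()) / (scores.max() - scores.min())
print(round(float(readiness.mean()), 2))
```

The resulting per-facility scores play the role of the composite readiness index, which the study then models with OLS against facility characteristics such as staffing and accessibility.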

Keywords: effective coverage, principal component analysis, readiness index, universal health coverage

2 MANIFEST-2, a Global, Phase 3, Randomized, Double-Blind, Active-Control Study of Pelabresib (CPI-0610) and Ruxolitinib vs. Placebo and Ruxolitinib in JAK Inhibitor-Naïve Myelofibrosis Patients

Authors: Claire Harrison, Raajit K. Rampal, Vikas Gupta, Srdan Verstovsek, Moshe Talpaz, Jean-Jacques Kiladjian, Ruben Mesa, Andrew Kuykendall, Alessandro Vannucchi, Francesca Palandri, Sebastian Grosicki, Timothy Devos, Eric Jourdan, Marielle J. Wondergem, Haifa Kathrin Al-Ali, Veronika Buxhofer-Ausch, Alberto Alvarez-Larrán, Sanjay Akhani, Rafael Muñoz-Carerras, Yury Sheykin, Gozde Colak, Morgan Harris, John Mascarenhas

Abstract:

Myelofibrosis (MF) is characterized by bone marrow fibrosis, anemia, splenomegaly and constitutional symptoms. Progressive bone marrow fibrosis results from aberrant megakaryopoiesis and expression of proinflammatory cytokines, both of which are heavily influenced by bromodomain and extraterminal domain (BET)-mediated gene regulation and lead to myeloproliferation and cytopenias. Pelabresib (CPI-0610) is an oral small-molecule investigational inhibitor of BET protein bromodomains currently being developed for the treatment of patients with MF. It is designed to downregulate BET target genes and modify nuclear factor kappa B (NF-κB) signaling. MANIFEST-2 was initiated based on data from Arm 3 of the ongoing Phase 2 MANIFEST study (NCT02158858), which is evaluating the combination of pelabresib and ruxolitinib in Janus kinase inhibitor (JAKi) treatment-naïve patients with MF. Primary endpoint analyses showed splenic and symptom responses in 68% and 56% of 84 enrolled patients, respectively. MANIFEST-2 (NCT04603495) is a global, Phase 3, randomized, double-blind, active-control study of pelabresib and ruxolitinib versus placebo and ruxolitinib in JAKi treatment-naïve patients with primary MF, post-polycythemia vera MF or post-essential thrombocythemia MF. The aim of this study is to evaluate the efficacy and safety of pelabresib in combination with ruxolitinib. Here we report updates from a recent protocol amendment. The MANIFEST-2 study schema is shown in Figure 1. Key eligibility criteria include a Dynamic International Prognostic Scoring System (DIPSS) score of Intermediate-1 or higher, platelet count ≥100 × 10^9/L, spleen volume ≥450 cc by computerized tomography or magnetic resonance imaging, ≥2 symptoms with an average score ≥3 or a Total Symptom Score (TSS) of ≥10 using the Myelofibrosis Symptom Assessment Form v4.0, peripheral blast count <5% and Eastern Cooperative Oncology Group performance status ≤2.
Patient randomization will be stratified by DIPSS risk category (Intermediate-1 vs Intermediate-2 vs High), platelet count (>200 × 10^9/L vs 100–200 × 10^9/L) and spleen volume (≥1800 cm^3 vs <1800 cm^3). Double-blind treatment (pelabresib or matching placebo) will be administered once daily for 14 consecutive days, followed by a 7 day break, which is considered one cycle of treatment. Ruxolitinib will be administered twice daily for all 21 days of the cycle. The primary endpoint is SVR35 response (≥35% reduction in spleen volume from baseline) at Week 24, and the key secondary endpoint is TSS50 response (≥50% reduction in TSS from baseline) at Week 24. Other secondary endpoints include safety, pharmacokinetics, changes in bone marrow fibrosis, duration of SVR35 response, duration of TSS50 response, progression-free survival, overall survival, conversion from transfusion dependence to independence and rate of red blood cell transfusion for the first 24 weeks. Study recruitment is ongoing; 400 patients (200 per arm) from North America, Europe, Asia and Australia will be enrolled. The study opened for enrollment in November 2020. MANIFEST-2 was initiated based on data from the ongoing Phase 2 MANIFEST study with the aim of assessing the efficacy and safety of pelabresib and ruxolitinib in JAKi treatment-naïve patients with MF. MANIFEST-2 is currently open for enrollment.
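The two Week 24 endpoint definitions above reduce to simple threshold checks on per-patient baseline and Week 24 measurements; a minimal sketch follows, in which the example patient values are illustrative, not trial data:

```python
# SVR35: >=35% reduction in spleen volume from baseline at Week 24.
# TSS50: >=50% reduction in Total Symptom Score from baseline at Week 24.
def svr35(baseline_cc: float, week24_cc: float) -> bool:
    """True if spleen volume fell by at least 35% from baseline."""
    return (baseline_cc - week24_cc) / baseline_cc >= 0.35

def tss50(baseline_tss: float, week24_tss: float) -> bool:
    """True if Total Symptom Score fell by at least 50% from baseline."""
    return (baseline_tss - week24_tss) / baseline_tss >= 0.50

# Illustrative patient: spleen 1800 cc -> 1100 cc, TSS 20 -> 9.
print(svr35(1800, 1100), tss50(20, 9))  # True True
```

The primary and key secondary analyses then compare the proportion of responders under each definition between the pelabresib and placebo arms.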

Keywords: CPI-0610, JAKi treatment-naïve, MANIFEST-2, myelofibrosis, pelabresib

Procedia PDF Downloads 201
1 Women in Malaysia: Exploring the Democratic Space in Politics

Authors: Garima Sarkar

Abstract:

The main purpose of the present paper is to investigate the development and progress achieved by women in the decision-making sphere and to assess the level of their political participation in the Parliamentary Elections of Malaysia and their status in the overall Malaysian political domain. The paper also focuses on the role and status of women in the major political parties of the state, both those in power and those in opposition. The primary objective of the study is to focus on the major hindrances and social malpractices faced by women, as well as Muslim women’s access to justice in Malaysia. It also demonstrates the linkages between national policy initiatives and the advancement of women in various areas, such as economics, health, employment, politics, power-sharing, social development and law, and, most importantly, evaluates their status within the dominant religion of the nation. In Malaysia, women’s political participation is being challenged from every corner of society. A high percentage of women are educated and form a significant part of the labor force in present-day Malaysia, employed in manufacturing, retail trade, hotels and restaurants, agriculture and other sectors. Women today make up almost half of the population and exceed boys in the tertiary sector by a ratio of 80:20. Despite these achievements, however, women’s labor force engagement remains confined to ‘traditional women’s occupations’, such as primary school teaching, data entry, organizing polls during elections and motivating other, less politically engaged women to cast their votes. In the political arena, the past few General Elections in Malaysia clearly exhibited only a slight change in the proportion of women Members of Parliament, from 10.6% (20 of 193 Parliamentary seats in 1999) to 10.5% (23 of 219 Parliamentary seats in 2004). 
Amidst the political posturing surrounding Malaysia’s 2013 General Election, women’s political participation remains a prime concern. It is evident that while much of the attention given to women revolves around charitable assistance, they are far less likely to be portrayed as active participants in electoral politics and governance. According to the electoral roll for the third quarter of 2012, 6,578,916 women are registered as voters, representing 50.2% of all registered voters. However, this parity in voter registration is not reflected in the number of elected representatives at the Parliamentary level: only 10.4% of sitting Members of Parliament are women. Women’s participation in the legislative and executive branches is important, since their presence brings the spotlight squarely onto issues that have historically been neglected and overlooked. In the 2013 General Elections in Malaysia, only two of 35 full ministerial positions, or 5.7%, were filled by women. In each of the 2009, 2010 and 2013 Cabinets, there have been only two women ministers, and this number was briefly reduced to one when the Prime Minister appointed himself placeholder at the Ministry of Women, Family and Community Development. In the recent past, in its Election Manifesto, Barisan Nasional made a pledge of ‘increasing the number of women participating in national decision-making processes’. Even after such pledges, the Malaysian leadership has failed to mirror a strong presence of women in leadership positions in public life, which primarily includes politics, the judiciary and business. Various gender-sensitive groups have strongly urged political parties to nominate more women as candidates to contest elections at both the Parliamentary and State levels. The democratization process will never be truly democratic without a proper gender agenda and representation. 
Although Malaysia signed the Beijing Platform for Action document in 1995, the state has a long way to go in enhancing the participation of women in every segment of Malaysian political, economic and cultural life. Women’s representation in decision-making bodies remains a small percentage compared with the 30% target set by the Beijing Platform for Action. Thus, democratization in terms of the representation of women in leadership and decision-making positions or bodies is essential, since it is a move toward a qualitative transformation of women’s role in shaping national decision-making processes. The democratization process must ensure women’s full participation, and their developmental goals must be included in the process of formulating and shaping national development goals.

Keywords: women, gender equality, Islam, democratization, political representation, Parliament

Procedia PDF Downloads 261