Search results for: Schiff base complex
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7208

818 The Role of Establishing Zakat-Based Finance in Alleviating Poverty in the Muslim World

Authors: Khan Md. Abdus Subhan, Rabeya Bushra

Abstract:

The management of Intellectual Property (IP) in museums can be complex and challenging, as it requires balancing access and control. On the one hand, museums must ensure that they have the necessary permissions to display works in their collections and make them accessible to the public. On the other hand, they must also protect the rights of creators and owners of works and ensure that they are not infringing on IP rights. Intellectual property has become an increasingly important aspect of museum operations in the digital age. Museums hold a vast array of cultural assets in their collections, many of which have significant value as IP assets. The balanced management of IP in museums can help generate additional revenue and promote cultural heritage while also protecting the rights of the museum and its collections. Digital technologies have greatly impacted the way museums manage IP, providing new opportunities for revenue generation through e-commerce and licensing while also presenting new challenges related to IP protection and management. Museums must take a comprehensive approach to IP management, leveraging digital technologies, protecting IP rights, and engaging in licensing and e-commerce activities to maximize income and support national economies through the strong management of cultural institutions. Overall, the balanced management of IP in museums is crucial for ensuring the sustainability of museum operations and for preserving cultural heritage for future generations. By taking a balanced approach to identifying museum IP assets, museums can generate revenues and secure their financial sustainability to ensure the long-term preservation of their cultural heritage. We can divide IP assets in museums into two kinds: collection IP and museum-generated IP. Certain museums become confused and lose sight of their mission when trying to leverage collections-based IP.
This was the case at the German State Museum in Berlin, when the museum made 100 replicas of the Nefertiti bust, marked under each replica that all rights were reserved to the Berlin Museum, and issued a certificate to prevent any person or institution from reproducing a replica of this bust. The implications of IP in museums are far-reaching and can have significant impacts on the preservation of cultural heritage, the dissemination of information, and the development of educational programs. As such, it is important for museums to have a comprehensive understanding of IP laws and regulations and to manage IP properly to avoid legal liability, damage to reputation, and loss of revenue. The research aims to highlight the importance and role of intellectual property in museums and provide some illustrative examples of this.

Keywords: zakat, economic development, Muslim world, poverty alleviation.

Procedia PDF Downloads 46
817 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
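For readers unfamiliar with Fourier neural operators, the core idea of a Fourier layer (transform to spectral space, keep and rescale a few low-frequency modes, transform back) and the implicit, weight-shared stacking used in IFNO-style models can be sketched in a few lines. The sketch below is a minimal 1D NumPy illustration with fixed rather than learned weights; the actual IU-FNO is a trained 3D model with a U-Net branch, which this does not reproduce.

```python
import numpy as np

def spectral_layer(u, weights, modes):
    """One Fourier layer: forward FFT, filter the low modes, inverse FFT.

    u: real 1D field sampled on a periodic grid.
    weights: complex multipliers for the retained low-frequency modes
             (learned parameters in a real FNO; fixed here).
    modes: number of low-frequency Fourier modes kept (spectral truncation).
    """
    u_hat = np.fft.rfft(u)                              # to spectral space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights[:modes]   # spectral filter
    return np.fft.irfft(out_hat, n=u.size)              # back to physical space

def implicit_fourier_layer(u, weights, modes, n_iters=4):
    """Implicit (recurrent) stacking: the same layer is applied repeatedly
    with shared weights, as in IFNO/IU-FNO-style architectures."""
    v = u
    for _ in range(n_iters):
        v = v + spectral_layer(v, weights, modes)       # residual update
    return v

grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u0 = np.sin(grid) + 0.1 * np.sin(8 * grid)              # toy initial field
w = np.ones(64, dtype=complex)                          # placeholder weights
u1 = implicit_fourier_layer(u0, w, modes=4)
```

Because the spectral filter discards high-frequency modes, the recurrent layer acts on the large-scale content of the field, which is why the U-Net branch is added in the full model to recover small-scale structures.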

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 74
816 The Impact of CSR Satisfaction on Employee Commitment

Authors: Silke Bustamante, Andrea Pelzeter, Andreas Deckmann, Rudi Ehlscheidt, Franziska Freudenberger

Abstract:

Many companies increasingly seek to enhance their attractiveness as an employer in order to retain their employees. At the same time, corporate responsibility for social and ecological issues seems to become a more important part of an attractive employer brand. It enables the company to match the values and expectations of its members, to signal fairness towards them and to increase its brand potential for positive psychological identification on the employees’ side. In the last decade, several empirical studies have focused on this relationship, confirming a positive effect of employees’ CSR perception on their affective organizational commitment. The current paper aims to take a slightly different view by analyzing the impact of another factor on commitment: the employee’s weighted satisfaction with the employer's CSR. For that purpose, it is assumed that commitment levels are rather a result of the fulfillment or disappointment of expectations. Hence, instead of merely asking how CSR perception affects commitment, a more complex independent variable is taken into account: a weighted satisfaction construct that summarizes two different factors. Therefore, the individual level of commitment contingent on CSR is conceptualized as a function of two psychological processes: (1) the individual significance that an employee ascribes to specific employer attributes and (2) the individual satisfaction based on the fulfillment of expectations that rely on preceding perceptions of employer attributes. The results presented are based on a quantitative survey that was undertaken among employees of the German service sector. Conceptually, a five-dimensional CSR construct (ecology, employees, marketplace, society and corporate governance) and a two-dimensional non-CSR construct (company and workplace) were applied to differentiate employer characteristics. (1) Respondents were asked to indicate the importance of different facets of CSR-related and non-CSR-related employer attributes.
By means of a conjoint analysis, the relative importance of each employer attribute was calculated from the data. (2) In addition to this, participants stated their level of satisfaction with specific employer attributes. Both indications were merged into individually weighted satisfaction indexes on the seven dimensions of employer characteristics. The affective organizational commitment of employees (the dependent variable) was gathered by applying the established 15-item Organizational Commitment Questionnaire (OCQ). The findings related to the relationship between satisfaction and commitment will be presented. Furthermore, the question of how important satisfaction with CSR is, relative to satisfaction with other attributes of the company, in the creation of commitment will be addressed. Practical as well as scientific implications will be discussed, especially with reference to previous results that focused on CSR perception as a commitment driver.
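The weighted satisfaction construct described above can be illustrated with a small sketch: conjoint-derived relative importances are normalized and multiplied by stated satisfaction scores, one index per employer-attribute dimension. The dimension names and numbers below are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical attribute dimensions (5 CSR + 2 non-CSR), per the study design.
dims = ["ecology", "employees", "marketplace", "society",
        "governance", "company", "workplace"]

def weighted_satisfaction(importance, satisfaction):
    """Combine conjoint-derived relative importances with stated
    satisfaction scores into per-dimension weighted satisfaction indexes."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()                       # normalize relative importances
    s = np.asarray(satisfaction, dtype=float)
    return w * s                          # individually weighted indexes

imp = [0.20, 0.25, 0.10, 0.10, 0.05, 0.15, 0.15]   # sums to 1 after normalizing
sat = [4, 5, 3, 3, 2, 4, 5]                         # e.g. 1-5 Likert ratings
idx = weighted_satisfaction(imp, sat)
```

The seven resulting indexes would then serve as independent variables in a regression on the OCQ commitment score.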

Keywords: corporate social responsibility, organizational commitment, employee attitudes/satisfaction, employee expectations, employer brand

Procedia PDF Downloads 269
815 Ambivalence, Denial, and Adaptive Responses to Vulnerable Suspects in Police Custody: The New Limits of the Sovereign State

Authors: Faye Cosgrove, Donna Peacock

Abstract:

This paper examines current state strategies for dealing with vulnerable people in police custody and identifies the underpinning discourses and practices which inform these strategies. It has previously been argued that the state has utilised contradictory and conflicting responses to the control of crime, by employing opposing strategies of denial and adaptation in order to simultaneously both display sovereignty and disclaim responsibility. This paper argues that these contradictory strategies are still being employed in contemporary criminal justice, although the focus and the purpose have now shifted. The focus is upon the ‘vulnerable’ suspect, whose social identity is as incongruous, complex and contradictory as his social environment, and the purpose is to redirect attention away from negative state practices, whilst simultaneously displaying a compassionate and benevolent countenance in order to appeal to the voting public. The findings presented here result from intensive qualitative research with police officers, with health care professionals, and with civilian volunteers who work within police custodial environments. The data has been gathered over a three-year period and includes observational and interview data which has been thematically analysed to expose the underpinning mechanisms from which the properties of the system emerge. What is revealed is evidence of contemporary state practices of denial relating to the harms of austerity and the structural relations of vulnerability, whilst simultaneously adapting through processes of ‘othering’ of the vulnerable, ‘responsibilisation’ of citizens, defining deviance down through diversionary practices, and managing success through redefining the aims of the system. The ‘vulnerable’ suspect is subject to individual pathologising, and yet the nature of risk is aggregated. 
‘Vulnerable’ suspects are supported in police custody by private citizens, by multi-agency partnerships, and by for-profit organisations, while the state seeks to collate and control services, and thereby to retain a veneer of control. Late modern ambivalence to crime control and the associated contradictory practices of abjuration and adjustment have extended to state responses to vulnerable suspects. The support available in the custody environment operates to control and minimise operational and procedural risk, rather than for the welfare of the detained person, and in fact, the support available is discovered to be detrimental to the very people that it claims to benefit. The ‘vulnerable’ suspect is now subject to the bifurcated logics employed at the new limits of the sovereign state.

Keywords: custody, policing, sovereign state, vulnerability

Procedia PDF Downloads 169
814 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the number of input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression techniques using PTC Mathcad Prime 4.0 are used to analyze and determine the exact relationships between the dependent parameter, which is SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1 containing three regression coefficients for vertical wells; multivariable linear regression model 2 containing four regression coefficients for deviated wells; and a multivariable polynomial quadratic regression model containing six regression coefficients for both vertical and deviated wells.
Although the linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared to the linear regression model 1 (with three coefficients), the former did not add significant improvements over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without a significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, although the multivariable polynomial quadratic regression model gave the best and most accurate results. The development of these models is useful not only to monitor and predict, with accuracy, the values of SPP but also to check early on the integrity of the well hydraulics and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
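A multivariable polynomial quadratic regression of the kind described (six coefficients: intercept, two linear terms, two squared terms, one cross term) can be fitted by ordinary least squares. The sketch below is an illustrative two-predictor version on synthetic data, not the authors' Mathcad implementation or their well data.

```python
import numpy as np

def fit_quadratic(X, y):
    """Fit y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    (six regression coefficients) by ordinary least squares."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(50, 2))        # e.g. dimensionless RPMd, TRQd
true = np.array([1.0, 2.0, -0.5, 0.3, 0.1, 0.2])  # coefficients to recover
A = np.column_stack([np.ones(50), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
y = A @ true                                    # noiseless synthetic response
coef = fit_quadratic(X, y)
```

With noiseless data and a full-rank design, least squares recovers the generating coefficients exactly; real well data would add residual scatter and the R² values reported in the paper.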

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 84
813 Exploring the Current Practice of Integrating Sustainability into the Social Studies and Citizenship Education Curriculum in the Saudi Educational Context

Authors: Aiydh Aljeddani, Fran Martin

Abstract:

The study mainly aims at exploring and understanding the current contribution of the social studies and citizenship education curriculum to the sustainability literacy and competency of ninth and tenth grade students in the Saudi general education context. This study stems from a need for research in general education contexts in order to prepare future graduate students who possess the fundamental elements of education for sustainable development. To the best of our knowledge, the literature on education for sustainable development reveals that little research has been conducted so far in general education contexts, and this study will add new knowledge to the literature. The study is interpretive in nature and employs a qualitative case study approach and ethnographic methodologies to understand this complex educational phenomenon in depth. 167 participants from six general education schools took part in this study: 25 teachers and 142 students. Document analysis, semi-structured interviews, the nominal group technique, and passive participant observation were used to gather the data for this study. The outcomes of the study showed the keenness of the Saudi government to promote and raise awareness of education for sustainable development among its younger generation via a sustainable-development-promoting curriculum. However, applying this vision in a real school setting, particularly via the social studies and citizenship education curriculum in grades nine and ten, has been challenging for different reasons, as revealed by this study. First, incorporating sustainability in the social studies and citizenship education curriculum in the Saudi ninth and tenth grades is based on the vision of the Saudi government, but the ministry of education’s rules and regulations do not support it.
Moreover, the circulars issued by the ministry are also not supportive of teachers' and students' efforts to implement a sustainable development education curriculum. Second, teachers, as members of this community who play a significant role in achieving the objectives of incorporating sustainability, are often seen as technicians and not as professional human beings. They are confined to the curriculum and the classroom and stripped of their willpower by the school management and the educational administration. The subjects, who are the students here, are neither prepared nor guided to achieve the objects. In addition, the tools mediating between subjects and objects are not convenient. There were some major challenges regarding the contradictions in incorporating sustainability processes, such as demanding creativity from a teacher who is overloaded with tasks irrelevant to teaching, and teachers' training programs not meeting the teachers' training needs.

Keywords: practice, integrating sustainability, curriculum, educational context

Procedia PDF Downloads 394
812 Unlocking Synergy: Exploring the Impact of Integrating Knowledge Management and Competitive Intelligence for Synergistic Advantage for Efficient, Inclusive and Optimum Organizational Performance

Authors: Godian Asami Mabindah

Abstract:

The convergence of knowledge management (KM) and competitive intelligence (CI) has gained significant attention in recent years as organizations seek to enhance their competitive advantage in an increasingly complex and dynamic business environment. This research study aims to explore and understand the synergistic relationship between KM and CI and its impact on organizational performance. By investigating how the integration of KM and CI practices can contribute to decision-making, innovation, and competitive advantage, this study seeks to unlock the potential benefits and challenges associated with this integration. The research employs a mixed-methods approach to gather comprehensive data. A quantitative analysis is conducted using survey data collected from a diverse sample of organizations across different industries. The survey measures the extent of integration between KM and CI practices and examines the perceived benefits and challenges associated with this integration. Additionally, qualitative interviews are conducted with key organizational stakeholders to gain deeper insights into their experiences, perspectives, and best practices regarding the synergistic relationship. The findings of this study are expected to reveal several significant outcomes. Firstly, it is anticipated that organizations that effectively integrate KM and CI practices will outperform those that treat them as independent functions. The study aims to highlight the positive impact of this integration on decision-making, innovation, organizational learning, and competitive advantage. Furthermore, the research aims to identify critical success factors and enablers for achieving constructive interaction between KM and CI, such as leadership support, culture, technology infrastructure, and knowledge-sharing mechanisms. The implications of this research are far-reaching. 
Organizations can leverage the findings to develop strategies and practices that facilitate the integration of KM and CI, leading to enhanced competitive intelligence capabilities and improved knowledge management processes. Additionally, the research contributes to the academic literature by providing a comprehensive understanding of the synergistic relationship between KM and CI and proposing a conceptual framework that can guide future research in this area. By exploring the synergies between KM and CI, this study seeks to help organizations harness their collective power to gain a competitive edge in today's dynamic business landscape. The research provides practical insights and guidelines for organizations to effectively integrate KM and CI practices, leading to improved decision-making, innovation, and overall organizational performance.

Keywords: competitive intelligence, knowledge management, organizational performance, inclusivity, optimum performance

Procedia PDF Downloads 93
811 Contribution to the Dimensioning of the Energy Dissipation Basin

Authors: M. Aouimeur

Abstract:

The environmental risks of a dam, and particularly the security of the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing become more and more indispensable. The definition of the "vulnerability" concept can help in controlling the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management. The social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of extreme floods at the site of the dam can cause the rupture of this structure and important damage downstream of the dam. The river bed could be damaged by erosion if it is not well protected. We may also encounter scouring and flooding problems in the area downstream of the dam. Therefore, the protection of the dam is crucial. It must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream of the dam. It dissipates the potential energy created by the dam as the extreme flood passes over the weir, regulates naturally and more safely the discharge or the elevation of the water level over the crest of the weir, and reduces the flow velocity downstream of the dam to match that of the river bed. The problem of dimensioning a classic dissipation basin lies in determining the parameters necessary for sizing this structure. This communication presents a simple, fast and complete graphical method, and a methodology which determines the main features of the hydraulic jump, the parameters necessary for sizing the classic dissipation basin.
This graphical method takes into account the constraints imposed by the reality of the terrain or by practice, such as those related to the topography of the site, the preservation of the environmental equilibrium, and technical and economic considerations. The methodology is to impose the head loss DH dissipated by the hydraulic jump as a hypothesis (free design) in order to determine all the other parameters of the classic dissipation basin. The imposed head loss DH can be set equal to a selected value or to a certain percentage of the upstream total head created by the dam. With the dimensionless parameter DH+ = DH/k (k: critical depth), the graphical representation developed here allows the other parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for dimensioning the classic dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the graphical method developed.
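The hydraulic-jump quantities such a graphical method is built on follow from the standard Belanger relations: the sequent depth from the upstream Froude number, and the head loss from the two conjugate depths. The sketch below evaluates them for an assumed upstream depth and velocity; the numerical values are illustrative, not taken from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def conjugate_depth(y1, fr1):
    """Sequent (downstream) depth of a hydraulic jump from the
    Belanger equation: y2 = (y1/2) * (sqrt(1 + 8*Fr1^2) - 1)."""
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1**2) - 1.0)

def head_loss(y1, y2):
    """Energy dissipated across the jump: DH = (y2 - y1)^3 / (4*y1*y2)."""
    return (y2 - y1) ** 3 / (4.0 * y1 * y2)

y1, v1 = 0.5, 12.0                    # assumed upstream depth (m), velocity (m/s)
fr1 = v1 / math.sqrt(G * y1)          # supercritical Froude number (> 1)
y2 = conjugate_depth(y1, fr1)         # downstream conjugate depth
dh = head_loss(y1, y2)                # head loss dissipated by the jump
```

In the free-design methodology above, one would instead fix DH (or DH+ = DH/k) and read the remaining jump characteristics from the graphical representation.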

Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment

Procedia PDF Downloads 584
810 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car

Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga

Abstract:

Starting from 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Considering that, further optimization measures on the diesel cycle will be necessary in order to reduce fuel consumption and emissions while keeping performance at least adequate. The present article deals with charge air cooling (CAC) on the basis of a diesel passenger car model in a 0D/1D working-process calculation environment. The considered engine is a 2.4 litre EURO VI diesel engine with a variable geometry turbocharger (VGT) and low-pressure exhaust gas recirculation (LP EGR). The object of study was the impact of charge air cooling on the engine working process at constant boundary conditions, which was investigated with an available and validated engine model in AVL BOOST. Part load was realized with constant power and NOx emissions, whereas full load was accomplished with a lambda control in order to obtain maximum engine performance. The informative results were used to implement a simulation model in Matlab/Simulink, which was further integrated into a full vehicle simulation environment via coupling with ICOS (Independent Co-Simulation Platform). Next, the dynamic engine behavior was validated and modified with load steps taken from the engine test bed. Due to the modular setup of the co-simulation, different CAC models could be simulated quickly, with their different influences on the working process. In doing so, a new cooler variant does not need to be reproduced and implemented in the primary simulation model environment but is implemented quickly and easily as an independent component in the simulation entity. By means of the association of the engine model, the longitudinal dynamics vehicle model and different CAC models (air/air and water/air variants) in both steady-state and transient operational modes, statements are gained regarding fuel consumption, NOx emissions and power behavior.
The fact that a complex engine model is no longer needed is very advantageous for the overall simulation effort. Besides the simulations with the demonstrator engine mentioned above, several experimental investigations were also conducted on the engine test bench. In particular, a standard CAC was compared with an intake-manifold-integrated CAC. Simulative as well as experimental tests showed benefits for the water/air CAC variant (on the test bed, especially the intake-manifold-integrated variant). The benefits are illustrated by a reduced pressure loss and a gain in air efficiency and CAC efficiency, all of which lead to minimized emissions and fuel consumption in stationary and transient operation.
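Charge-air-cooler comparisons of the kind described are commonly expressed via cooler effectiveness, the achieved fraction of the maximum possible temperature drop on the charge-air side. The operating-point numbers below are hypothetical, chosen only to illustrate why a coolant-fed water/air CAC can reach higher effectiveness even with a warmer heat sink than ambient air.

```python
def cac_effectiveness(t_in, t_out, t_coolant):
    """Charge-air-cooler effectiveness: fraction of the maximum possible
    temperature drop actually achieved,
    eps = (T_in - T_out) / (T_in - T_coolant)."""
    return (t_in - t_out) / (t_in - t_coolant)

# Hypothetical operating point, temperatures in deg C (illustrative only):
eps_air = cac_effectiveness(t_in=180.0, t_out=60.0, t_coolant=30.0)    # ambient-air sink
eps_water = cac_effectiveness(t_in=180.0, t_out=55.0, t_coolant=45.0)  # coolant-loop sink
```

The water/air variant's compact, liquid-coupled core can achieve a lower outlet temperature and lower pressure loss, which is what shows up as the efficiency gains reported above.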

Keywords: air/water charge air cooler, co-simulation, diesel working process, EURO VI, fuel consumption

Procedia PDF Downloads 271
809 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico

Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba

Abstract:

In the last twenty years, the academic and scientific literature has focused on understanding the processes and factors of coastal social-ecological systems' vulnerability and resilience. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework of the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery and learning). In this way, we demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that household capitals diminish the damage, and exposure sets the thresholds on the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of household adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of capital damage, and households with absorptive and external response capacity elements absorbed the impact of floods in comparison with households without these elements. At the recovery phase, households with absorptive and external response capacity showed a faster recovery of their capitals; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, so the probability of recovering the household finances over a longer time increases.
At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of suffering less damage to physical capital; however, it is not very relevant. In conclusion, resilience is an outcome but also a core of attributes that interacts with vulnerability along the adaptive cycle and the phases of the disaster process. Absorptive capacity can diminish the damage caused by floods; however, when exposure exceeds thresholds, both absorptive and external response capacity are not enough. In the same way, absorptive and external response capacity diminish the recovery time of capitals, but the damage sets the thresholds beyond which households are not capable of recovering their capitals.
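An ordinal logistic (proportional-odds) model of the kind used here predicts cumulative probabilities of ordered outcome levels, such as damage categories, from household covariates. The sketch below evaluates such a model for hypothetical coefficients and cut-points; the predictors and numbers are placeholders, not the study's fitted estimates.

```python
import numpy as np

def ordinal_probs(x, beta, cuts):
    """Proportional-odds model: P(Y <= k | x) = sigmoid(cut_k - x.beta).
    Category probabilities are differences of adjacent cumulative probabilities."""
    eta = np.dot(x, beta)                                   # linear predictor
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cuts) - eta)))   # cumulative probs
    cum = np.concatenate([cum, [1.0]])                      # P(Y <= top) = 1
    return np.diff(np.concatenate([[0.0], cum]))            # per-category probs

# Hypothetical predictors: exposure, absorptive capacity, external response capacity
x = np.array([1.2, 0.8, 0.5])
beta = np.array([0.9, -0.7, -0.4])   # exposure raises damage; capacities lower it
cuts = [-0.5, 0.8]                   # two cut-points -> 3 ordered damage levels
p = ordinal_probs(x, beta, cuts)     # probability of low / medium / high damage
```

The signs chosen for `beta` mirror the qualitative claims above: higher exposure shifts probability mass toward higher damage categories, while absorptive and external response capacity shift it toward lower ones.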

Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems

Procedia PDF Downloads 134
808 Genetic Dissection of QTLs in Intraspecific Hybrids Derived from Muskmelon (Cucumis melo L.) and Mangalore Melon (Cucumis melo var. acidulus) for Shelf Life and Fruit Quality Traits

Authors: Virupakshi Hiremata, Ratnakar M. Shet, Raghavendra Gunnaiah, Prashantha A.

Abstract:

Muskmelon is a health-beneficial and refreshing dessert vegetable with a low shelf life. Mangalore melon, a genetic homeologue of muskmelon, has a shelf life of more than six months and is mostly used for culinary purposes. Understanding the genetics of shelf life, yield and yield-related traits, and the identification of markers linked to such traits, is helpful in the transfer of extended shelf life from Mangalore melon to muskmelon through intraspecific hybridization. For QTL mapping, a population of 276 F2 individuals derived from the cross Arka Siri × SS-17 was genotyped with 40 polymorphic markers distributed across 12 chromosomes. The same population was also phenotyped for yield, shelf life and fruit quality traits. One major QTL (R2 > 10) and fourteen minor QTLs (R2 < 10), localized on four linkage groups and governing different traits, were mapped in the F2 population developed from the intraspecific cross, with a LOD > 5.5. The phenotypic variance explained by each locus varied from 3.63 to 10.97%. One QTL was linked to shelf life (qSHL-3-1), five QTLs were linked to TSS (qTSS-1-1, qTSS-3-3, qTSS-3-1, qTSS-3-2 and qTSS-1-2), two QTLs to flesh thickness (qFT-3-1 and qFT-3-2) and seven QTLs to fruit yield per vine (qFYV-3-1, qFYV-1-1, qFYV-3-1, qFYV1-1, qFYV-1-3, qFYV2-1 and qFYV6-1). QTL-flanking markers may be used for marker-assisted introgression of shelf life into muskmelon. Important QTLs will be further fine-mapped to identify candidate genes by QTLseq and RNAseq analysis. Fine-mapping of important quantitative trait loci (QTLs) holds immense promise for elucidating the genetic basis of complex traits. Leveraging advanced techniques like QTLseq and RNA sequencing (RNA seq) is crucial for this endeavor. QTLseq combines next-generation sequencing with traditional QTL mapping, enabling precise identification of genomic regions associated with traits of interest.
Through high-throughput sequencing, QTL-seq provides a detailed map of genetic variations linked to phenotypic variation, facilitating targeted investigations. Moreover, RNA-seq analysis offers a comprehensive view of gene expression patterns in response to specific traits or conditions. By comparing transcriptomes between contrasting phenotypes, RNA-seq aids in pinpointing candidate genes underlying QTL regions. Integrating QTL-seq with RNA-seq allows for a multi-dimensional approach, coupling genetic variation with gene expression dynamics.
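As a concrete illustration of the mapping statistic used above, a single-marker LOD score can be computed as a likelihood ratio comparing a marker regression against a mean-only null model. The sketch below uses simulated F2 genotypes and an invented marker effect; the numbers are illustrative assumptions, not the study's data.

```python
import numpy as np

def lod_single_marker(genotypes, phenotypes):
    """Single-marker LOD score via linear regression:
    LOD = (n/2) * log10(RSS_null / RSS_marker)."""
    n = len(phenotypes)
    # Null model: phenotype mean only
    rss0 = np.sum((phenotypes - phenotypes.mean()) ** 2)
    # Marker model: regression on additively coded genotypes (0/1/2)
    X = np.column_stack([np.ones(n), genotypes])
    beta, *_ = np.linalg.lstsq(X, phenotypes, rcond=None)
    rss1 = np.sum((phenotypes - X @ beta) ** 2)
    return (n / 2) * np.log10(rss0 / rss1)

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=276)                # F2 genotypes at one marker
pheno = 5.0 + 1.2 * geno + rng.normal(0, 1, 276)   # trait linked to the marker
print(round(lod_single_marker(geno, pheno), 1))
```

A locus would be declared at this marker only if the LOD exceeds the chosen threshold (5.5 in the abstract above); interval mapping refines this by scanning positions between markers.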

Keywords: QTL, shelf life, TSS, muskmelon and Mangalore melon

Procedia PDF Downloads 54
807 Testing of Canadian Integrated Healthcare and Social Services Initiatives with an Evidence-Based Case Definition for Healthcare and Social Services Integrations

Authors: S. Cheng, C. Catallo

Abstract:

Introduction: Canada's healthcare and social services systems are failing high-risk, vulnerable older adults. Care for vulnerable older Canadians (65 and older) is not optimal: it does not address their care needs using a holistic approach. Given the growing aging population, and given that the care needs of seniors with complex conditions are among the highest in Canada's health care system, there is a sense of urgency to optimize care. Integration of health and social services is an emerging trend in Canada compared to European countries, and there is no common, universal understanding of healthcare and social services integration within the country. Consequently, a clear understanding and definition of integrated health and social services are absent in Canada. Objectives: A study was undertaken to develop a case definition for integrated health and social care initiatives that serve older adults, which was then tested against three Canadian integrated initiatives. Methodology: A limited literature review, comprising both scientific and grey literature, was undertaken to identify common characteristics of integrated health and social care initiatives that serve older adults, in order to develop a case definition. Three Canadian integrated initiatives located in the province of Ontario were identified using an online search and a screening process. They were surveyed to determine if the literature-based integration definition applied to them. Results: The literature showed that there were 24 common healthcare and social services integration characteristics that could be categorized into ten themes: 1) patient-care approach; 2) program goals; 3) measurement; 4) service and care quality; 5) accountability and responsibility; 6) information sharing; 7) decision-making and problem-solving; 8) culture; 9) leadership; and 10) staff and professional interaction. 
The three initiatives showed agreement on all the integration characteristics except those associated with healthcare and social care professional interaction, collaborative leadership and shared culture. This disagreement may be due to several reasons, including the existing governance divide between the healthcare and social services sectors within the province of Ontario, which has created a ripple effect in how professions in the two sectors interact. In addition, the three initiatives may be at different levels of integration maturity, which may explain disagreement on the characteristics associated with leadership and culture. Conclusions: The development of a case definition for healthcare and social services integration that incorporates common integration characteristics can act as a useful instrument in identifying integrated healthcare and social services initiatives, particularly given the emerging and evolutionary state of this phenomenon within Canada.

Keywords: Canada, case definition, healthcare and social services integration, integration, seniors health, services delivery

Procedia PDF Downloads 160
806 Thorium Resources of Georgia – Is It Its Future Energy?

Authors: Avtandil Okrostsvaridze, Salome Gogoladze

Abstract:

In light of the exhaustion of hydrocarbon reserves, the search for new energy resources is a problem of vital importance for modern civilization. In a time of energy resource crisis, the radioactive element thorium (232Th) is considered a main energy resource for the future of our civilization. Modern industry uses thorium in high-temperature and high-tech tools, but the most important property of thorium is that, like uranium, it can be used as fuel in nuclear reactors. Thorium has a number of advantages compared to uranium: its concentration in the earth's crust is 4-5 times higher; extraction and enrichment of thorium are much cheaper; it is less radioactive; complete destruction of its waste products is possible; and thorium yields much more energy than uranium. Nowadays, developed countries, among them India and China, have started intensive work on the creation of thorium nuclear reactors and an intensive search for thorium reserves. It is not excluded that in the next 10 years these reactors will completely replace uranium reactors. Thorium ore mineralization is genetically related to alkaline-acidic magmatism, and thorium accumulations occur under both endogenic and exogenous conditions. Unfortunately, little is known about the reserves of this element in Georgia, as planned prospecting-exploration works for thorium have never been carried out here. 
Nevertheless, three ore occurrences of this element have been detected: 1) in the Greater Caucasus Kakheti segment, in the hydrothermally altered rocks of the Lower Jurassic clay-shales, where thorium concentrations varied between 51 and 3882 g/t; 2) in the eastern periphery of the Dzirula massif, in the hydrothermally altered rocks of the Cambrian quartz-diorite gneisses, where thorium concentrations varied between 117 and 266 g/t; 3) in the active contact zone of the Eocene volcanites and a syenitic intrusive in the Vakijvari ore field of the Guria region, where thorium concentrations varied between 185 and 428 g/t. In addition, the geological settings of the areas where thorium occurrences were fixed give a theoretical basis for the possible accumulation of thorium ores of practical importance. Besides, the Black Sea Guria region magnetite sand, which is transported from the Vakijvari ore field, should contain significant reserves of thorium. As the research shows, monazite (a thorium-containing mineral) is present in the magnetite in the form of the thinnest inclusions. In world-class thorium deposits, concentrations of this element vary within the limits of 50-200 g/t. Accordingly, on the basis of these data, the thorium resources found in Georgia should be considered prospective ore deposits. Generally, we consider that a complex investigation of thorium should be included in the sphere of strategic interests of the state, because the future energy of Georgia will probably be thorium.

Keywords: future energy, Georgia, ore field, thorium

Procedia PDF Downloads 494
805 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples for the experimental studies of the n-AMg6/C60 nanocomposite were obtained by grinding AMg6 polycrystalline alloy with 0.3 wt % of C60 fullerite in a planetary mill in an argon atmosphere. The resulting product consisted of 200-500 micron agglomerates of nanoparticles. The X-ray coherent scattering (CSL) method showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature. The C60 fullerite additive inhibits recrystallization at grain boundaries. In the samples of the n-AMg6/C60 nanocomposite, the load curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample under a multi-cycle load-unloading process carried to failure. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample failed; the failure was brittle. Microhardness was measured before and after failure of the sample, and it was found that the load-unloading process led to an increase in microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated complex Ritec RAM-5000 SNAP SYSTEM. In the n-AMg6/C60 nanocomposite, the velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elastic coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. Studies of the nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading of the sample were carried out with the spectral method. 
At arbitrary values of the strain of the sample (up to its failure), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency f = 5 MHz of the acoustic wave was measured. Based on the results of these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The obtained results can be used in solid-state physics and materials science, and for the development of new techniques for nondestructive testing of structural materials using methods of nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project №14-22-00042).
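For readers unfamiliar with the spectral method, the quadratic nonlinearity parameter is commonly estimated from the measured harmonic amplitudes via the textbook relation β = 8·A2 / (A1²·k²·x), where k is the wavenumber of the fundamental and x the propagation distance. The sketch below applies this relation with invented values; the wave speed, amplitudes, and path length are assumptions for illustration, not the measured J69 data of the study.

```python
import numpy as np

def nonlinear_parameter(a1, a2, freq, wave_speed, distance):
    """Quadratic nonlinearity parameter beta = 8*A2 / (A1^2 * k^2 * x),
    with k = 2*pi*f / c for the fundamental at frequency f."""
    k = 2.0 * np.pi * freq / wave_speed
    return 8.0 * a2 / (a1 ** 2 * k ** 2 * distance)

# Illustrative numbers only: 5 MHz fundamental, assumed longitudinal wave
# speed, displacement amplitudes, and a 20 mm propagation path.
beta = nonlinear_parameter(a1=1.0e-9, a2=2.0e-12,
                           freq=5.0e6, wave_speed=6000.0, distance=0.02)
print(f"beta = {beta:.1f}")
```

In practice, β is often tracked only relatively (A2/A1² at fixed geometry), which is sufficient to follow damage accumulation under load-unloading cycles.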

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 152
804 A New Measurement for Assessing Constructivist Learning Features in Higher Education: Lifelong Learning in Applied Fields (LLAF) Tempus Project

Authors: Dorit Alt, Nirit Raichel

Abstract:

Although university teaching is claimed to have a special task to support students in adopting ways of thinking and producing new knowledge anchored in scientific inquiry practices, it is argued that students' habits of learning are still overwhelmingly skewed toward passive acquisition of knowledge from authority sources rather than from collaborative inquiry activities. This form of instruction is criticized for encouraging students to acquire inert knowledge that can be used in instructional settings at best, but cannot be transferred into real-life complex problem settings. In order to overcome this critical inadequacy between current educational goals and instructional methods, the LLAF consortium (including 16 members from 8 countries) aims to develop updated instructional practices that put a premium on adaptability to the emerging requirements of present society. LLAF has created a practical guide for teachers containing updated pedagogical strategies and assessment tools based on the constructivist approach to learning. This presentation is limited to teacher education only and to the contribution of the project in providing a scale designed to measure the extent to which constructivist activities are efficiently applied in the learning environment. A mixed-method approach was implemented in two phases to construct the scale. The first phase included a qualitative content analysis involving both deductive and inductive category applications of students' observations. The results foregrounded eight categories: knowledge construction, authenticity, multiple perspectives, prior knowledge, in-depth learning, teacher-student interaction, social interaction and cooperative dialogue. The students' descriptions of their classes were formulated as 36 items. The second phase employed structural equation modeling (SEM). 
The scale was submitted to 597 undergraduate students. The data showed sufficient goodness of fit to the structural model. This research extends the body of literature by adding a category of in-depth learning, which emerged from the content analysis. Moreover, the theoretical category of social activity has been extended to include two distinctive factors: cooperative dialogue and social interaction. Implications of these findings for the LLAF project are discussed.

Keywords: constructivist learning, higher education, mixed methodology, structural equation modeling

Procedia PDF Downloads 315
803 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. 
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operation, leading to better designs, higher oil recovery and greater economic return of future wells in the unconventional oil reserves.
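The clustering and dimensionality-reduction step described above can be sketched minimally with scikit-learn. The per-well feature names and values below are invented placeholders, not the study's database; the point is only the standardize → PCA → K-means pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical per-well features: porosity, stage count, proppant mass (t),
# lateral length (m) -- placeholders for real completion/geology attributes.
wells = rng.normal(size=(1000, 4)) * [0.02, 8.0, 500.0, 300.0] \
        + [0.08, 30.0, 4000.0, 2000.0]

X = StandardScaler().fit_transform(wells)        # put features on one scale
pcs = PCA(n_components=2).fit_transform(X)       # emphasize dominant variation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))                       # wells per cluster
```

Scaling before PCA matters here: without it, the large-magnitude features (proppant mass, lateral length) would dominate the principal components and the clusters.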

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 285
802 Evaluation of the Energy Performance and Emissions of an Aircraft Engine: J69 Using Fuel Blends of Jet A1 and Biodiesel

Authors: Gabriel Fernando Talero Rojas, Vladimir Silva Leal, Camilo Bayona-Roa, Juan Pava, Mauricio Lopez Gomez

Abstract:

The substitution of conventional aviation fuels with biomass-derived alternative fuels is an emerging field of study in aviation transport, mainly due to the sector's energy consumption, its contribution to global greenhouse gas (GHG) emissions, and fossil fuel price fluctuations. Nevertheless, several challenges remain, such as the biofuel production cost and its degradative effect on fuel systems, which alters operating safety. Moreover, experimentation on full-scale aeronautic turbines is expensive and complex, restricting most research to the testing of small-size turbojets, with a major absence of information regarding the effects on energy performance and emissions. The main purpose of the current study is to present the results of experimentation in a full-scale military turbojet engine J69-T-25A (presented in Fig. 1) with 640 kW of power rating, using blends of Jet A1 with oil palm biodiesel. The main findings are related to the thrust specific fuel consumption (TSFC), the engine global efficiency (η), the air/fuel ratio (AFR) and the volume fractions of O2, CO2, CO, and HC. Two fuels are used in the present study: a commercial Jet A1 and a Colombian palm oil biodiesel. The experimental plan is conducted using biodiesel volume contents (w_BD) from 0 % (B0) to 50 % (B50). The engine operating regimes are set to Idle, Cruise, and Take-off conditions. The turbojet engine J69 is used by the Colombian Air Force and is installed in a testing bench with the instrumentation that corresponds to the technical manual of the engine. Increasing w_BD from 0 % to 50 % reduces η by nearly 3.3 % and the thrust force by 26.6 % at the Idle regime. These variations are related to the reduction of the HHV_ad of the fuel blend. The evolved CO and HC tend to be reduced in all the operating conditions when increasing w_BD. Furthermore, a reduction of the atomization angle is presented in Fig. 2, indicating poor atomization in the fuel nozzle injectors when using a higher biodiesel content, as the viscosity of the fuel blend increases. An evolution of cloudiness is also observed during the shutdown procedure, as presented in Fig. 3a, particularly after 20 % of biodiesel content in the fuel blend. This promotes the contamination of some components of the combustion chamber of the J69 engine with soot and unburned matter (Fig. 3). Thus, biodiesel contents above 20 % are not recommended, in order to avoid a significant decrease of η and the thrust force. A more detailed examination of the mechanical wear of the main components of the engine is advised in further studies.
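The performance metrics reported above follow from simple definitions: TSFC is fuel mass flow per unit thrust, AFR is the ratio of air to fuel mass flow, and the heating value of a blend falls as biodiesel content rises. A minimal sketch with illustrative numbers; the heating values and flow rates below are typical textbook figures, not the measured J69 data.

```python
def blend_heating_value(hhv_fossil, hhv_bio, w_bd):
    """Heating value of a Jet A1/biodiesel blend (MJ/kg), volume-weighted
    and assuming similar densities -- a simplification for illustration."""
    return (1.0 - w_bd) * hhv_fossil + w_bd * hhv_bio

def tsfc(fuel_mass_flow, thrust):
    """Thrust specific fuel consumption in kg/(N*s)."""
    return fuel_mass_flow / thrust

def air_fuel_ratio(air_mass_flow, fuel_mass_flow):
    """AFR: air mass flow per unit fuel mass flow."""
    return air_mass_flow / fuel_mass_flow

# Illustrative values: ~46 MJ/kg for Jet A1, ~40 MJ/kg for palm biodiesel.
for w_bd in (0.0, 0.2, 0.5):
    hhv = blend_heating_value(46.0, 40.0, w_bd)
    print(f"B{int(w_bd * 100):<2} HHV = {hhv:.1f} MJ/kg")
print(f"TSFC = {tsfc(0.05, 2500.0):.2e} kg/(N*s)")
```

The falling blend heating value is the mechanism the abstract invokes: for the same fuel flow, a B50 blend delivers less chemical energy, so thrust drops and TSFC rises.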

Keywords: aviation, air to fuel ratio, biodiesel, energy performance, fuel atomization, gas turbine

Procedia PDF Downloads 110
801 Early Buddhist History in Architecture before Sui Dynasty

Authors: Yin Ruoxi

Abstract:

During the Eastern Han to Three Kingdoms period, Buddhism had not yet received comprehensive support from the ruling class, and its dissemination remained relatively limited. Based on existing evidence, Buddhist architecture was primarily concentrated in regions central to scripture translation and cultural exchange with the Western Regions, such as Luoyang, Pengcheng, and Guangling. The earliest Buddhist structures largely adhered to the traditional forms of ancient Indian architecture. The frequent wars of the late Western Jin and Sixteen Kingdoms periods compelled the Central Plains culture to interact with other civilizations. As a result, Buddhist architecture gradually integrated characteristics of Central Asian, ancient Indian, and native Chinese styles. In the Northern and Southern Dynasties, Buddhism gained formal support from rulers, leading to the establishment of numerous temples across the Central Plains. The prevalence of warfare, combined with the emergence of Wei-Jin reclusive thought and Buddhism’s own ascetic philosophy, gave rise to mountain temples. Additionally, the eastward spread of rock-cut cave architecture along the Silk Road accelerated the development of such mountain temples. Temple layouts also became increasingly complex with the deeper translation of Buddhist scriptures and the influence of traditional Chinese architectural concepts. From the earliest temples, where the only Buddhist structure was the temple itself, to layouts centered on the stupa with a "front stupa, rear hall" arrangement, and finally to Mahavira Halls becoming the sacred focal point, temple design evolved significantly. The grand halls eventually matched the scale of the central halls in imperial palaces, reflecting the growing deification of the Buddha in the public imagination. 
The multi-storied wooden pagoda exemplifies Buddhism’s remarkable adaptability during its early introduction to the Central Plains, while the dense-eaved pagoda represents a synthesis of Gandharan stupas, Central Asian temple shrines, ancient Indian devalaya, and Chinese multi-storied pavilions. This form demonstrates Buddhism’s ability to absorb features from diverse cultures during its dissemination. Through its continuous interaction with various cultures, Buddhist architecture achieved sustained development in both form and meaning, laying a solid foundation for the establishment and growth of Buddhism across different regions.

Keywords: Buddhism, Buddhist architecture, pagoda, temple, South Asian Buddhism, Chinese Buddhism

Procedia PDF Downloads 14
800 Semi-Autonomous Surgical Robot for Pedicle Screw Insertion on ex vivo Bovine Bone: Improved Workflow and Real-Time Process Monitoring

Authors: Robnier Reyes, Andrew J. P. Marques, Joel Ramjist, Chris R. Pasarikovski, Victor X. D. Yang

Abstract:

Over the past three decades, surgical robotic systems have demonstrated their ability to improve surgical outcomes. The LBR Med is a collaborative robotic arm that is meant to work with a surgeon to streamline surgical workflow. It has 7 degrees of freedom and thus can be easily oriented. Position and torque sensors at each joint allow it to maintain a position accuracy of 150 µm with real-time force and torque feedback, making it ideal for complex surgical procedures. Spinal fusion procedures involve the placement of as many as 20 pedicle screws, requiring a great deal of accuracy due to proximity to the spinal canal and surrounding vessels. Any deviation from the intended path can lead to major surgical complications. Assistive surgical robotic systems are meant to serve as collaborative devices easing the workload of the surgeon, thereby improving pedicle screw placement by mitigating fatigue-related inaccuracies. Moreover, robotic spinal systems have shown marked improvements over conventional freehand techniques in both screw placement accuracy and fusion quality, and have greatly reduced the need for screw revision, both intraoperatively and postoperatively. However, current assistive spinal fusion robots, such as the ROSA Spine, are limited in functionality to positioning surgical instruments. While they offer a small degree of improvement in pedicle screw placement accuracy, they do not alleviate surgeon fatigue, nor do they provide real-time force and torque feedback during screw insertion. We propose a semi-autonomous surgical robot workflow for spinal fusion where the surgeon guides the robot to its initial position and orientation, and the robot drives the pedicle screw accurately into the vertebra. Here, we demonstrate feasibility by inserting pedicle screws into ex vivo bovine rib bone. The robot monitors position, force and torque with respect to predefined values selected by the surgeon to ensure the highest possible spinal fusion quality. 
The workflow alleviates the strain on the surgeon by having the robot perform the screw placement, while the ability to monitor the process in real time keeps the surgeon in the system loop. The approach we have taken in terms of the level of autonomy for the robot reflects its ability to safely collaborate with the surgeon in the operating room without external navigation systems.

Keywords: ex vivo bovine bone, pedicle screw, surgical robot, surgical workflow

Procedia PDF Downloads 170
799 Quantifying Processes of Relating Skills in Learning: The Map of Dialogical Inquiry

Authors: Eunice Gan Ghee Wu, Marcus Goh Tian Xi, Alicia Chua Si Wen, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

The Map of Dialogical Inquiry provides a conceptual basis for learning processes. According to the Map, dialogical inquiry motivates complex thinking, dialogue, reflection, and learner agency. For instance, classrooms that incorporated dialogical inquiry enabled learners to construct more meaning in their learning, to engage in self-reflection, and to challenge their ideas with different perspectives. While the Map contributes to the psychology of learning, its qualitative approach makes it hard to track and compare learning processes over time for both teachers and learners, since a qualitative approach typically relies on open-ended responses, which can be time-consuming and resource-intensive to analyze. With these concerns in mind, the present research aimed to develop and validate a quantifiable measure for the Map. Specifically, the Map of Dialogical Inquiry reflects eight different learning processes and perspectives employed during a learner’s experience. With a focus on interpersonal and emotional learning processes, the purpose of the present study is to construct and validate a scale measuring the “Relating” aspect of learning. According to the Map, the Relating aspect of learning contains four conceptual components: using intuition and empathy, seeking personal meaning, building relationships and meaning with others, and liking stories and metaphors. All components have been shown to benefit learning in past research. This research began with a literature review with the goal of identifying relevant scales in the literature. These scales were used as a basis for item development, guided by the four conceptual dimensions of the “Relating” aspect of learning, resulting in a pool of 47 preliminary items. All items were then administered to 200 American participants via an online survey, along with other scales of learning. The dimensionality, reliability, and validity of the “Relating” scale were assessed. 
Data were submitted to a confirmatory factor analysis (CFA), revealing four distinct components and items. Items with lower factor loadings were removed in an iterative manner, resulting in 34 items in the final scale. CFA also revealed that the “Relating” scale was a four-factor model, following its four distinct components as described in the Map of Dialogical Inquiry. In sum, this research was able to develop a quantitative scale for the “Relating” aspect of the Map of Dialogical Inquiry. By representing learning as numbers, users, such as educators and learners, can better track, evaluate, and compare learning processes over time in an efficient manner. More broadly, this scale may also be used as a learning tool in lifelong learning.
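The iterative removal of low-loading items can be sketched with an exploratory stand-in for the CFA described above. The simulated items, the 0.40 loading cutoff, and the use of scikit-learn's FactorAnalysis (rather than a full SEM package) are all illustrative assumptions, not the study's procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def prune_items(z_scores, n_factors, min_loading=0.40):
    """Refit a factor model and drop the weakest item until every remaining
    item loads at least `min_loading` (in absolute value) on some factor."""
    keep = list(range(z_scores.shape[1]))
    while len(keep) > n_factors:
        fa = FactorAnalysis(n_components=n_factors, random_state=0)
        fa.fit(z_scores[:, keep])
        best = np.abs(fa.components_).max(axis=0)   # best loading per item
        worst = int(best.argmin())
        if best[worst] >= min_loading:
            break
        del keep[worst]
    return keep

# Simulated scale: items 0-3 load on factor 1, items 4-7 on factor 2,
# items 8-9 are pure noise and should be pruned.
rng = np.random.default_rng(2)
f = rng.normal(size=(500, 2))
loadings = np.zeros((2, 10))
loadings[0, :4] = 0.8
loadings[1, 4:8] = 0.8
items = f @ loadings + 0.6 * rng.normal(size=(500, 10))
items[:, 8:] = rng.normal(size=(500, 2))            # overwrite with noise
z = (items - items.mean(axis=0)) / items.std(axis=0)
print(prune_items(z, n_factors=2))
```

Removing one item at a time and refitting, as above, matters because loadings shift each time an item is dropped; deleting all weak items in one pass can discard items that would have loaded adequately in the reduced model.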

Keywords: lifelong learning, scale development, dialogical inquiry, relating, social and emotional learning, socio-affective intuition, empathy, narrative identity, perspective taking, self-disclosure

Procedia PDF Downloads 144
798 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, it is evident that there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools, before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used, as well as deeper discussions on each of the topics, allowing teams to adapt the process to their skills, preferences and product type. 
Teams found the solution very useful and intuitive and experienced significantly less confusion and mistakes with the process than teams who did not use it. Those with a design background found it especially useful for the engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as more examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools to those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 108
797 Association between Maternal Personality and Postnatal Mother-to-Infant Bonding

Authors: Tessa Sellis, Marike A. Wierda, Elke Tichelman, Mirjam T. Van Lohuizen, Marjolein Berger, François Schellevis, Claudi Bockting, Lilian Peters, Huib Burger

Abstract:

Introduction: Most women develop a healthy bond with their children; however, adequate mother-to-infant bonding cannot be taken for granted. Mother-to-infant bonding refers to the feelings and emotions experienced by the mother towards her child. It is an ongoing process that starts during pregnancy and develops during the first year postpartum, and likely throughout early childhood. The prevalence of inadequate bonding ranges from 7 to 11% in the first weeks postpartum. An impaired mother-to-infant bond can cause long-term complications for both mother and child. Very little research has been conducted on the direct relationship between the personality of the mother and mother-to-infant bonding. This study explores the associations between maternal personality and postnatal mother-to-infant bonding. The main hypothesis is that there is a relationship between neuroticism and mother-to-infant bonding. Methods: Data for this study were drawn from the Pregnancy Anxiety and Depression Study (2010-2014), which examined symptoms of and risk factors for anxiety or depression during pregnancy and the first year postpartum in 6220 pregnant women who received primary, secondary or tertiary care in the Netherlands. The study was expanded in 2015 to investigate postnatal mother-to-infant bonding. For the current research, 3836 participants were included. During the first trimester of gestation, baseline characteristics as well as personality were measured through online questionnaires. Personality was measured with the NEO Five-Factor Inventory (NEO-FFI), which covers the big five of personality (neuroticism, extraversion, openness, altruism and conscientiousness). Mother-to-infant bonding was measured postpartum with the Postpartum Bonding Questionnaire (PBQ). Univariate linear regression analysis was performed to estimate the associations. Results: 5% of the PBQ respondents reported impaired bonding. 
A statistically significant association was found between neuroticism and mother-to-infant bonding (p < .001): mothers scoring higher on neuroticism reported a lower score on mother-to-infant bonding. In addition, positive associations with mother-to-infant bonding were found for the personality traits extraversion (b = -.081), openness (b = -.014), altruism (b = -.067) and conscientiousness (b = -.060); note that higher PBQ scores indicate more impaired bonding, so negative coefficients on the PBQ correspond to better bonding. Discussion: This study is one of the first to demonstrate a direct association between the personality of the mother and mother-to-infant bonding. A statistically significant relationship has been found between neuroticism and mother-to-infant bonding; however, the percentage of variance predictable by a personality dimension is very small. This study has examined one part of the multi-factorial topic of mother-to-infant bonding and offers more insight into the rarely investigated and complex matter of mother-to-infant bonding. For midwives, it is important to recognize the risks for impaired bonding and subsequently improve policy for women at risk.
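As a minimal sketch of the statistical approach, the univariate linear regression used to estimate each association can be written out directly. The function name and the data below are invented for illustration and are not taken from the study:

```python
# Minimal sketch of univariate linear regression (ordinary least squares),
# as used in the study to estimate the association between one personality
# trait and PBQ bonding scores. All data here are invented for illustration.

def univariate_ols(x, y):
    """Fit y = a + b*x by least squares; return (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # sum of squared deviations
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical neuroticism scores (x) and PBQ scores (y); since higher PBQ
# indicates more impaired bonding, a positive slope here would mirror the
# reported negative association with bonding quality.
x = [10, 15, 20, 25, 30, 35]
y = [2, 3, 5, 6, 8, 9]
a, b = univariate_ols(x, y)
```

In the study itself, each of the five NEO-FFI dimensions would serve in turn as the predictor x.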

Keywords: mother-to-infant bonding, personality, postpartum, pregnancy

Procedia PDF Downloads 365
796 Influence of Microparticles in the Contact Region of Quartz Sand Grains: A Micro-Mechanical Experimental Study

Authors: Sathwik Sarvadevabhatla Kasyap, Kostas Senetakis

Abstract:

The mechanical behavior of geological materials is very complex, and this complexity is related to the discrete nature of soils and rocks. Characteristics of a material at the grain scale, such as particle size and shape, surface roughness and morphology, and particle contact interface, are critical to evaluate and better understand the behavior of discrete materials. This study investigates experimentally the micro-mechanical behavior of quartz sand grains with emphasis on the influence of the presence of microparticles in their contact region. The outputs of the study provide some fundamental insights into the contact mechanics behavior of artificially coated grains and can provide useful input parameters for the discrete element modeling (DEM) of soils. In nature, microparticles are commonly observed at the contact interfaces between real soil grains. This is usually the case in sand-silt and sand-clay mixtures, where the finer particles may create a coating on the surface of the coarser grains, altering in this way the micro-, and thus the macro-scale response of geological materials. In this study, the micro-mechanical behavior of Leighton Buzzard Sand (LBS) quartz grains, with interference of different microparticles at their contact interfaces, is studied in the laboratory using an advanced custom-built inter-particle loading apparatus. Special techniques were adopted to develop the coating on the surfaces of the quartz sand grains so as to establish repeatability of the coating technique. The characterization of the coated particle surfaces was based on element composition analyses, microscopic images, surface roughness measurements, and single particle crushing strength tests. The mechanical responses, such as normal and tangential load-displacement behavior, tangential stiffness behavior, and normal contact behavior under cyclic loading, were studied.
The behavior of coated LBS particles is compared among the different coating classes and with pure LBS (i.e., with the surface cleaned to remove any microparticles). The damage on the surface of the particles was analyzed using microscopic images. Extended displacements in both normal and tangential directions were observed for coated LBS particles due to the plastic nature of the coating material, and this varied with the amount of coating. The tangential displacement required to reach steady state was delayed due to the presence of microparticles in the contact region of grains under shearing. Increased tangential loads and coefficients of friction were observed for the coated grains in comparison to the uncoated quartz grains.

Keywords: contact interface, microparticles, micro-mechanical behavior, quartz sand

Procedia PDF Downloads 192
795 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment, reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts may concern the quantity of water available, the quality of the water used, or effects that are less direct and more complex to measure, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees that can impact the performance of plants; propose improvement solutions; help industrialists in their choice of location for a new plant; visualize possible interactions between companies in order to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to such data to be made explicit. Thus the system will have to be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to be able to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions.
In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of predicting the level of a river over a 7-day horizon. The management of water, and of the activities within the plants that depend on this resource, should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress and, on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
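The 7-day-horizon forecasting task can be sketched as a standard supervised reframing of the time series, where lagged daily levels predict the level a week ahead. This is an illustrative sketch only: the data are invented, and the authors' actual pipeline stores measurements in InfluxDB and follows the CRISP-DM methodology rather than this toy code:

```python
# Sketch: turn a daily river-level series into supervised (features, target)
# pairs for a 7-day-ahead forecast, as in the study's learning use case.
# Data and window sizes are invented for illustration.

def make_supervised(series, n_lags, horizon):
    """Predict the value at t + horizon from the n_lags most recent values
    up to and including t."""
    X, y = [], []
    for t in range(n_lags - 1, len(series) - horizon):
        X.append(series[t - n_lags + 1 : t + 1])  # lag window ending at t
        y.append(series[t + horizon])             # target, horizon days later
    return X, y

# Hypothetical daily river levels in meters.
levels = [1.0, 1.1, 1.2, 1.1, 1.0, 0.9, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3]
X, y = make_supervised(levels, n_lags=3, horizon=7)
```

Any regression model (linear, tree-based, etc.) could then be trained on (X, y); meteorological and physicochemical variables would enter as additional feature columns.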

Keywords: data mining, industry, machine learning, shortage, water resources

Procedia PDF Downloads 122
794 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as in many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and to accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. This approach, however, may require a modification of the meshes, such as the use of barycentric-refined grids in the case of Scott--Vogelius pairs. This strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which is left for future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott--Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
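The governing system considered can be sketched in a standard form; the paper's exact scaling and notation may differ. Here $u$ is the velocity, $\nu$ the viscosity, $\omega$ the angular velocity of the frame, $f$ the body force, and the centripetal force is absorbed into the modified pressure $p$, leaving the Coriolis term $2\,\omega \times u$ explicit:

```latex
% Incompressible Navier--Stokes equations in a rotating frame of reference
% (standard form; the centripetal force is absorbed into the pressure p).
\begin{align}
  \partial_t u - \nu \Delta u + (u \cdot \nabla)\, u
    + 2\,\omega \times u + \nabla p &= f, \\
  \nabla \cdot u &= 0.
\end{align}
```

The pressure-robustness issue arises because, for non-divergence-free discrete velocities, the gradient part of the right-hand side (including the absorbed centripetal contribution) pollutes the velocity error; divergence-free pairs such as Scott--Vogelius remove this coupling.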

Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces

Procedia PDF Downloads 67
793 Analysis of Fuel Adulteration Consequences in Bangladesh

Authors: Mahadehe Hassan

Abstract:

In most countries, the manufacturing, trading and distribution of gasoline and diesel fuels belong to the most important sectors of the national economy. For Bangladesh, a robust, well-functioning, secure and smartly managed national fuel distribution chain is an essential precondition for achieving top Government priorities in the development and modernization of transportation infrastructure, the protection of the national environment and population health, and, very importantly, securing due tax revenue for the State Budget. Bangladesh is a developing country with a complex fuel supply network, high fuel tax incidence and, until now, limited possibilities for applying modern, automated technologies to Government control of the national fuel market. Such an environment allows dishonest physical and legal persons and organized criminals to build and profit from illegal fuel distribution schemes and fuel illicit trade. As a result, market transparency, the country's attractiveness for foreign investments, law-abiding economic operators, national consumers, the State Budget, the Government's ability to finance development projects, and the country at large all suffer significantly. Research shows that over 50% of retail petrol stations in major agglomerations of Bangladesh sell adulterated fuels and/or cheat customers on the real volume of the fuel pumped into their vehicles. Other detected forms of fuel illicit trade include misdeclaration of fuel quantitative and qualitative parameters during internal transit and the selling of non-declared and smuggled fuels. The aim of the study is to recommend the implementation of a National Fuel Distribution Integrity Program (FDIP) in Bangladesh to address and resolve fuel adulteration and illicit trade problems. The program should be customized according to the specific needs of the country and implemented in partnership with providers of advanced technologies.
FDIP should enable and further enhance the capacity of the respective Bangladesh Government authorities to identify and eliminate all forms of fuel illicit trade swiftly and resolutely. FDIP high-technology, IT and automation systems and secure infrastructures should be aimed at the following areas: (1) fuel adulteration, misdeclaration and non-declaration; (2) fuel quality; and (3) fuel volume manipulation at the retail level. Furthermore, the overall concept of FDIP delivery and its interaction with the reporting and management systems used by the Government shall be aligned with and support the objectives of the Vision 2041 and Smart Bangladesh Government programs.

Keywords: fuel adulteration, octane, kerosene, diesel, petrol, pollution, carbon emissions

Procedia PDF Downloads 78
792 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform

Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy

Abstract:

A bio-sensing method, based on the plasmonic properties of gold nano-islands, has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to their concentration. Exosomes transport cargoes of molecules and genetic materials to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not a practical method to implement in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. The new sensing protocol, instead of antibodies, makes use of a specially synthesized polypeptide (Vn96) to capture and quantify exosomes from different media by binding the heat shock proteins of the exosomes. The protocol was established and optimized using a glass substrate, in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
The microfluidic device designed for sensing of exosomes consists of a glass substrate, sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes and, thereby, exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes having different origins.
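Since the LSPR shift is reported to be proportional to exosome concentration, quantification reduces to a linear calibration. The sketch below illustrates that step with invented calibration points; the function names and numbers are not from the study:

```python
# Hedged sketch: quantifying exosomes from the measured LSPR band shift via
# a linear calibration through the origin (shift = k * concentration), as
# suggested by the reported proportionality. Calibration data are invented.

def calibrate(shifts_nm, concentrations):
    """Least-squares slope through the origin for shift = k * concentration."""
    num = sum(s * c for s, c in zip(shifts_nm, concentrations))
    den = sum(c * c for c in concentrations)
    return num / den

def concentration_from_shift(shift_nm, k):
    """Invert the calibration to recover concentration from a new shift."""
    return shift_nm / k

# Hypothetical calibration: band shift in nm vs exosomes per mL.
k = calibrate([0.5, 1.0, 2.0], [1e9, 2e9, 4e9])
c = concentration_from_shift(1.5, k)  # concentration for a 1.5 nm shift
```

In practice the calibration curve would be built from reference samples quantified by an independent method such as ultracentrifugation.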

Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing

Procedia PDF Downloads 174
791 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci, one locus per site: regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon. The polygon is obviously included in its locus. Using these properties in the algorithm for VD construction is a resource for reducing computations. The article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, allowing these properties to be taken into account effectively. The solution also follows a reduction scheme. Preprocessing constructs the set of sites from the vertices and edges of the polygons. Each site has an orientation such that the interior of the polygon lies to the left of it. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved due to the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
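The preprocessing step described above can be sketched in a few lines: given a polygon with vertices in counter-clockwise order, each directed edge has the polygon interior to its left, which is exactly the orientation the algorithm requires. This is an illustrative sketch, not the authors' implementation, and the names are invented:

```python
# Sketch of oriented-site preprocessing: directed edges of a CCW polygon
# have the interior to their left, as required by the proposed algorithm.

def signed_area(polygon):
    """Shoelace formula; positive for counter-clockwise vertex order."""
    n = len(polygon)
    return 0.5 * sum(
        polygon[i][0] * polygon[(i + 1) % n][1]
        - polygon[(i + 1) % n][0] * polygon[i][1]
        for i in range(n)
    )

def oriented_sites(polygon):
    """Return directed edges (p, q) of a CCW polygon; interior lies left."""
    if signed_area(polygon) < 0:
        polygon = polygon[::-1]  # normalize clockwise input to CCW
    n = len(polygon)
    return [(polygon[i], polygon[(i + 1) % n]) for i in range(n)]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # counter-clockwise
sites = oriented_sites(square)
```

The vertices themselves would also be added as point sites, and the sweepline pass would then process these oriented segments directly instead of unoriented ones.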

Keywords: voronoi diagram, sweepline, polygon sites, fortune's algorithm, segment sites

Procedia PDF Downloads 177
790 Identification of Viruses Infecting Garlic Plants in Colombia

Authors: Diana M. Torres, Anngie K. Hernandez, Andrea Villareal, Magda R. Gomez, Sadao Kobayashi

Abstract:

Colombian garlic crops exhibited mild mosaic, yellow stripes, and deformation. This group of symptoms suggested a viral infection. Several viruses belonging to the genera Potyvirus, Carlavirus and Allexivirus are known to infect garlic and lower yields worldwide, but in Colombia there are no studies of viral infections in this crop; to the best of our knowledge, only leek yellow stripe virus (LYSV) has been reported. In Colombia, there are no management strategies for viral diseases in garlic because of the lack of information about viral infections of this crop, which is reflected in (i) a high prevalence of virus-related symptoms in garlic fields and (ii) a high dispersal rate. For these reasons, the purpose of the present study was to evaluate the viral status of garlic in Colombia, which can represent a major threat to garlic yield and quality in this country. Fifty-five symptomatic leaf samples were collected for virus detection by RT-PCR and mechanical inoculation. Total RNA isolated from infected samples was subjected to RT-PCR with primers 1-OYDV-G/2-OYDV-G for Onion yellow dwarf virus (OYDV) (expected size 774 bp), 1LYSV/2LYSV for LYSV (expected size 1000 bp), SLV 7044/SLV 8004 for Shallot latent virus (SLV) (expected size 960 bp), GCL-N30/GCL-C40 for Garlic common latent virus (GCLV) (expected size 481 bp) and EF1F/EF1R for the internal control (expected size 358 bp). GCLV, SLV, and LYSV were detected in the infected samples; at least one of the viruses was detected in 95.6% of the analyzed samples. GCLV and SLV were detected in single infection with low prevalence (9.3% and 7.4%, respectively). Garlic generally becomes coinfected with several types of viruses. Four viral complexes were identified: three double infections (64% of analyzed samples) and one triple infection (15%). The most frequent viral complex was SLV + GCLV, infecting 48.1% of the samples.
The other double complexes identified had a prevalence of 7% (GCLV + LYSV and SLV + LYSV), and 5.6% of the samples were free from these viruses. Mechanical transmission experiments were set up using leaf tissue from samples collected in infected fields; different test plants were assessed to determine the host range, which was restricted to Chenopodium quinoa, confirming the presence of the detected viruses, which have a limited host range and were detected in C. quinoa by RT-PCR. The results of the molecular and biological tests confirm the presence of SLV, LYSV, and GCLV; this is the first report of SLV and LYSV in garlic plants in Colombia, which can represent a serious threat to this crop in this country.

Keywords: SLV, GCLV, LYSV, leek yellow stripe virus, Allium sativum

Procedia PDF Downloads 148
789 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and gets instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
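The A/B example above can be sketched as a tiny placement function: replicas of two communicating services are paired one-to-one onto shared machines, so each pair talks over localhost. This is an illustrative sketch under the stated equal-replica assumption, not the authors' i2kit implementation, and all names are invented:

```python
# Sketch of the embedding placement from the A/B example: pair replica i of
# service A with replica i of service B on machine m(i+1), so each pair
# communicates via localhost with no inter-service load balancer.

def embed(replicas_a, replicas_b):
    """Map replica pairs (a_i, b_i) to machines m1, m2, ..., one pair each.
    Assumes both services have the same replica count."""
    if len(replicas_a) != len(replicas_b):
        raise ValueError("this embedding sketch assumes equal replica counts")
    return {
        f"m{i + 1}": (a, b)
        for i, (a, b) in enumerate(zip(replicas_a, replicas_b))
    }

plan = embed(["a1", "a2"], ["b1", "b2"])
# plan co-locates a1 with b1 on m1, and a2 with b2 on m2
```

A real deployment planner would additionally weigh the paper's optimization parameters (performance, resource usage, cost, failure tolerance) when choosing among candidate aggregations and embeddings.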

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 204