Search results for: complex filament
4438 The Development of an Accident Causation Model Specific to Agriculture: The Irish Farm Accident Causation Model
Authors: Carolyn Scott, Rachel Nugent
Abstract:
The agricultural industry in Ireland and worldwide is one of the most dangerous occupations with respect to occupational health and safety accidents and fatalities. Many accident causation models have been developed in safety research to understand the underlying and contributory factors that lead to the occurrence of an accident. Due to the uniqueness of the agricultural sector, current accident causation theories cannot be applied directly. This paper presents an accident causation model named the Irish Farm Accident Causation Model (IFACM), which has been specifically tailored to the needs of Irish farms. The IFACM is a theoretical and practical model of accident causation that arranges the causal factors into a graphic representation of originating, shaping, and contributory factors that lead to accidents when unsafe acts and conditions are created that are not rectified by control measures. Causes of farm accidents were compiled by means of a thorough literature review and collated to form a graphical representation of the underlying causes of a farm accident. The IFACM was validated retrospectively through case study analysis and peer review. Participants in the case study (n=10) identified causes that led to a farm accident in which they were involved. A root cause analysis was conducted to understand the contributory factors surrounding each farm accident, traced back to the ‘root cause’. Experts in farm safety accident causation in the agricultural industry have peer-reviewed the IFACM. The accident causation process is complex, and accident prevention requires a comprehensive understanding of it, since accidents cannot be prevented unless their causes are known. There is little research on the key causes and contributory factors of unsafe behaviours and accidents on Irish farms. The focus of this research is to gain a deep understanding of the causality of accidents on Irish farms.
The results suggest that the IFACM framework is helpful for the analysis of the causes of accidents within the agricultural industry in Ireland. The research also suggests that there may be international applicability if further research is carried out. Furthermore, significant learning can be obtained from considering the underlying causes of accidents.
Keywords: farm safety, farm accidents, accident causation, root cause analysis
Procedia PDF Downloads 78
4437 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems
Authors: Ramprasad Srinivasan
Abstract:
Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is made up of an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plates, and shells, and 3D solid structures. Justification through simulation involves a check on all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels, helicopter and wind turbine rotor blades, etc. The TWCB demonstrates many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which makes the analysis of composite structures far more complex. Conventional FE formulations to model 1D structures suffer from many limitations like shear locking, particularly in slender beams, lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing even to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, currently there is a need for separate formulations (u/p) to model incompressible materials, and a single unified formulation is missing in the literature. Hence, coupled field formulation (CFF) is proposed by the author as a unified formulation for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods shall be presented in this paper.
Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation
Procedia PDF Downloads 66
4436 Analysis of Long-Term Response of Seawater to Change in CO₂, Heavy Metals and Nutrients Concentrations
Authors: Igor Povar, Catherine Goyet
Abstract:
Seawater is subject to multiple external stressors (ES), including rising atmospheric CO₂ and ocean acidification, global warming, atmospheric deposition of pollutants, and eutrophication, which deeply alter its chemistry, often on a global scale and, in some cases, to a degree significantly exceeding that in the historical and recent geological record. In ocean systems, the micro- and macronutrients, heavy metals, and phosphorus- and nitrogen-containing components exist in different forms depending on the concentrations of various other species, organic matter, the types of minerals, the pH, etc. The major limitation to assessing more strictly the ES to oceans, such as pollutants (atmospheric greenhouse gases, heavy metals, nutrients such as nitrates and phosphates), is the lack of a theoretical approach that could predict the ocean's resistance to multiple external stressors. In order to assess the abovementioned ES, the research has applied and developed the buffer theory approach and theoretical expressions of formal chemical thermodynamics for ocean systems as heterogeneous aqueous systems. Thermodynamic expressions of complex chemical equilibria, involving acid-base, complex formation, and mineral equilibria, have been deduced. This thermodynamic approach utilizes thermodynamic relationships coupled with original mass balance constraints, where the solid phases are explicitly expressed. The ocean's sensitivity to different external stressors and changes in driving factors is considered in terms of derived buffering capacities, or buffer factors, for heterogeneous systems. Our investigations have proved that heterogeneous aqueous systems, as oceans and seas are, manifest buffer properties towards all their components, not only pH, as has been known so far; for example, with respect to carbon dioxide, carbonates, phosphates, Ca²⁺, Mg²⁺, heavy metal ions, etc.
The derived expressions make it possible to attribute changes in chemical ocean composition to different pollutants. These expressions are also useful for improving current atmosphere-ocean-marine biogeochemistry models. The major research questions to which the research responds are: (i.) What kind of contamination is the most harmful for the future ocean? (ii.) What are the heterogeneous chemical processes of heavy metal release from sediments and minerals, and what is their impact on the ocean's buffer action? (iii.) What will be the long-term response of the coastal ocean to the oceanic uptake of anthropogenic pollutants? (iv.) How will the ocean's resistance change in terms of future complex chemical processes and buffer capacities, and its response to external (anthropogenic) perturbations? The ocean's buffer capacities towards its main components are recommended as parameters that should be included in determining the most important ocean factors that define the response of the ocean environment as technogenic loads increase. The deduced thermodynamic expressions are valid for any combination of chemical composition, or any of the species contributing to the total concentration, as an independent state variable.
Keywords: atmospheric greenhouse gas, chemical thermodynamics, external stressors, pollutants, seawater
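The buffer-capacity concept invoked above can be made concrete with a simple calculation. The sketch below uses the classical Van Slyke buffer capacity of a single monoprotic acid in water (here carbonic acid's first dissociation, with hypothetical concentrations), not the full heterogeneous multi-equilibrium treatment the paper derives:

```python
import numpy as np

def buffer_capacity(pH, C_T, Ka):
    """Van Slyke buffer capacity beta = dC_base/dpH (mol/L per pH unit)
    for water plus one monoprotic acid of total concentration C_T."""
    h = 10.0 ** (-pH)
    oh = 1.0e-14 / h  # ion product of water at 25 C
    return np.log(10.0) * (h + oh + C_T * Ka * h / (Ka + h) ** 2)

# Carbonic acid first dissociation, pKa1 ~ 6.35, at a DIC of 2 mmol/L.
pH = np.arange(4.0, 10.0, 0.01)
Ka1 = 10.0 ** (-6.35)
beta = buffer_capacity(pH, 2.0e-3, Ka1)
# beta peaks where pH = pKa1, i.e. where the acid/base pair is equimolar.
```

The peak of β at pH = pKa illustrates why buffering is strongest where the acid and its conjugate base are equimolar; the heterogeneous expressions described in the abstract generalize this derivative to every component of the system, not only pH.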
Procedia PDF Downloads 143
4435 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China
Authors: Jiujie Cai, Fengxia LI, Haibo Wang
Abstract:
After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology has been widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, where the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field was used as the research target. The characteristic parameters of vertical rock samples with rich beddings were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between the pay zone and the bedding layer, and fracturing parameters (such as injection rates, fracturing fluid viscosity, and number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surface between layers, the vertical stress difference, and fracturing parameters (such as injection rates, fluid volume, and viscosity). The research results indicate that, under a vertical stress difference of 3 MPa, the fracture height can break through and enter the upper interlayer when the thickness of the overlying bedding layer is 6-9 m, considering the effect of the weak bonding surface between layers. The fracture propagates within the pay zone when the overlying interlayer is greater than 13 m. The difference in fluid volume distribution between clusters can be more than 20% when the stress difference between the clusters in a stage exceeds 2 MPa.
Fracture clusters in high-stress zones cannot initiate when the stress difference within a stage exceeds 5 MPa. The simulated fracture heights are much greater if the effect of the weak bonding surfaces between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through layers. Optimizing the perforation positions and reducing the number of perforations can promote uniform expansion of the fractures. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with the micro-seismic monitoring results of hydraulic fracturing in Well 1HF.
Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment
Procedia PDF Downloads 127
4434 The Post-Crisis Expansion of European Central Bank Powers: Understanding the Legitimate Boundaries of the ECB's Supervisory Independence and Accountability
Authors: Jakub Gren
Abstract:
The recent transfer of banking supervision to the ECB has expanded its influence as a non-majoritarian and technocratic policy-shaper in EU supervisory policies. To fulfil the main policy objectives of the Single Supervisory Mechanism, the ECB has been tasked with building a single supervisory approach to supervised banks across the euro area and is now exclusively responsible for direct supervision of the largest ‘significant’ euro area banks and the oversight of the remaining ‘less significant’ banks. This enhanced supranational position of the ECB significantly alters the EU institutional order and creates powerful incentives for the ECB to actively pursue an integrationist agenda. However, this drastic shift has had little impact on adapting the ECB’s new supervisory mandate to the requirements of democratic legitimacy. Whereas the ECB’s strong pre-crisis independence and limited accountability could be reconciled with democratic principles through a clearly articulated price stability mandate, independence and limited accountability in the context of a more complex supervisory mandate are problematic. Hence, in order to ensure the democratic legitimacy of the ECB/SSM’s supervisory policies, the ECB’s supervisory mandate requires both a narrower scope of independence and higher accountability requirements. To address this situation, organizational separation (“Chinese Wall”) between the ECB's monetary and supervisory arms was introduced. This separation includes different reporting lines and the relocation of the ECB’s monetary function to a new building complex while leaving its supervisory function at the Euro-tower (“Two Towers”). This paper argues that these measures are not sufficient to establish proper checks and balances on the ECB’s powers to pursue euro-area-wide supervisory policies. As a remedy, this contribution suggests that the ECB’s Treaties-embedded independence, as set out by art.
130 TFEU, designed for carrying out its monetary function, shall not be fully applicable to its supervisory function. Indeed, a functional and conditional reading of this provision with respect to the ECB's supervisory function could enhance the legitimacy of future ECB supervisory action.
Keywords: accountability and transparency, democratic governance, financial management, rule of law
Procedia PDF Downloads 207
4433 An Experience on Urban Regeneration: A Case Study of Isfahan, Iran
Authors: Sedigheh Kalantari, Yaping Huang
Abstract:
The historic areas of cities have experienced different phases of transformation. At the beginning of the twentieth century, modernism and modern development changed the integrated pattern of these areas, and historic urban quarters were regarded as subjects for comprehensive redevelopment. In this respect, the historic areas of Iranian cities have not been safe from these changes and have been affected by widespread evolutions; in particular, since the Islamic Revolution era (1978), cities have travelled through an evolution in conservation and development policies and practices. Moreover, a move toward a specific approach, with specific attention paid to the regeneration of historical urban centers in Iran, has been underway since the 1990s. This reveals the great importance attached to the historical centers of cities. This paper examines an experience of urban regeneration in Iran through a case study. The study relies on multiple sources of evidence, the use of which can help substantially improve the validity and reliability of the research. The empirical core of this research, therefore, rests in the process of urban revitalization of the old square in Isfahan. Isfahan is one of the oldest cities of Persia. The historic area of the city encompasses a large number of valuable buildings and monuments. One of the cultural and historical regions of Isfahan is Atiq Square (Old Square). It has been the backbone node of the city, which in the course of time has been increasingly ignored and negatively transformed. The complex had suffered from insufficiencies, especially with respect to social and spatial aspects. Therefore, the reorganization of that complex as the main and most important urban center of Isfahan became an inevitable issue. So this paper, apart from recalling the value of such historic-cultural heritage and reviewing its transformation, focuses on an experience of an urban revitalization project in this heritage site.
The outcome of this research shows that, situated in different socio-economic, political, and historical contexts and facing different urban regeneration issues, Iran has displayed significant differences in its way of pursuing urban regeneration.
Keywords: historic area, Iran, urban regeneration, revitalization
Procedia PDF Downloads 257
4432 Obtaining of Nanocrystalline Ferrites and Other Complex Oxides by Sol-Gel Method with Participation of Auto-Combustion
Authors: V. S. Bushkova
Abstract:
It is well known that in recent years magnetic materials have received increased attention due to their properties. For this reason, a significant number of patents published during the last decade are oriented towards the synthesis and study of such materials. The aim of this work is to create and study nanocrystalline ferrite materials with a spinel structure, using sol-gel technology with the participation of auto-combustion. This method is promising in that it is a cheap, low-temperature technique that allows fine control over the product’s chemical composition.
Keywords: magnetic materials, ferrites, sol-gel technology, nanocrystalline powders
Procedia PDF Downloads 409
4431 Brain-Computer Interfaces That Use Electroencephalography
Authors: Arda Ozkurt, Ozlem Bozkurt
Abstract:
Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method to measure the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measurement methods. Although it has a low spatial resolution (meaning it can only detect when a group of neurons fires at the same time), it is a non-invasive method, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, since it is a non-invasive method, there are some sources of noise that may affect the reliability of the EEG signals. For instance, noise from the EEG equipment, the leads, and signals coming from the subject (such as the activity of the heart or muscle movements) affect the signals detected by the electrodes of the EEG. However, new techniques have been developed to differentiate between those signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze the data from the brain for use in a BCI application. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information for neuroscientists to use in designing BCI devices.
Even though invasive BCIs are needed for neurological diseases that require highly precise data, non-invasive BCIs such as EEG are used in many cases to help disabled people or to ease people's lives by helping them with basic tasks. For example, EEG is used to detect an oncoming seizure in epilepsy patients before it occurs, so that a BCI device can then help prevent it. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.
Keywords: BCI, EEG, non-invasive, spatial resolution
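The basic EEG pipeline described above (recording a voltage difference between two electrodes, then filtering out non-neural noise before further analysis) can be sketched in a few lines. All signals and parameters below are synthetic illustrations, not data from the paper, and a crude FFT band-pass stands in for the more sophisticated algorithms the abstract mentions:

```python
import numpy as np

def bipolar_channel(v_a, v_b):
    # Bipolar montage: the recorded signal is the voltage
    # difference between two scalp electrodes.
    return v_a - v_b

def fft_bandpass(signal, fs, lo_hz, hi_hz):
    # Crude band-pass: zero every frequency bin outside [lo_hz, hi_hz].
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

# Synthetic two-electrode recording: a 10 Hz alpha rhythm seen more
# strongly at electrode A, plus 50 Hz mains interference at both.
fs = 256.0
t = np.arange(0, 4, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
mains = 0.5 * np.sin(2 * np.pi * 50 * t)
v_a = alpha + mains
v_b = 0.2 * alpha + 0.9 * mains

raw = bipolar_channel(v_a, v_b)       # 0.8*alpha plus residual mains
clean = fft_bandpass(raw, fs, 8, 13)  # keep only the alpha band
```

Real systems would use proper notch and band-pass filters plus artifact-rejection algorithms rather than bin zeroing; the sketch only shows where each step sits in the pipeline.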
Procedia PDF Downloads 71
4430 Nabokov’s Lolita: Externalization of Contemporary Mind in the Configuration of Hedonistic Aesthetics
Authors: Saima Murtaza
Abstract:
Ethics and aesthetics have invariably remained two closely integrated artistic appurtenances in the production of any work of art. These artistic devices configure themselves into a complex synthesis in our contemporary literature. The labyrinthine integration of ethics and aesthetics, operating in the lives of human characters to the extent of transcending all limits, has resulted in an artistic puzzle for readers. Art, no doubt, is an extrinsic expression of the intrinsic life of man. The use of aesthetics in literature pertaining to human existence, aesthetic solipsism, has resulted in the artistic objectification of these characters. The practice of such aestheticism deprives the characters of their souls, rendering them mere objects of aesthetic gaze at the hands of their artist-creators. Artists orchestrate their lives, founding them on plots which deviate from normal social and ethical standards. Their perverse attitude can be seen in their dealing with characters, their feelings, and the incidents of their lives. Morality is made to appear not as a religious construct but as an individual’s private affair. Furthermore, the idea of beauty incarnated, in other words the hedonistic aesthetic, does not placate a true aesthete. Ethics and aesthetics are two of the most recurring motifs of our contemporary literature, especially of Nabokov’s world. The purpose of this study is to examine these motifs in Nabokov’s most enigmatic novel Lolita, a story of pedophilia, which is in fact reflective of our complex individual psychic and societal patterns. The narrative subverts all the traditional and hitherto known notions of aesthetics and ethics. When applied to literature, aesthetic does not simply mean ‘beautiful’ in the text. It refers to an intricate relationship between feelings and perception and also incorporates within its range wide-ranging emotional reactions to the text.
The term aesthetics in literature is connected with the readers, whose critical responses to the text determine whether any work merits being called a true piece of art. Aestheticism is the child of ethics. Morality sets the grounds for the production of any work, and the idea of aesthetics gives it transcendence.
Keywords: ethics, aesthetics and hedonistic aesthetic, nymphet syndrome, pedophilia
Procedia PDF Downloads 158
4429 Predictors, Barriers, and Facilitators to Refugee Women’s Employment and Economic Inclusion: A Mixed Methods Systematic Review
Authors: Areej Al-Hamad, Yasin Yasin, Kateryna Metersky
Abstract:
This mixed-method systematic review and meta-analysis provides an encompassing understanding of the barriers, facilitators, and predictors of refugee women's employment and economic inclusion. The study sheds light on the complex interplay of sociocultural, personal, political, and environmental factors influencing these outcomes, underlining the urgent need for a multifaceted, tailored approach to devising strategies, policies, and interventions aimed at boosting refugee women's economic empowerment. Our findings suggest that sociocultural factors, including gender norms, societal attitudes, language proficiency, and social networks, profoundly shape refugee women's access to and participation in the labor market. Personal factors such as age, educational attainment, health status, skills, and previous work experience also play significant roles. Political factors like immigration policies, regulations, and rights to work, alongside environmental factors like labor market conditions, availability of employment opportunities, and access to resources and support services, further contribute to the complex dynamics influencing refugee women's economic inclusion. The significant variability observed in the impacts of these factors across different contexts underscores the necessity of adopting population and region-specific strategies. A one-size-fits-all approach may prove to be ineffective due to the diversity and unique circumstances of refugee women across different geographical, cultural, and political contexts. The study's findings have profound implications for policy-making, practice, education, and research. The insights garnered call for coordinated efforts across these domains to bolster refugee women's economic participation. In policy-making, the findings necessitate a reassessment of current immigration and labor market policies to ensure they adequately support refugee women's employment and economic integration.
In practice, they highlight the need for comprehensive, tailored employment services and interventions that address the specific barriers and leverage the facilitators identified. In education, they underline the importance of language and skills training programs that cater to the unique needs and circumstances of refugee women. Lastly, in research, they emphasize the need for ongoing investigations into the multifaceted factors influencing refugee women's employment experiences, allowing for continuous refinement of our understanding and interventions. Through this comprehensive exploration, the study contributes to ongoing efforts aimed at creating more inclusive, equitable societies. By continually refining our understanding of the complex factors influencing refugee women's employment experiences, we can pave the way toward enhanced economic empowerment for this vulnerable population.
Keywords: refugee women, employment barriers, systematic review, employment facilitators
Procedia PDF Downloads 80
4428 Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1
Authors: Kaoutar Lamrini Uahabi, Mohamed Atounti
Abstract:
In the present work, we consider one category of curves denoted by L(p, k, r, n). These curves are continuous arcs which are trajectories of roots of the trinomial equation z^n = αz^k + (1 − α), where z is a complex number, n and k are two integers such that 1 ≤ k ≤ n − 1, and α is a real parameter greater than or equal to 1. Denoting by L the union of all trinomial curves L(p, k, r, n) and using the box counting dimension as fractal dimension, we prove that the dimension of L is equal to 3/2.
Keywords: feasible angles, fractal dimension, Minkowski sausage, trinomial curves, trinomial equation
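The trajectories above are traced by roots of z^n = αz^k + (1 − α) as the parameters vary. A small numerical sketch (values of n, k, α chosen arbitrarily for illustration) computes those roots; note that z = 1 solves the equation for every n, k, α, since 1 = α + (1 − α):

```python
import numpy as np

# Roots of z^n = a*z^k + (1 - a), rewritten as z^n - a*z^k - (1 - a) = 0.
n, k, a = 5, 2, 1.5  # illustrative values with a >= 1 and 1 <= k <= n-1

coeffs = np.zeros(n + 1)  # coefficients from degree n down to the constant
coeffs[0] = 1.0           # coefficient of z^n
coeffs[n - k] = -a        # coefficient of z^k
coeffs[n] = -(1.0 - a)    # constant term
roots = np.roots(coeffs)  # the n complex roots, z = 1 among them
```

Sweeping α over [1, ∞) and plotting the n roots for each value traces out the continuous arcs that make up the trinomial curves.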
Procedia PDF Downloads 189
4427 Navigating Complex Communication Dynamics in Qualitative Research
Authors: Kimberly M. Cacciato, Steven J. Singer, Allison R. Shapiro, Julianna F. Kamenakis
Abstract:
This study examines the dynamics of communication among researchers and participants who have various levels of hearing, use multiple languages, have various disabilities, and who come from different social strata. This qualitative methodological study focuses on the strategies employed in an ethnographic research study examining the communication choices of six sets of parents who have Deaf-Disabled children. The participating families varied in their communication strategies and preferences, including the use of American Sign Language (ASL), visual-gestural communication, multiple spoken languages, and pidgin forms of each of these. The research team consisted of two undergraduate students proficient in ASL and a Deaf principal investigator (PI) who uses ASL and speech as his main modes of communication. A third Hard-of-Hearing undergraduate student fluent in ASL served as an objective facilitator of the data analysis. The team created reflexive journals by audio recording, free writing, and responding to team-generated prompts. They discussed interactions between the members of the research team, their evolving relationships, and various social and linguistic power differentials. The researchers reflected on communication during data collection, their experiences with one another, and their experiences with the participating families. Reflexive journals totaled over 150 pages. The outside research assistant reviewed the journals and developed follow-up open-ended questions and probes to further enrich the data. The PI and outside research assistant used NVivo qualitative research software to conduct open inductive coding of the data. They chunked the data individually into broad categories through multiple readings and recognized recurring concepts. They compared their categories, discussed them, and decided which they would develop. The researchers continued to read, reduce, and define the categories until they were able to develop themes from the data.
The research team found that the various communication backgrounds and skills present greatly influenced the dynamics between the members of the research team and with the participants of the study. Specifically, the following themes emerged: (1) students as communication facilitators and interpreters as barriers to natural interaction, (2) varied language use simultaneously complicated and enriched data collection, and (3) ASL proficiency and professional position resulted in a social hierarchy among researchers and participants. In the discussion, the researchers reflected on their backgrounds, on their internal biases in analyzing the data, and on how social norms and expectations affected their perceptions in writing their journals. Through this study, the research team found that communication and language skills require significant consideration when working with multiple and complex communication modes. The researchers had to continually assess and adjust their data collection methods to meet the communication needs of the team members and participants. In doing so, the researchers aimed to create an accessible research setting that yielded rich data but learned that this often required compromises from one or more of the research constituents.
Keywords: American Sign Language, complex communication, deaf-disabled, methodology
Procedia PDF Downloads 118
4426 Growth of Droplet in Radiation-Induced Plasma of Own Vapour
Authors: P. Selyshchev
Abstract:
A theoretical approach is developed to describe the evolution of drops in an atmosphere of their own vapour and a buffer gas under irradiation. It is shown that irradiation influences the size of the stable droplet and the conditions under which the droplet exists. Under irradiation, the evolution of a drop becomes more complex: non-monotonic and periodic changes of drop size become possible. All possible solutions are represented by means of phase portraits. All qualitatively different phase portraits are found as functions of the critical parameters: the cluster generation rate and the substance density.
Keywords: irradiation, steam, plasma, cluster formation, liquid droplets, evolution
Procedia PDF Downloads 441
4425 Heat Transfer of an Impinging Jet on a Plane Surface
Authors: Jian-Jun Shu
Abstract:
A cold, thin film of liquid impinging on an isothermal, hot, horizontal surface has been investigated. An approximate solution for the velocity and temperature distributions in the flow along the horizontal surface is developed, which exploits the hydrodynamic similarity solution for thin film flow. The approximate solution may provide a valuable basis for assessing flow and heat transfer in more complex settings.
Keywords: flux, free impinging jet, solid-surface, uniform wall temperature
Procedia PDF Downloads 479
4424 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models for individual clusters to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
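The catenary fit described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: it assumes one conductor's LiDAR returns have already been clustered and projected onto a vertical plane, and it recovers the catenary parameters by a coarse grid search, with the height offset solved linearly at each candidate (all parameter values below are made up for the example).

```python
import numpy as np

def catenary(x, a, x0, c):
    """Height of a hanging conductor: lowest point at (x0, c), parameter a."""
    return c + a * (np.cosh((x - x0) / a) - 1.0)

def fit_catenary(x, z, a_grid, x0_grid):
    """Coarse least-squares fit: search (a, x0) on a grid; the vertical
    offset c enters linearly, so it is solved as the mean residual."""
    best, best_err = None, np.inf
    for a in a_grid:
        for x0 in x0_grid:
            shape = a * (np.cosh((x - x0) / a) - 1.0)
            c = np.mean(z - shape)
            err = np.sum((z - shape - c) ** 2)
            if err < best_err:
                best, best_err = (a, x0, c), err
    return best

# simulated noisy LiDAR returns from one clustered conductor span
rng = np.random.default_rng(1)
x = np.linspace(0.0, 100.0, 200)
z = catenary(x, 60.0, 50.0, 12.0) + rng.normal(0.0, 0.02, x.size)
a_fit, x0_fit, c_fit = fit_catenary(x, z, np.arange(40.0, 81.0, 1.0),
                                    np.arange(40.0, 61.0, 0.5))
```

A production fit would refine the grid solution with a nonlinear least-squares step, but the grid search already shows why the catenary is convenient: only two parameters are genuinely nonlinear.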
Procedia PDF Downloads 99
4423 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. It highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. It emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond, showcasing how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, it stresses the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals, recognizing the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. Finally, it highlights the significance of ongoing research in mathematical sciences and its impact on data analytics, emphasizing the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. 
In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.
Keywords: mathematical sciences, data analytics, advances, unveiling
Procedia PDF Downloads 93
4422 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little data publicly available and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning based methodologies have been discussed in the literature. In this study, a recurrent neural network based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, owing to a memory cell, which traditional neural networks fail to capture. 
In this study, simple LSTM, stacked LSTM, and masked LSTM based models are discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which has resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with a traditional time-series model (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
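As a minimal, hypothetical illustration of the study's varying input sequences (not the authors' code), the preprocessing step that turns a price series into supervised (window, next-value) pairs for an LSTM might look like:

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a 1-D price series into (input window, next-value target)
    pairs -- the supervised form an LSTM is trained on."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

prices = np.arange(20.0)          # stand-in for a corporate bond price series
X7, y7 = make_windows(prices, 7)  # the study's seven-day input sequence
```

The same function with `lookback` set to 3 or 14 produces the other two input-sequence variants compared in the study; each window would then be fed to the recurrent layers.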
Procedia PDF Downloads 136
4421 A Mixed-Method Exploration of the Interrelationship between Corporate Governance and Firm Performance
Authors: Chen Xiatong
Abstract:
The study explores the interrelationship between corporate governance factors and firm performance in Mainland China using a mixed-method approach. Its goals are to clarify the current effectiveness of corporate governance, uncover the complex interrelationships between governance factors and firm performance, enhance understanding of corporate governance strategies in Mainland China, and provide suggestions for companies to improve their governance practices. The research involves quantitative methods, such as statistical analysis of governance factors and firm performance data, as well as qualitative approaches, including policy research, case studies, and interviews with staff members. The research contributes to the literature on corporate governance by providing insights into the effectiveness of governance practices in Mainland China and offering suggestions for improvement. Quantitative data will be gathered through surveys and sampling methods, focusing on governance factors and firm performance indicators, and analyzed using statistical, mathematical, and computational techniques. Qualitative data will be collected through policy research, case studies, and interviews with staff members, and analyzed through thematic analysis and interpretation of policy documents, case study findings, and interview responses. The study addresses the effectiveness of corporate governance in Mainland China, the interrelationship between governance factors and firm performance, and staff members' perceptions of corporate governance strategies. 
The research aims to enhance understanding of corporate governance effectiveness, enrich the literature on governance practices, and contribute to the field of business management and human resources management in Mainland China.
Keywords: corporate governance, business management, human resources management, board of directors
Procedia PDF Downloads 55
4420 Optical Properties of Tetrahydrofuran Clathrate Hydrates at Terahertz Frequencies
Authors: Hyery Kang, Dong-Yeun Koh, Yun-Ho Ahn, Huen Lee
Abstract:
Terahertz time-domain spectroscopy (THz-TDS) was used to observe the THF clathrate hydrate system dosed with polyvinylpyrrolidone (PVP) of three different average molecular weights (10,000 g/mol, 40,000 g/mol, and 360,000 g/mol). Distinct footprints of the phase transition in the THz region (0.4-2.2 THz) were analyzed, and absorption coefficients and complex refractive indices were obtained and compared in the temperature range of 253 K to 288 K. Along with the optical properties, the ring breathing and stretching modes for the different molecular weights of PVP in THF hydrate were analyzed by Raman spectroscopy.
Keywords: clathrate hydrate, terahertz, polyvinylpyrrolidone (PVP), THz-TDS, inhibitor
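For context, a common textbook extraction of the optical constants from transmission THz-TDS data (not necessarily the authors' exact procedure) assumes a single pass through a slab in air, with etalon echoes windowed out. The sketch below implements that approximation on made-up numbers:

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum, m/s

def optical_constants(omega, dphi, amp_ratio, d):
    """Refractive index n and power absorption coefficient alpha (1/m)
    from the sample/reference phase difference and amplitude ratio,
    assuming a single pass through a slab of thickness d in air."""
    n = 1.0 + C * dphi / (omega * d)
    t = 4.0 * n / (n + 1.0) ** 2          # air-sample-air Fresnel amplitude factor
    alpha = (2.0 / d) * np.log(t / amp_ratio)
    return n, alpha

# forward-model a hypothetical slab at 1 THz, then invert
omega, d = 2.0 * np.pi * 1.0e12, 1.0e-3   # 1 THz, 1 mm
n_true, alpha_true = 1.8, 500.0
dphi = (n_true - 1.0) * omega * d / C
rho = 4.0 * n_true / (n_true + 1.0) ** 2 * np.exp(-alpha_true * d / 2.0)
n, alpha = optical_constants(omega, dphi, rho, d)
```

Real hydrate measurements would apply this frequency by frequency across the 0.4-2.2 THz band and at each temperature step.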
Procedia PDF Downloads 379
4419 On the Internal Structure of the ‘Enigmatic Electrons’
Authors: Natarajan Tirupattur Srinivasan
Abstract:
Quantum mechanics (QM) and special relativity (SR) have indeed revolutionized the very thinking of physicists, and the spectacular successes achieved over a century due to these two theories are mind-boggling. However, there is still a strong disquiet among some physicists. While the mathematical structure of these two theories has been established beyond any doubt, their physical interpretations are still being contested by many. Even after a hundred years of their existence, we cannot answer a very simple question: "What is an electron?" Physicists are struggling even now to come to grips with the different interpretations of quantum mechanics with all their ramifications. However, it is indeed strange that Einstein's special relativity enjoys many orders of magnitude more "acceptance", though both theories have their own stocks of weirdness in their results, like time dilation, mass increase with velocity, the collapse of the wave function, quantum jumps, tunnelling, etc. Here, in this paper, it will be shown that by postulating an intrinsic internal motion for these enigmatic electrons, one can build a fairly consistent picture of reality, revealing a very simple picture of nature. This is also evidenced by Schrodinger's 'Zitterbewegung' motion, about which so much has been written. This internal motion leads to a helical trajectory of electrons when they move in a laboratory frame. It will be shown that the helix is a three-dimensional wave having all the characteristics of our familiar 2D wave. Again, the helix, being a geodesic on an imaginary cylinder, supports 'quantization', and its representation is just the complex exponentials matching the wave function of quantum mechanics. By postulating the instantaneous velocity of the electrons to be always 'c', the velocity of light, the entire theory of relativity comes alive, and we can interpret 'time dilation', 'mass increase with velocity', etc., in a very simple way. 
Thus, this model unifies both QM and SR without the need for Einstein's counterintuitive postulate of the constancy of the velocity of light for all inertial observers. After all, if the motion of an inertial frame cannot affect the velocity of light, the converse, that this constant also cannot affect the events in the frame, should be true. But the entire theory of relativity is about how 'c' affects time, length, mass, etc., in different frames.
Keywords: quantum reconstruction, special theory of relativity, quantum mechanics, zitterbewegung, complex wave function, helix, geodesic, Schrodinger's wave equations
Procedia PDF Downloads 73
4418 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm
Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam
Abstract:
The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a "customer space" in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near a customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space with an indeterminate boundary between them. Specific company policies generally determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of costs of transportation and temporary storage in a set of specified external warehouses. 
Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers along with three types of shipment methods (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction
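The k-means step for proposing warehouse locations can be sketched as plain Lloyd's iterations over customer coordinates. This is an illustrative reimplementation on made-up customer sites, not the authors' code or data:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means over 2-D customer coordinates; the final
    centers act as candidate warehouse locations."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distance of every customer to every candidate center
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):      # guard against empty clusters
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# two hypothetical customer clusters, far apart
rng = np.random.default_rng(2)
customers = np.vstack([
    rng.normal([0.0, 0.0], 0.1, (30, 2)),
    rng.normal([10.0, 10.0], 0.1, (30, 2)),
])
centers, labels = kmeans(customers, 2)
```

In the study's setting the input coordinates would come from the dimensionally reduced customer space, and route distances rather than straight-line distances could be substituted into the assignment step.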
Procedia PDF Downloads 139
4417 Petrography and Major Element Chemistry of Granitic Rocks of the Nagar Parkar Igneous Complex, Tharparkar, Sindh
Authors: Amanullah Lagharil, Majid Ali Laghari, M. Qasim, Jan. M., Asif Khan, M. Hassan Agheem
Abstract:
The Nagar Parkar area in southeastern Sindh is a part of the Thar Desert adjacent to the Runn of Kutchh and covers 480 km2. It contains exposures of a variety of igneous rocks referred to as the Nagar Parkar Igneous Complex. The complex comprises rocks belonging to at least six phases of magmatism, from oldest to youngest: 1) amphibolitic basement rocks, 2) riebeckite-aegirine grey granite, 3) biotite-hornblende pink granite, 4) acid dykes, 5) rhyolite "plugs", and 6) basic dykes (Jan et al., 1997). The last three of these are not significant in volume. Radiometric dates are lacking, but the grey and pink granites are petrographically comparable to the Siwana and Jalore plutons, respectively, emplaced in the Malani volcanic series. Based on these similarities and proximity, the phase 2 to 6 bodies in Nagar Parkar may belong to the Late Proterozoic (720-745 Ma) Malani magmatism that covers large areas in western Rajasthan. Khan et al. (2007) have reported a 745 ±30 to 755 ±22 Ma U-Th-Pb age on monazite from the pink granite. The grey granite is essentially composed of perthitic feldspar (microperthite, mesoperthite), quartz, a small amount of plagioclase and, characteristically, sodic minerals such as riebeckite and aegirine. A few samples lack aegirine. Fe-Ti oxide and minute, well-developed crystals of zircon occur in almost all the studied samples. Tourmaline, fluorite, apatite, and rutile occur in only some samples, and astrophyllite is rare. Allanite, sphene, and leucoxene occur as minor accessories along with local epidote. The pink granite is mostly leucocratic but locally rich in biotite (up to 7%). It is essentially made up of microperthite and quartz, with local microcline and minor plagioclase (albite-oligoclase). Some rocks contain sufficient oligoclase to be called adamellite or quartz monzonite. Biotite and hornblende are the main accessory minerals along with iron oxide, though a few samples lack hornblende. 
Fayalitic olivine, zircon, sphene, apatite, tourmaline, fluorite, allanite, and cassiterite occur as sporadic accessory minerals. Epidote, carbonate, sericite, and muscovite are produced by the alteration of feldspar. This work concerns the major element geochemistry and comparison of the principal granitic rocks of Nagar Parkar. According to the scheme of De La Roche et al. (1980), the majority of the grey and pink granites classify as alkali granite, 20% as granite, and 10% as granodiorite. When evaluated on the basis of Shand's indices (after Maniar and Piccoli, 1989), the grey and pink granites span all three fields (peralkaline, metaluminous, and peraluminous). Of the analysed grey granites, 67% classify as peralkaline, 20% as peraluminous, and 10% as metaluminous, while 50% of the pink granites classify as peralkaline, 30% as metaluminous, and 20% as peraluminous.
Keywords: petrography, nagar parkar, granites, geological sciences
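The Shand's-index classification used above follows the standard molar alumina ratios (A/CNK and A/NK). The sketch below illustrates the scheme on hypothetical whole-rock analyses, not on the paper's actual data:

```python
# molar masses of the relevant oxides, g/mol
M = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def shand_class(wt):
    """Classify a whole-rock analysis (oxide wt%) by Shand's indices:
    peralkaline if A/NK < 1, else peraluminous if A/CNK > 1,
    else metaluminous."""
    mol = {ox: wt[ox] / M[ox] for ox in M}
    acnk = mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])  # A/CNK
    ank = mol["Al2O3"] / (mol["Na2O"] + mol["K2O"])                # A/NK
    if ank < 1.0:
        return "peralkaline"
    return "peraluminous" if acnk > 1.0 else "metaluminous"

# a made-up sodic granite analysis (illustrative only)
grey = {"Al2O3": 10.0, "CaO": 0.5, "Na2O": 5.0, "K2O": 5.0}
```

A riebeckite-aegirine granite like the grey facies typically plots peralkaline, which is what the example analysis above yields.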
Procedia PDF Downloads 458
4416 The Origins of Representations: Cognitive and Brain Development
Authors: Athanasios Raftopoulos
Abstract:
In this paper, an attempt is made to explain the evolution or development of humans' representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of "thing") means to use in the mind or in an external medium a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. This examination should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence. These pieces of evidence should be synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo Erectus was able to use both icons and symbols. Icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a sensory-motor, purely causal schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemas are tied to specific contexts of organism-environment interactions and are activated only within these contexts. 
For a representation of an object to be possible, this schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups, in ways that would benefit the group, was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, by being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for having iconic and then symbolic representations were present in Homo Erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of the more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increasing cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.
Keywords: mental representations, iconic representations, symbols, human evolution
Procedia PDF Downloads 57
4415 Producing Sustained Renewable Energy and Removing Organic Pollutants from Distillery Wastewater using Consortium of Sludge Microbes
Authors: Anubha Kaushik, Raman Preet
Abstract:
Distillery wastewater in the form of spent wash is a complex and strong industrial effluent, with a high load of organic pollutants that may deplete dissolved oxygen on being discharged into aquatic systems and contaminate groundwater by leaching of pollutants, while untreated spent wash disposed on land acidifies the soil. Stringent legislative measures have therefore been framed in different countries for discharge standards of distillery effluent. Utilising mixed microbial populations to feed on the organic pollutants present in various types of waste is emerging as an eco-friendly approach in recent years, in which complex organic matter is converted into simpler forms and useful gases are simultaneously produced as renewable and clean energy sources. In the present study, wastewater from a rice bran based distillery has been used as the substrate in a dark fermenter, and a native microbial consortium from the digester sludge has been used as the inoculum to treat the wastewater and produce hydrogen. After optimising the operational conditions in batch reactors, sequential batch mode and continuous flow stirred tank reactors were used to study the best operational conditions for enhanced and sustained hydrogen production and removal of pollutants. Since the rate of hydrogen production by the microbial consortium during dark fermentation is influenced by the concentration of organic matter, pH, and temperature, these operational conditions were optimised in batch mode studies. The maximum hydrogen production rate (347.87 ml/L/d) was attained after 32 h of dark fermentation, while a good proportion of COD was also removed from the wastewater. A slightly acidic initial pH seemed to favour biohydrogen production. 
In the continuous stirred tank reactor, high H2 production from distillery wastewater was obtained at a relatively short substrate retention time (SRT) of 48 h and a moderate organic loading rate (OLR) of 172 g/L/d COD.
Keywords: distillery wastewater, hydrogen, microbial consortium, organic pollution, sludge
Procedia PDF Downloads 277
4414 Evaluation of the Spatial Regulation of Hydrogen Sulphide Producing Enzymes in the Placenta during Labour
Authors: F. Saleh, F. Lyall, A. Abdulsid, L. Marks
Abstract:
Background: Labour in humans is a complex biological process that involves interactions of neurological, hormonal, and inflammatory pathways, with the placenta being a key regulator of these pathways. It is known that uterine contractions and labour pain cause physiological changes in gene expression in maternal and fetal blood and in the placenta during labour. Oxidative and inflammatory stress pathways are implicated in labour, and they may cause alteration of placental gene expression. Additionally, in placental tissues, labour increases the expression of genes involved in placental oxidative stress, inflammatory cytokines, angiogenic regulators, and apoptosis. Recently, hydrogen sulphide (H2S) has been considered an endogenous gaseous mediator which promotes vasodilation and exhibits cytoprotective anti-inflammatory properties. Endogenous H2S is synthesised predominantly by two enzymes: cystathionine β-synthase (CBS) and cystathionine γ-lyase (CSE). As the H2S pathway has anti-oxidative and anti-inflammatory characteristics, we hypothesised that the expression of CBS and CSE in placental tissues would alter during labour. Methods: CBS and CSE expression was examined using western blotting and RT-PCR in the inner, middle, and outer placental zones of placentas obtained from healthy non-labouring women who delivered by caesarean section. These were compared with the equivalent zones of placentas obtained from women who had uncomplicated labour and delivered vaginally. Results: No differences in CBS and CSE mRNA or protein levels were found between the different sites within placentas in either the labour or non-labour group. There were no significant differences in either CBS or CSE expression between the two groups at the inner site or the middle site. However, at the outer site there was a highly significant decrease in CBS protein expression in the labour group when compared to the non-labour group (p = 0.002). 
Conclusion: To the best of the authors' knowledge, this is the first report to suggest that CBS is expressed in a spatial manner within the human placenta. Further work is needed to clarify the precise function and mechanism of this spatial regulation, although it is likely that the regulation of inflammatory pathways is a complex process in which this plays a role.
Keywords: anti-inflammatory, hydrogen sulphide, labour, oxidative stress
Procedia PDF Downloads 243
4413 Upconversion Nanoparticle-Mediated Carbon Monoxide Prodrug Delivery System for Cancer Therapy
Authors: Yaw Opoku-Damoah, Run Zhang, Hang Thu Ta, Zhi Ping Xu
Abstract:
Gas therapy is still at an early stage of research and development. Even though most gasotransmitters have proven their therapeutic potential, their handling, delivery, and controlled release have been extremely challenging. This research work employs a versatile nanosystem capable of delivering a gasotransmitter in the form of a photo-responsive carbon monoxide-releasing molecule (CORM) for targeted cancer therapy. The therapeutic action was mediated by upconversion nanoparticles (UCNPs) designed to convert bio-friendly low-energy near-infrared (NIR) light to ultraviolet (UV) light capable of triggering carbon monoxide (CO) release from a water-soluble amphiphilic manganese carbonyl complex CORM incorporated into a carefully designed lipid drug delivery system. Herein, gaseous CO, a gasotransmitter with cytotoxic and homeostatic properties, was investigated to instigate cellular apoptosis. After successfully synthesizing the drug delivery system, the ability of the system to encapsulate and mediate the sustained release of CO after light excitation was demonstrated. A CO fluorescence probe (COFP) was successfully employed to determine the in vitro drug release profile upon NIR light irradiation. The uptake of nanoparticles enhanced by folates and its receptor interaction was also studied for cellular uptake purposes. The anticancer potential of the final lipid nanoparticle Lipid/UCNPs/CORM/FA (LUCF) was determined by cell viability assay. Intracellular CO release and the subsequent therapeutic action involving ROS production, mitochondrial damage, and CO production were also evaluated. In all, the current project uses in vitro studies to determine the potency and efficiency of a NIR-mediated CORM prodrug delivery system.
Keywords: carbon monoxide-releasing molecule, upconversion nanoparticles, site-specific delivery, amphiphilic manganese carbonyl complex, prodrug delivery system
Procedia PDF Downloads 112
4412 Dynamic Mechanical Analysis of Supercooled Water in Nanoporous Confinement and Biological Systems
Authors: Viktor Soprunyuk, Wilfried Schranz, Patrick Huber
Abstract:
In the present work, we show that Dynamic Mechanical Analysis (DMA) with a measurement frequency range f = 0.2-100 Hz is a rather powerful technique for the study of phase transitions (freezing and melting) and glass transitions of water in geometrical confinement. Inserting water into nanoporous host matrices, e.g. Gelsil (pore sizes 2.6 nm and 5 nm) or Vycor (pore size 10 nm), allows one to study size effects occurring at the nanoscale conveniently in macroscopic bulk samples. One obtains valuable insight concerning confinement-induced changes of the dynamics by measuring the temperature and frequency dependencies of the complex Young's modulus Y* for various pore sizes. Solid-liquid transitions or glass-liquid transitions show up as a softening of the real part Y' of the complex Young's modulus, yet with completely different frequency dependencies. Analysing the frequency-dependent imaginary part of the Young's modulus in the glass transition regions for different pore sizes, we find a clear-cut 1/d-dependence of the calculated glass transition temperatures, which extrapolates to Tg(1/d=0) = 136 K, in agreement with the traditional value of water. The results indicate that the main role of the pore diameter is to set the relative amount of water molecules that are near an interface within a length scale of the order of the dynamic correlation length ξ. Thus we argue that the observed strong pore size dependence of Tg is an interfacial effect, rather than a finite size effect. We obtained similar signatures of Y* near glass transitions in different biological objects (fruits, vegetables, and bread). The values of the activation energies for these biological materials in the region of the glass transition are quite similar to those of supercooled water in nanoporous confinement in this region. The present work was supported by the Austrian Science Fund (FWF, project Nr. 
P 28672 – N36).Keywords: biological systems, liquids, glasses, amorphous systems, nanoporous materials, phase transition
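The 1/d extrapolation described above amounts to a straight-line fit of Tg against inverse pore diameter, with the intercept at 1/d = 0 giving the bulk estimate. A minimal sketch in Python, using hypothetical Tg values chosen only to illustrate the procedure (the study's measured values are not reproduced here):

```python
import numpy as np

# Hypothetical (pore diameter, Tg) pairs illustrating the 1/d analysis;
# the diameters match the Gelsil/Vycor matrices, the Tg values do not
# come from the study and are for demonstration only.
d = np.array([2.6, 5.0, 10.0])          # pore diameters in nm
tg = np.array([188.0, 163.04, 149.52])  # hypothetical glass transition temperatures, K

# Linear fit Tg = intercept + slope * (1/d);
# the intercept is the extrapolated bulk value Tg(1/d = 0)
slope, intercept = np.polyfit(1.0 / d, tg, 1)
print(f"extrapolated Tg(1/d = 0) = {intercept:.1f} K")
```

With data that follow the reported trend, the intercept recovers a bulk-like Tg near 136 K while the slope captures the strength of the confinement effect.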
Procedia PDF Downloads 238
4411 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease where genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome, and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to map the whole-genome mutational, transcriptomic, and epigenomic landscapes of cancer specimens and to discover cancer genesis, progression, and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the large size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option for tackling these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options.
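To make the SOM idea concrete, the sketch below trains a minimal self-organizing map in NumPy on random stand-in data. This is not the authors' portrayal pipeline; the grid size, learning-rate and neighbourhood schedules, and the toy "omics" matrix are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples x 30 features, standing in for genome-wide
# expression or methylation profiles (hypothetical, not study data).
data = rng.normal(size=(200, 30))

# A 6x6 grid of prototype vectors, one per map unit.
grid = 6
weights = rng.normal(size=(grid, grid, 30))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

def train(weights, data, epochs=10, lr0=0.5, sigma0=3.0):
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1 - t)              # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5  # shrinking neighbourhood
            # Best-matching unit: prototype closest to the sample
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Gaussian neighbourhood pulls nearby prototypes toward x
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

weights = train(weights, data)

# Each sample's "portrait" position is its best-matching unit on the map
bmus = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                         (grid, grid)) for x in data]
```

After training, samples that land on nearby units share similar molecular profiles, which is the basis of the image-like "portraits" the abstract describes.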
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases, gliomas, melanomas, and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic, and methylomic data. The integrative omics-portrayal approach is based on joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
Procedia PDF Downloads 148
4410 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. 
By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
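As a small concrete instance of the modelling the abstract describes, the sketch below fits a predictive linear model by ordinary least squares on synthetic data. The data and the model are illustrative assumptions, not taken from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: a noisy linear trend standing in for any
# domain data (finance, healthcare, engineering, ...).
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=100)

# Ordinary least squares: solve A @ beta ~ y for [slope, intercept]
A = np.column_stack([x, np.ones_like(x)])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = beta
print(f"fitted model: y = {slope:.2f} * x + {intercept:.2f}")
```

Even this minimal example shows the bridge the abstract argues for: a rigorous mathematical formulation (least squares) turned directly into a data-driven prediction tool.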
Procedia PDF Downloads 75
4409 The Role of Hypothalamus Mediators in Energy Imbalance
Authors: Maftunakhon Latipova, Feruza Khaydarova
Abstract:
Obesity is considered a chronic metabolic disease that occurs at any age. Body weight is regulated through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the supply of energy from food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The hypothalamic nuclei are interconnected and interdependent in receiving, integrating, and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behaviour. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight and obesity and to measure the markers Orexin A and Neuropeptide Y. Questionnaires covering eating disorders and hidden depression (on the Zung scale) were administered to the 200 participants. Anthropometric measurements included waist circumference, hip circumference, BMI, weight, and height. Based on the collected data, participants were divided into three groups: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 analysed persons, 86% had eating disorders. Of these, 60% of the eating disorders were associated with childhood. According to the Zung test results, about 37% were in a normal condition, 20% had mild depressive disorder, 25% had moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. The group with obesity had eating disorders and moderate-to-severe depressive disorder, and the overweight group had mild depressive disorder. According to laboratory data, the obese group had the lowest serum concentrations of Orexin A and Neuropeptide Y.
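The three study groups appear to be defined by body-mass index. A minimal sketch of such a grouping, assuming the standard WHO cut-offs of 25 and 30 kg/m² (the abstract does not state its exact criteria, so the thresholds here are an assumption):

```python
def bmi_group(weight_kg: float, height_m: float) -> str:
    """Assign a study group from weight and height using assumed
    WHO BMI cut-offs (>= 30 obese, >= 25 overweight, else control)."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 30.0:
        return "obese"
    if bmi >= 25.0:
        return "overweight"
    return "control"

# Example: a 95 kg, 1.70 m participant has BMI ~32.9 -> "obese" group
print(bmi_group(95, 1.70))
```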
Conclusions: Overweight and obesity are the first signal of many diseases, and prevention and detection of these disorders will help prevent various diseases, including type 2 diabetes. The aetiology of obesity is associated with eating disorders and impaired signalling of the orexinergic system of the hypothalamus.
Keywords: obesity, endocrinology, hypothalamus, overweight
Procedia PDF Downloads 76