Search results for: syntactic complexity
425 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL
Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara
Abstract:
PostgreSQL is an Object-Relational Database Management System (ORDBMS) that has been in existence for a long time. Despite the superior features it wraps and packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL in order to encourage its utilization and elucidate its importance. PostgreSQL is also known as the world’s most elementary SQL-compliant open-source ORDBMS. Yet users have not resolved to adopt PostgreSQL, because it remains under-publicized and because its persistently textual environment is complex for an introductory user. Simply stated, there is a dire need to explicate an easy way for users to comprehend the procedures and standards by which databases are created in PostgreSQL, how tables and the relationships among them are defined, and how queries are manipulated and their flow controlled by conditions, so that the community adopts PostgreSQL at an augmented rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community hesitates to migrate to PostgreSQL’s environment will be carried out. These findings will be modulated and tailored based on the scope and the constraints discovered. The research proposes a system that will serve as a designing platform as well as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user’s cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, it is expected to highlight the elementary features displayed by PostgreSQL over existing systems, in order to grasp and disseminate the importance and simplicity it offers to a hesitant user.
Keywords: cognition, database, PostgreSQL, text-editor, visual-editor
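To illustrate the persistently textual workflow that the abstract argues is daunting for introductory users, the following is a minimal sketch of creating related tables and running a conditional query in PostgreSQL from Python; the connection parameters, table names, and query are hypothetical, not part of the proposed system.

```python
# Minimal sketch of PostgreSQL's textual workflow via psycopg2.
# Connection parameters and table names are illustrative only.
import psycopg2

conn = psycopg2.connect(dbname="school", user="postgres",
                        password="secret", host="localhost")
cur = conn.cursor()

# Creating tables and the relationship between them is done entirely in text (SQL).
cur.execute("""
    CREATE TABLE IF NOT EXISTS department (
        dept_id SERIAL PRIMARY KEY,
        name    TEXT NOT NULL
    );
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS student (
        student_id SERIAL PRIMARY KEY,
        name       TEXT NOT NULL,
        dept_id    INTEGER REFERENCES department(dept_id)  -- the relationship
    );
""")

# A query whose flow is controlled by a condition (the WHERE clause).
cur.execute("SELECT s.name FROM student s "
            "JOIN department d ON s.dept_id = d.dept_id WHERE d.name = %s",
            ("Biology",))
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```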
Procedia PDF Downloads 282
424 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces
Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek
Abstract:
Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may impose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. From real-life observations, in cases with suppressed daily variations of boundary conditions, e.g. in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g. in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main results include the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a closed circuit of square section with rough overall dimensions of 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control air flow inside the tunnel and a heat and humidity exchange unit to control the internal RH and variations in temperature. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the variations of parameters during the experiments. The research is ongoing, and the final results of the experimental investigation are expected at the end of 2022.
Keywords: moisture, mold growth, testing, wood
Procedia PDF Downloads 129
423 The MicroRNA-2110 Suppressed Cell Proliferation and Migration Capacity in Hepatocellular Carcinoma Cells
Authors: Pelin Balcik Ercin
Abstract:
Introduction: ZEB2, a member of the ZEB transcription factor family, has a role in epithelial-to-mesenchymal transition during development and metastasis. Altered expression of circulating extracellular miRNAs is observed in disease, and extracellular miRNAs have an important role in the cancer cell microenvironment. In a ChIP-Seq study, the expression of miR-2110 was found to be regulated by ZEB2. In this study, the effects of miR-2110 on cell proliferation and migration of hepatocellular carcinoma (HCC) cells were examined. Material and Methods: SNU398 cells were transfected with mimic miR-2110 (20 nM) (HMI0375, Sigma-Aldrich) or negative control miR (HMC0002, Sigma-Aldrich). MicroRNA isolation was accomplished with the miRVANA isolation kit according to the manufacturer's instructions. cDNA synthesis was then performed, and expression was calibrated against the Ct of controls. The real-time quantitative PCR (RT-qPCR) reaction was performed using the TaqMan Fast Advanced Master Mix (Thermo Sci.). Ct values of miR-2110 were normalized to miR-186-5p and miR-16-5p as intracellular reference genes. Cell proliferation was analyzed with the xCELLigence RTCA System. The wound healing assay was analyzed with the ImageJ program, and relative fold change was calculated. Results: SNU398 cells transfected with mimic-miR-2110 expressed nearly nine-fold (log2) more miR-2110 than negative-control-transfected cells. Proliferation of mimic-miR-2110-transfected HCC cells was significantly inhibited compared to the negative control cells. Furthermore, miR-2110-SNU398 cell migration capacity was decreased roughly four-fold relative to negative control-miR-SNU398 cells. Conclusion: Our results suggest that miR-2110 inhibited cell proliferation and also negatively affected cell migration compared to control groups in HCC cells. These data point to the complexity of microRNA-EMT transcription factor regulation. These initial results point to the predictive biomarker capacity of miR-2110 in HCC.
Keywords: epithelial to mesenchymal transition, EMT, hepatocellular carcinoma cells, micro-RNA-2110, ZEB2
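As a minimal sketch of the relative-quantification arithmetic behind the reported nine-fold (log2) overexpression, the standard 2^(-ΔΔCt) method normalizes the target Ct to reference miRNAs and calibrates against the control transfection; the Ct values below are illustrative, not the study's data.

```python
# Illustrative 2^(-delta-delta-Ct) relative quantification; Ct values are hypothetical.
def log2_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference miRNA
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control   # calibrate against control cells
    return -delta_delta_ct                                 # log2 fold change = -ddCt

# Example: mimic-miR-2110 vs. negative-control transfection,
# normalized to the mean Ct of miR-186-5p and miR-16-5p.
print(log2_fold_change(ct_target_treated=18.0, ct_ref_treated=21.0,
                       ct_target_control=27.0, ct_ref_control=21.0))  # -> 9.0 (log2)
```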
Procedia PDF Downloads 123
422 Women and Terrorism in Nigeria: Policy Templates for Addressing Complex Challenges in a Changing Democratic State
Authors: Godiya Pius Atsiya
Abstract:
One of the most devastating impacts of terrorism on the Nigerian state is the danger it has posed to women, children and other vulnerable groups. The complexity of terrorism in Nigeria, especially in most parts of Northern Nigeria, has entrenched unprecedented security challenges such as refugee crises, kidnapping, food shortages, increases in death tolls, malnutrition, fear, rape and several other psychological harms. Of particular interest in this paper, as it relates to terrorism, is the high rate of Internally Displaced Persons (IDPs), with women, children and the aged being the most affected. Empirical evidence arising from recent developments in Nigeria’s North-East geo-political zone shows that the number of refugees fleeing the Boko Haram attacks has doubled. The attendant consequence of this mass exodus of people in the affected areas is that the victims now suffer untold and unwarranted economic hardship. In another dimension, recent findings indicate that powerless women and young teenage girls have been forcefully conscripted into Islamic extremist groups and used as shields. In some respects, these groups of people have been used as available tools for suicide bombing and other criminal ends, the result of which can be detrimental to social cohesion and integration. This work is a theoretical insight into terrorism discourses; hence, the paper relies on the existing works of scholars in carrying out the research. The paper argues that the implications of terrorism for women have grounding effects on the moral psyche of women, who are supposed to be home managers and custodians of morality in society. The burden of terrorism and all it tends to propagate has literally upturned social lives, and hence Nigeria is gradually being plunged into the Hobbesian state of nature. As a panacea for resolving this social malaise, the paper submits that government and, indeed, all stakeholders in the nation’s democratic project must expedite action to nip this trend in the bud. The paper sums up with a conclusion and alternative policy measures to mitigate the challenges of terrorism in Nigeria.
Keywords: changing democratic state, policy measures, terrorism, women
Procedia PDF Downloads 230
421 Dosimetry in Interventional Radiology Examinations for Occupational Exposure Monitoring
Authors: Ava Zarif Sanayei, Sedigheh Sina
Abstract:
Interventional radiology (IR) uses imaging guidance, including X-rays and CT scans, to deliver therapy precisely. Most IR procedures are performed under local anesthesia and start with a small needle being inserted through the skin, which is why they may be called pinhole surgery or image-guided surgery. There is increasing concern about radiation exposure during interventional radiology procedures due to their complexity. The basic aim of optimizing radiation protection, as outlined in ICRP 139, is to strike a balance between image quality and radiation dose while maximizing benefits, ensuring that diagnostic interpretation remains satisfactory. This study aims to estimate the equivalent doses to the main trunk of the body for the interventional radiologist and the superintendent using LiF:Mg,Ti (TLD-100) chips at the IR department of a hospital in Shiraz, Iran. In the initial stage, the dosimeters were calibrated with the use of various phantoms. Afterward, a group of dosimeters was prepared and then worn for three months. To measure the personal equivalent dose to the body, three TLD chips were put in a tissue-equivalent badge and worn under a protective lead apron. At the end of the period, the TLDs were read out by a TLD reader. The results revealed that these individuals received equivalent doses of 387.39 and 145.11 µSv, respectively. The findings of this investigation revealed that the total radiation exposure of the staff was less than the annual limit for occupational exposure; however, it is imperative to implement appropriate radiation protection measures. The somewhat noticeable dose received by the interventional radiologist may be due to the use of conventional equipment with over-couch X-ray tubes for interventional procedures. It is therefore important to use dedicated equipment and protective means such as glasses and screens whenever compatible with the intervention, when they are available, or to have them fitted to equipment if they are not present. Based on the results, inappropriate placement of staff increased the dose to the radiologist. Manufacturing and installing movable lead curtains with a thickness of 0.25 millimeters can effectively minimize the radiation dose to the body. Providing adequate training on radiation safety principles, particularly for technologists, can be an optimal approach to further decreasing exposure.
Keywords: interventional radiology, personal monitoring, radiation protection, thermoluminescence dosimetry
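As a rough check of the statement that the staff doses fall below the occupational limit, the three-month readings can be extrapolated to a year and compared with the ICRP occupational effective dose limit of 20 mSv per year (averaged over five years); a minimal sketch, assuming simple linear extrapolation:

```python
# Extrapolate quarterly TLD readings to an annual dose and compare with
# the ICRP occupational limit (20 mSv/year averaged over 5 years).
ANNUAL_LIMIT_USV = 20_000  # 20 mSv expressed in microsieverts

quarterly_doses_usv = {"interventional radiologist": 387.39,
                       "superintendent": 145.11}

for role, dose in quarterly_doses_usv.items():
    annual_estimate = dose * 4  # four 3-month monitoring periods per year
    print(f"{role}: ~{annual_estimate:.0f} uSv/year "
          f"({100 * annual_estimate / ANNUAL_LIMIT_USV:.1f}% of the 20 mSv limit)")
```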
Procedia PDF Downloads 61
420 Effectiveness with Respect to Time-to-Market and the Impacts of Late-Stage Design Changes in Rapid Development Life Cycles
Authors: Parth Shah
Abstract:
The author examines the recent trend whereby business organizations significantly reduce their development cycle times to stay competitive in today’s global marketspace. The author proposes a rapid systems engineering framework to address late design changes and allow for flexibility (i.e. to react to unexpected or late changes and their impacts) during the product development cycle. A systems engineering approach is crucial in today’s product development for delivering complex products to the marketplace. Design changes can occur due to shortened timelines and also in response to initial consumer feedback once a product or service is in the marketplace. The ability to react to change and address customer expectations in a responsive and cost-efficient manner is crucial for any organization to succeed. Past literature, research, and methods such as concurrent development, simultaneous engineering, knowledge management, component sharing, rapid product integration, tailored systems engineering processes, and studies on reducing product development cycles all suggest that a research gap exists in specifically addressing late design changes due to the shortening of life cycles in increasingly competitive markets. The author’s research suggests that 1) product development cycle times are now measured in months instead of years, 2) more and more products have interdependent systems and environments that are fast-paced and resource-critical, 3) product obsolescence is higher and more organizations are releasing products and services frequently, and 4) increasingly competitive markets are leading to customization based on consumer feedback. The author will quantify effectiveness with respect to success factors such as time-to-market, return on investment, life cycle time and flexibility in late design changes by complexity of product or service, number of late changes and ability to react to and reduce late design changes.
Keywords: product development, rapid systems engineering, scalability, systems engineering, systems integration, systems life cycle
Procedia PDF Downloads 203
419 SynKit: An Event-Driven and Scalable Microservices-Based Kitting System
Authors: Bruno Nascimento, Cristina Wanzeller, Jorge Silva, João A. Dias, André Barbosa, José Ribeiro
Abstract:
The increasing complexity of logistics operations stems from evolving business needs, such as the shift from mass production to mass customization, which demands greater efficiency and flexibility. In response, Industry 4.0 and 5.0 technologies provide improved solutions to enhance operational agility and better meet market demands. The management of kitting zones, combined with the use of Autonomous Mobile Robots (AMRs), faces challenges related to coordination, resource optimization, and rapid response to fluctuations in customer demand. Additionally, implementing lean manufacturing practices in this context must be carefully orchestrated by intelligent systems and human operators to maximize efficiency without sacrificing the agility required in an advanced production environment. This paper proposes and implements a microservices-based architecture integrating principles from Industry 4.0 and 5.0 with lean manufacturing practices. The architecture enhances communication and coordination between autonomous vehicles and kitting management systems, allowing more efficient resource utilization and increased scalability. The proposed architecture focuses on the modularity and flexibility of operations, enabling seamless adaptation to changing demands and the efficient allocation of resources in real time. This approach is expected to significantly improve the efficiency and scalability of logistics operations by reducing waste and optimizing resource use while improving responsiveness to demand changes. The implementation of this architecture provides a robust foundation for the continuous evolution of kitting management and process optimization. It is designed to adapt to dynamic environments marked by rapid shifts in production demands and real-time decision-making. It also ensures seamless integration with automated systems, aligning with Industry 4.0 and 5.0 needs while reinforcing lean manufacturing principles.
Keywords: microservices, event-driven, kitting, AMR, lean manufacturing, industry 4.0, industry 5.0
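A minimal sketch of the event-driven style the architecture relies on: services communicate only through published events, so the kitting service and the AMR dispatcher stay decoupled and can be scaled independently. The event names, payloads, and handlers below are illustrative assumptions, not the SynKit API; a real deployment would use a message broker such as Kafka or RabbitMQ rather than an in-memory bus.

```python
# Toy in-memory event bus illustrating event-driven decoupling of microservices.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()

# The kitting service reacts to new orders and announces when a kit is ready.
def kitting_service(order):
    print(f"Kitting zone: picking parts for order {order['id']}")
    bus.publish("kit.ready", {"order_id": order["id"], "station": "KZ-1"})

# The AMR dispatcher reacts to ready kits, without knowing the kitting service.
def amr_dispatcher(kit):
    print(f"AMR dispatched to {kit['station']} for order {kit['order_id']}")

bus.subscribe("order.created", kitting_service)
bus.subscribe("kit.ready", amr_dispatcher)

bus.publish("order.created", {"id": 42})
```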
Procedia PDF Downloads 20
418 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System
Authors: Nicolas M. Beleski, Gustavo A. G. Lugo
Abstract:
Robotic agents are taking on more, and increasingly important, roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led recent researchers to investigate ways to explain the AI behavior behind these systems in search of more trustworthy interactions. A current problem in explainable AI concerns the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to easing the intentionality gap, and to achieve that we should be able to draw inferences over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios, emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, where an unproven cause is taken as truth, induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thought in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulating effect emotions have on the belief system, how it affects the logic inference process, and consequently the decision-making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. The hypothesis to be tested involves two main points: how emotions, modeled as internal argument-strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind
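To make the described modulation concrete, the sketch below implements one hedged reading of the idea: emotion acts as a multiplier on argument strength, so that observing a consequent Q can push belief in the antecedent P past an assertion threshold (affirming the consequent). This is an illustrative toy model with assumed strengths and thresholds, not the authors' architecture.

```python
# Toy model: emotion as an internal argument-strength modulator.
# All priors, strengths, and thresholds are illustrative assumptions.
def update_belief(prior_p, rule_strength, emotion_intensity,
                  observed_q, threshold=0.8):
    """Belief in antecedent P after observing consequent Q of the rule P -> Q."""
    if not observed_q:
        return prior_p, False
    # Emotion inflates the perceived strength of the (logically invalid)
    # abductive step from Q back to P.
    modulated = min(1.0, rule_strength * (1.0 + emotion_intensity))
    posterior = prior_p + (1.0 - prior_p) * modulated
    return posterior, posterior >= threshold  # True = the agent asserts P

# Calm agent: belief in P rises, but P is not asserted.
print(update_belief(prior_p=0.3, rule_strength=0.5,
                    emotion_intensity=0.1, observed_q=True))
# Emotional agent: same evidence, but P is now (wrongly) asserted as true.
print(update_belief(prior_p=0.3, rule_strength=0.5,
                    emotion_intensity=0.9, observed_q=True))
```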
Procedia PDF Downloads 130
417 The Basin Management Methodology for Integrated Water Resources Management and Development
Authors: Julio Jesus Salazar, Max Jesus De Lama
Abstract:
The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.) and levels (area: patches to global; knowledge: specific cases to generalized principles). In this context, we need to apply both natural and non-natural measures to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention, erosion control and soil formation to protect water resources and address the challenges related to the recovery or conservation of the ecosystem, as well as the natural characteristics of water bodies, in order to improve the quantitative status of water bodies and reduce vulnerability to floods and droughts. This method of water management focuses on positive impacts on the chemical and ecological status of water bodies and on restoring the functioning of the ecosystem and its natural services, thus contributing to both adaptation to and mitigation of climate change. This methodology was applied in 7 interventions in the sub-basin of the Shullcas River in Huancayo-Junín-Peru, obtaining great benefits within a framework of participating alliances of actors and integrated planning scenarios. To implement the methodology in the sub-basin of the Shullcas River, a process called Climate Smart Territories (CST) was used, with which the variables were characterized in a highly complex space. The diagnosis was then developed using risk management and adaptation to climate change. Finally, the process concluded with the selection of alternatives and projects of this type. The CST approach and process therefore face the challenges of climate change through integrated, systematic, interdisciplinary and collective responses at different scales that fit the needs of ecosystems and their services, which are vital to human well-being. This methodology is now being replicated at the level of the Mantaro river basin, improving with other initiatives that lead to the model of a resilient basin.
Keywords: climate change, Climate Smart Territories (CST) approach, ecosystem services, natural measures
Procedia PDF Downloads 149
416 An Investigative Study into Good Governance in the Non-Profit Sector in South Africa: A Systems Approach Perspective
Authors: Frederick M. Dumisani Xaba, Nokuthula G. Khanyile
Abstract:
There is a growing demand in the developing world for greater accountability, transparency and ethical conduct based on sound governance principles. Funders, donors and sponsors are increasingly demanding more transparency, better value for money and adherence to good governance standards. The drive towards improved governance measures is largely influenced by the need to ‘plug the leaks’, deal with malfeasance, engender greater levels of accountability and good governance, and ultimately attract further funding or investment. This is the case with Non-Profit Organizations (NPOs) in South Africa in general, and in the province of KwaZulu-Natal in particular. The paper draws from good governance theory, stakeholder theory and systems thinking to critically examine the requirements for good governance in the NPO sector from a theoretical and legislative point of view, and to systematically look at the contours of governance currently found among NPOs. The paper does this through a rigorous examination of vignettes of governance cases among selected NPOs based in KwaZulu-Natal. The study used qualitative and quantitative research methodologies: document analysis, literature review, semi-structured interviews, focus groups and statistical analysis of various primary and secondary sources. It found some cases of good governance but also frightening levels of poor governance. There was exponential growth in NPOs registered during the period under review; equally, there was an increase in cases of non-compliance with good governance practices. NPOs operate in an increasingly complex environment. There is contestation for influence and access to resources. Stakeholder management is poorly conceptualized and executed. Recognizing that the NPO sector operates in an environment characterized by complexity, constant change, unpredictability, contestation, diversity and the divergent views of different stakeholders, there is a need to apply legislative and systems thinking approaches to strengthen governance so that it withstands this turbulence, through a capacity development model that recognizes these contextual and environmental challenges.
Keywords: good governance, non-profit organizations, stakeholder theory, systems theory
Procedia PDF Downloads 120
415 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion
Authors: Radim Sip, Denisa Denglerova
Abstract:
It is impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, where processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e. with the essential blending of actors and their environment. Primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for how such needs can be met. For examining a complex problem, corresponding methodological tools and an overall research design are necessary. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field changing its forms in response to its actors’ behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve such an ideal under the given circumstances. Such achievement takes many shapes and thus cannot be captured by a description of objects. It can be expressed in relationships in the situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over a clear and final definition of actors, entities, etc. Situational analysis assumes the creation of constructs as a tool for solving the problem at hand. It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers “cartographic tools” (situational maps, social worlds/arenas maps, positional maps) that are able to capture complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena taking place with a certain historical preference, a complex that can be overlooked if analyzed with a more traditional approach.
Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion
Procedia PDF Downloads 146
414 New Insights into Ethylene and Auxin Interplay during Tomato Ripening
Authors: Bruna Lima Gomes, Vanessa Caroline De Barros Bonato, Luciano Freschi, Eduardo Purgatto
Abstract:
Plant hormones have long been known to be tightly associated with fruit development and are involved in controlling various aspects of fruit ripening. In fleshy fruits, ripening is characterized by changes in texture, color, aroma and other parameters that markedly contribute to fruit quality. Ethylene is one of the major players regulating ripening-related processes, but emerging evidence suggests that auxin is also part of this dynamic control. Thus, the aim of this study was to provide new insights into the role of auxin during ripening and into the hormonal interplay between auxin and ethylene. Tomato fruits (Micro-Tom) were collected at the mature green stage and separated into four groups: one for indole-3-acetic acid (IAA) treatment, one for ethylene, one for a combination of IAA and ethylene, and one for control. The hormone solution was injected through the stylar apex, while mock samples were injected with buffer only. For ethylene treatments, fruits were exposed to the gaseous hormone. The fruits were then left to ripen under standard conditions; to assess ripening development, hue angle was reported as a color indicator and ethylene production was measured by gas chromatography. The transcript levels of three ripening-related ethylene receptors (LeETR3, LeETR4 and LeETR6) were evaluated by RT-qPCR. Results showed that ethylene treatment induced ripening, stimulated ethylene production, accelerated color changes and induced receptor expression, as expected. Nonetheless, auxin treatment had the opposite effect: the fruits remained green for a longer time than the control group, and ethylene perception was changed, judging by the reduced levels of receptor transcripts. Further, treatment with both hormones revealed that the auxin effect in delaying ripening was predominant, even in the presence of higher levels of ethylene. Altogether, the data suggest that auxin modulates several aspects of tomato fruit ripening by modifying ethylene perception. This knowledge about the hormonal control of fruit development will help design new strategies for effective manipulation of ripening with regard to fruit quality, and it brings a new level of complexity to the regulation of fruit ripening.
Keywords: ethylene, auxin, fruit ripening, hormonal crosstalk
Procedia PDF Downloads 458
413 A Case Study on an Integrated Analysis of Well Control and Blowout Accident
Authors: Yasir Memon
Abstract:
The complexity of and challenges in the offshore industry are greater than in the past, and the oil and gas industry expands every day by meeting these challenges. More challenging wells, both longer and deeper, are being drilled in today’s environment, and blowout prevention therefore holds particular importance in the oil and gas world. In the early years, when the oil and gas industry was growing, drilling operations were extremely dangerous: there was no technology to determine reservoir pressure, and drilling was hence a blind operation. A blowout arises when uncontrolled reservoir pressure enters the wellbore. A potential blowout in the oil industry is a danger to both the environment and human life, causing losses through environmental damage, penalties from state and national regulators, and lost capital investment. Many blowouts in the oil and gas industry have caused damage to both people and the environment, and huge capital investments are being made across the industry to prevent blowouts and keep damage to a minimum. The objective of this study is to promote safety and good resources to assure safety and environmental integrity in all operations during drilling. This study shows that human error and management failure are the main causes of blowouts; therefore, proper management, with the wise use of precautions, prevention methods and controlling techniques, can reduce the probability of a blowout to a minimum level. It also discusses the basic procedures, concepts and equipment involved in well control methods and the steps used under various conditions. A further aim of this study is to highlight the role of management in oil and gas operations. Moreover, this study analyzes the causes of the blowout of the Macondo well, which occurred in the Gulf of Mexico on April 20, 2010, delivers recommendations and analysis of various aspects of well control methods, and lists the mistakes and compromises that British Petroleum and its partners made during drilling and well completion; the Macondo well disaster happened due to the violation of various safety and development rules. This case study concludes that the Macondo well blowout disaster could have been avoided with proper management of personnel and communication between them, and that by following safety rules and laws it could have been limited to minimal environmental damage.
Keywords: energy, environment, oil and gas industry, Macondo well accident
Procedia PDF Downloads 185
412 Investigating the Relationship between Moral Hazard and Corporate Governance and Earnings Forecast Quality in the Tehran Stock Exchange
Authors: Fatemeh Rouhi, Hadi Nassiri
Abstract:
Earnings forecasts are a key element in economic decisions, but in some situations, conflicts of interest in financial reporting, complexity, and a lack of direct access to information lead to information asymmetry between individuals within the organization and external investors and creditors. This asymmetry gives rise to adverse selection and moral hazard in investors’ decisions and makes it difficult for users to assess the data directly. In this regard, the role of trustees in corporate governance is crystallized in controls and procedures that ensure management does not act in its own interests but moves in the direction of maximizing shareholder and company value. Given the importance of companies’ earnings forecasts in the capital market and the need to identify the factors influencing them, this study attempts to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market, and to determine its impact on earnings forecast quality. Drawing on the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented, which have been examined on the basis of available models using the panel data method; conclusions were drawn at the 95% confidence level according to the significance of the model and each independent variable. In examining the models, the Chow test was first used to decide between the panel data method and the pooled method; the Hausman test was then applied to choose between random effects and fixed effects (see the sketch below). The findings show that because most of the variables relating moral hazard to earnings forecast quality are positively associated, the earnings forecast quality of companies listed on the Tehran Stock Exchange increases as moral hazard increases. Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
Keywords: corporate governance, earnings forecast quality, moral hazard, financial sciences
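A minimal sketch of the Hausman statistic used to choose between fixed and random effects: H = (b_FE − b_RE)' [Var(b_FE) − Var(b_RE)]⁻¹ (b_FE − b_RE), compared against a chi-square critical value. The coefficient vectors and covariance matrices below are illustrative placeholders for estimates that would come from the study's panel regressions.

```python
# Hausman specification test from fixed-effects (FE) and random-effects (RE)
# panel estimates; the numbers below are illustrative placeholders.
import numpy as np
from scipy import stats

b_fe = np.array([0.42, -0.15])  # FE coefficients (e.g., board independence, board size)
b_re = np.array([0.35, -0.10])  # RE coefficients
v_fe = np.array([[0.010, 0.001], [0.001, 0.020]])  # FE covariance matrix
v_re = np.array([[0.006, 0.000], [0.000, 0.012]])  # RE covariance matrix

diff = b_fe - b_re
h_stat = diff @ np.linalg.inv(v_fe - v_re) @ diff
p_value = 1 - stats.chi2.cdf(h_stat, df=len(diff))

# A small p-value rejects RE consistency, favouring fixed effects.
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")
```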
Procedia PDF Downloads 321
411 Genotyping and Phylogeny of Phaeomoniella Genus Associated with Grapevine Trunk Diseases in Algeria
Authors: A. Berraf-Tebbal, Z. Bouznad, A.J.L. Phillips
Abstract:
Phaeomoniella is a fungal genus in the mitosporic Ascomycota that includes the species Phaeomoniella chlamydospora, associated with two decline diseases of grapevine (Vitis vinifera), namely Petri disease and esca. Recent studies have shown that several Phaeomoniella species also cause disease on many other woody crops, such as forest trees and woody ornamentals. Two new species, Phaeomoniella zymoides and Phaeomoniella pinifoliorum H.B. Lee, J.Y. Park, R.C. Summerbell et H.S. Jung, were isolated from the needle surface of Pinus densiflora Sieb. et Zucc. in Korea. The identification of species in the genus Phaeomoniella can be a difficult task if based solely on morphological and cultural characters. In this respect, the application of molecular methods, particularly PCR-based techniques, may provide an important contribution. MSP-PCR (microsatellite-primed PCR) fingerprinting has proven useful in the molecular typing of fungal strains. The high discriminatory potential of this method is particularly useful when dealing with closely related or cryptic species. In the present study, PCR fingerprinting was performed using the microsatellite primer M13 for the purpose of species identification and strain typing of 84 Phaeomoniella-like isolates collected from grapevines with typical symptoms of dieback. The bands produced by the MSP-PCR profiles divided the strains into 3 clusters and 5 singletons at a reproducibility level of 80%. Representative isolates from each group and, where possible, isolates from Eutypa dieback and esca symptoms were selected for sequencing of the ITS region. The ITS sequences of the 16 isolates selected from the MSP-PCR profiles were combined and aligned with the sequences of 18 isolates retrieved from GenBank, representing a selection of all known Phaeomoniella species. DNA sequences were compared with those available in GenBank using neighbor-joining (NJ) and maximum-parsimony (MP) analyses (a minimal sketch of the NJ step is given below). The phylogenetic trees of the ITS region revealed that the Phaeomoniella isolates clustered with Phaeomoniella chlamydospora reference sequences with a bootstrap support of 100%. The complexity of the vine trunk-disease pathosystems clearly shows the need to identify the fungal component unambiguously, in order to allow a better understanding of the etiology of these diseases and to justify the establishment of control strategies against these fungal agents.
Keywords: genotyping, MSP-PCR, ITS, phylogeny, trunk diseases
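As a minimal sketch of the neighbor-joining step on aligned ITS sequences, using Biopython: the input file name is hypothetical, and the alignment is assumed to have been produced beforehand (e.g., with MAFFT or ClustalW).

```python
# Neighbor-joining tree from an ITS alignment (file name is hypothetical);
# the multiple sequence alignment itself must be prepared beforehand.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("its_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")   # simple p-distance on the ITS region
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)     # neighbor-joining, as in the study

Phylo.draw_ascii(nj_tree)
```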
Procedia PDF Downloads 476
410 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions
Authors: Varvara Roubtsova, Mohamed Chekired
Abstract:
Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium in which the particles can interact with each other and with water flow under external forces, structural loads or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton’s laws to each particle (a minimal sketch of this core loop follows below). The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran’s capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or by viscous interaction in the flow, and so on). This work also includes first attempts to apply the micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations; the material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics
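A minimal sketch of the DEM core the abstract describes, applying Newton's second law to each particle with a linear contact spring and explicit time stepping; the stiffness, masses, and time step are illustrative assumptions, and a production code like SiGran would add damping, friction, and 3-D contact detection.

```python
# Minimal 1-D DEM loop: Newton's second law per particle with a linear
# contact spring; all parameter values are illustrative.
import numpy as np

n, radius, k, mass, dt, g = 5, 0.01, 1.0e4, 0.001, 1.0e-5, 9.81
x = np.linspace(0.0, 0.1, n)          # particle positions (m)
v = np.zeros(n)                       # particle velocities (m/s)

for step in range(1000):
    f = np.full(n, -mass * g)         # gravity on every particle
    # Floor contact at x = 0 supports the lowest particle.
    overlap0 = radius - x[0]
    if overlap0 > 0.0:
        f[0] += k * overlap0
    # Pairwise contact forces: linear spring acting on the overlap.
    for i in range(n - 1):
        overlap = 2 * radius - (x[i + 1] - x[i])
        if overlap > 0.0:
            f_contact = k * overlap
            f[i] -= f_contact         # equal and opposite (Newton's third law)
            f[i + 1] += f_contact
    # Explicit (semi-implicit Euler) integration of Newton's second law.
    v += (f / mass) * dt
    x += v * dt
```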
Procedia PDF Downloads 300
409 Building a Blockchain-Based Internet of Things
Authors: Rob van den Dam
Abstract:
Today’s Internet of Things (IoT) comprises more than a billion intelligent devices, connected via wired/wireless communications. The expected proliferation of hundreds of billions more places us at the threshold of a transformation sweeping across the communications industry. Yet we found that the IoT architectures and solutions that currently work for billions of devices won’t necessarily scale to tomorrow’s hundreds of billions of devices, because of high cost, lack of privacy, lack of future-proofing, lack of functional value and broken business models. As the IoT scales exponentially, decentralized networks have the potential to reduce infrastructure and maintenance costs for manufacturers. Decentralization also promises increased robustness by removing the single points of failure that can exist in traditional centralized networks. By shifting the power in the network from the center to the edges, devices gain greater autonomy and can become points of transactions and economic value creation for owners and users. To validate the underlying technology vision, IBM jointly developed with Samsung Electronics an autonomous decentralized peer-to-peer proof-of-concept (PoC). The primary objective of this PoC was to establish a foundation on which to demonstrate several capabilities that are fundamental to building a decentralized IoT. Though many commercial systems in the future will exist as hybrid centralized-decentralized models, the PoC demonstrated a fully distributed proof. The PoC (a) validated the future vision for decentralized systems to extensively augment today’s centralized solutions, (b) demonstrated foundational IoT tasks without the use of centralized control, and (c) proved that empowered devices can engage autonomously in marketplace transactions. The PoC opens the door for the communications and electronics industry to further explore the challenges and opportunities of potential hybrid models that can address the complexity and variety of requirements posed by an internet that continues to scale. Contents: (a) the new approach for an IoT that will be secure and scalable, (b) the three foundational technologies that are key for the future IoT, (c) the related business models and user experiences, (d) how such an IoT will create an ‘Economy of Things’, (e) the role of users, devices, and industries in the IoT future, (f) the winners in the IoT economy.
Keywords: IoT, internet, wired, wireless
Procedia PDF Downloads 335
408 Characterization of Complex Gold Ores for Preliminary Process Selection: The Case of Kapanda, Ibindi, Mawemeru, and Itumbi in Tanzania
Authors: Sospeter P. Maganga, Alphonce Wikedzi, Mussa D. Budeba, Samwel V. Manyele
Abstract:
This study characterizes complex gold ores (elemental and mineralogical composition, gold distribution, ore grindability, and mineral liberation) for preliminary process selection. About 200 kg of ore samples were collected from each location using systematic sampling by mass interval. The ores were dried, crushed, milled, and split into representative sub-samples (about 1 kg) for analyses of elemental and mineralogical composition using X-ray fluorescence (XRF), fire assay with an atomic absorption spectrometry (AAS) finish, and X-ray diffraction (XRD), respectively. The gold distribution was studied on size-by-size fractions, while ore grindability was determined using the standard Bond test. The mineral liberation analysis was conducted using a ThermoFisher Scientific Mineral Liberation Analyzer (MLA) 650, with unsieved polished grain mounts (80% passing 700 µm) used as MLA feed. Two MLA measurement modes, X-ray modal analysis (XMOD) and sparse phase liberation-grain X-ray mapping analysis (SPL-GXMAP), were employed. At least two cyanide consumers (Cu, Fe, Pb, and Zn) and kinetics impeders (Mn, S, As, and Bi) were present in all locations investigated. The copper content at Kapanda (0.77% Cu) and Ibindi (7.48% Cu) exceeded the recommended threshold of 0.5% Cu for direct cyanidation. The gold ore at Ibindi showed a higher rate of grinding compared to the other locations, which could be explained by its highest grindability (2.119 g/rev) and lowest Bond work index (10.213 kWh/t) values (see the worked example below). Pyrite-marcasite, chalcopyrite, galena, and siderite were identified as the major gold-, copper-, lead-, and iron-bearing minerals, respectively, with potential for economic extraction. However, only gold and copper can be recovered under conventional milling, because of grain-size issues (galena is only 10% exposed) and process complexity (it is difficult to concentrate and smelt iron from siderite). Therefore, the preliminary process selection is copper flotation followed by gold cyanidation for the Kapanda and Ibindi ores, whereas gold cyanidation with additives such as glycine or ammonia is selected for the Mawemeru and Itumbi ores because of the low concentrations of Cu, Pb, Fe, and Zn minerals.
Keywords: complex gold ores, mineral liberation, ore characterization, ore grindability
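As a minimal worked example of how the reported Bond work index translates into comminution energy, Bond's equation W = 10·Wi·(1/√P80 − 1/√F80) gives the specific grinding energy; the feed and product sizes below are assumptions for illustration, not the study's test conditions.

```python
# Specific grinding energy from Bond's law; F80/P80 values are assumptions.
import math

def bond_energy(wi_kwh_per_t, f80_um, p80_um):
    """W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), particle sizes in micrometres."""
    return 10.0 * wi_kwh_per_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Ibindi ore: Wi = 10.213 kWh/t (measured); grinding from an assumed
# F80 = 2000 um down to P80 = 106 um.
print(f"{bond_energy(10.213, 2000.0, 106.0):.2f} kWh/t")  # ~7.6 kWh/t
```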
Procedia PDF Downloads 72
407 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values
Authors: Dimiter M. Dimitrov, Abdullah Sadaawi
Abstract:
The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by school and for the entire national sample. NAFS scoring and equating are performed on a bounded scale (the D-scale, ranging from 0 to 1) in the framework of the recently developed “D-scoring method of measurement.” The specificity of the NAFS measurement framework and the complexity of the data presented both challenges and opportunities for (a) the estimation of score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students “nested” within schools and test items “nested” within test forms. The GT design was executed via multilevel modeling syntax in R. Cut-scores (on the D-scale) for the classification of students into performance categories were derived via a recently developed standard-setting method referred to as the “Response Vector for Mastery” (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students’ scores on the NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random selection from a logit-normal distribution with parameters derived from the student’s D-score and its conditional standard error, SE(D); a sketch of this step is given below. All procedures related to D-scoring, equating, generating plausible values, and classifying students into performance levels were executed via a computer program in R developed for the purpose of NAFS data analysis.
Keywords: large-scale assessment, reliability, generalizability theory, plausible values
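A minimal sketch of the plausible-value draw described above, assuming the logit-normal distribution is parameterized by the logit of the student's D-score and a standard error expressed on the logit scale; the exact NAFS parameterization (derived from SE(D)) may differ.

```python
# Draw plausible values on the bounded D-scale (0..1) from a logit-normal
# distribution; the parameterization here is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(2024)

def plausible_values(d_score, se_logit, n_draws=5):
    """Plausible values for one student from logit-normal(logit(D), se_logit)."""
    mu = np.log(d_score / (1.0 - d_score))          # logit of the D-score
    draws = rng.normal(mu, se_logit, size=n_draws)  # normal on the logit scale
    return 1.0 / (1.0 + np.exp(-draws))             # back-transform to 0..1

print(plausible_values(d_score=0.62, se_logit=0.25))
```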
Procedia PDF Downloads 17
406 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species
Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel
Abstract:
Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and the creation of economic prospects for rural communities. In order to gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA specifically focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs can affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period (sketched below). To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis
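A minimal sketch of the minimum-selling-price calculation at the heart of the TEA: find the delivered price per tonne that makes discounted revenues equal discounted costs at the target rate of return (i.e., NPV = 0). All cash flows, yields, and the rotation schedule below are placeholder assumptions, not the study's data.

```python
# Minimum average delivered selling price (per metric ton) that sets NPV = 0
# at the target rate of return; all inputs are illustrative placeholders.
def minimum_selling_price(costs_by_year, yield_t_by_year, rate):
    pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))
    pv_yield = sum(y / (1 + rate) ** t for t, y in enumerate(yield_t_by_year))
    return pv_costs / pv_yield  # $/t at which discounted revenue covers cost

# Hypothetical 6-year coppice cycle: establishment in year 0, cut-and-chip
# harvests in years 3 and 6, 8% target rate of return.
costs = [1200, 150, 150, 400, 150, 150, 400]   # $/ha per year
yields = [0, 0, 0, 18, 0, 0, 18]               # green tonnes/ha per year
print(f"${minimum_selling_price(costs, yields, 0.08):.2f} per tonne")
```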
Procedia PDF Downloads 82
405 Relationships between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity
Authors: Anamika Sahu
Abstract:
The Himalayas, like many mountainous regions, are susceptible to multiple hazards, and in recent times the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and have the potential for a future large seismic event, as they fall within a seismic gap. Damage caused by earthquakes differs from locality to locality. It is well known that, during earthquakes, damage to structures is associated with subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization will be valuable for designing and constructing built space and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigation of the subsurface is required to describe its complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface and these methods are time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity of the top 30 m of soil (the standard travel-time average is sketched below). Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity of the top 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC), 1997 classification. Parameters obtained through geotechnical investigation have been integrated with findings obtained through the subsurface geophysical survey. Joint interpretation has been used to establish inter-relationships among mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. It is found that results obtained through the MASW method fit well with the laboratory tests. In both conditions, mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of rocks and the behavior of shear wave velocity.
Keywords: MASW, mechanical, petrophysical, site characterization
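Vs30 is the travel-time-weighted (harmonic) average over the layers in the top 30 m, Vs30 = 30 / Σ(hᵢ / Vsᵢ); a minimal sketch with an illustrative layer model (the velocities below are hypothetical, not the study's profile):

```python
# Vs30 = 30 / sum(h_i / Vs_i) over the uppermost 30 m; layers are illustrative.
def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) tuples summing to 30 m."""
    assert abs(sum(h for h, _ in layers) - 30.0) < 1e-9, "layers must cover 30 m"
    travel_time = sum(h / vs for h, vs in layers)
    return 30.0 / travel_time

# Example profile such as one from a MASW-style inversion (hypothetical values).
print(f"Vs30 = {vs30([(5, 180), (10, 320), (15, 550)]):.0f} m/s")
# ~348 m/s, near the NEHRP site class C/D boundary (360 m/s)
```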
Procedia PDF Downloads 83
404 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
In biomedical research and randomized clinical trials, the outcomes of most common interest are time-to-event, so-called survival, data. The importance of robust models in this context lies in comparing the effects of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association between response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects when modeling left-truncated and right-censored survival data. Despite its wide applicability and popularity for estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we propose modified estimating equations. After the estimation procedures are described intuitively, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the estimators of the proposed model is illustrated via simulation studies and the Stanford heart transplant data. To sum up, the covariate bias was adjusted by estimating the density function of the truncation variable, which was also incorporated into the model as a covariate in order to relax the assumption that failure time and truncation time are independent. Moreover, an expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect was derived as the ratio of the cumulative hazard functions of the active and passive experiments, after adjusting for the bias raised in the model due to the truncation variable (a minimal sketch of this final step is given below).
Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate
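A minimal sketch of the final step, estimating the effect as a ratio of cumulative hazards between the active and passive arms; here the nonparametric Nelson-Aalen estimator from the lifelines package stands in for the model-based cumulative hazards, and the data are simulated placeholders (no truncation or censoring adjustment is shown).

```python
# Effect as a ratio of cumulative hazards (active vs. passive arm),
# illustrated with Nelson-Aalen estimates on simulated placeholder data.
import numpy as np
from lifelines import NelsonAalenFitter

rng = np.random.default_rng(7)
t_active = rng.exponential(scale=12.0, size=200)   # longer survival in this arm
t_passive = rng.exponential(scale=8.0, size=200)
observed = np.ones(200, dtype=bool)                # no censoring, for simplicity

naf_a, naf_p = NelsonAalenFitter(), NelsonAalenFitter()
naf_a.fit(t_active, event_observed=observed, label="active")
naf_p.fit(t_passive, event_observed=observed, label="passive")

t0 = 10.0  # evaluate the effect at a fixed time point
ratio = (naf_a.cumulative_hazard_at_times(t0).iloc[0]
         / naf_p.cumulative_hazard_at_times(t0).iloc[0])
print(f"Cumulative hazard ratio at t={t0}: {ratio:.2f}")  # < 1 favours active
```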
Procedia PDF Downloads 123
403 Rheometer Enabled Study of Tissue/Biomaterial Frequency-Dependent Properties
Authors: Polina Prokopovich
Abstract:
Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant-load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, thus greatly reducing the impact of the information gathered and hindering the understanding of the mechanisms involved in tissue replacement, development and pathology. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing that can be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is easy to use and requires a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues (see the sketch of the underlying relations below). The moduli increased with increasing frequency and applied load; mature cartilage generally had the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissue. The rheometer-based methodology developed was validated by the close agreement of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained with the more complex testing techniques available in the literature. Rheometry is comparatively simple, does not require highly capital-intensive machinery, and staff training is more accessible; the use of a rheometer would thus represent a cost-effective approach for the determination of the frequency-dependent properties of cartilage, yielding more comprehensive and impactful results for both healthcare professionals and R&D.
Keywords: tissue, rheometer, biomaterial, cartilage
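For reference, the moduli reported from a frequency sweep follow directly from the measured stress amplitude, strain amplitude, and phase lag: |G*| = σ₀/γ₀, G′ = |G*|·cos δ, G″ = |G*|·sin δ. A minimal sketch with illustrative values (not the study's measurements):

```python
# Dynamic shear moduli from a single frequency-sweep point:
# |G*| = stress_amp / strain_amp, G' = |G*| cos(delta), G'' = |G*| sin(delta).
import math

def shear_moduli(stress_amp_pa, strain_amp, delta_rad):
    g_star = stress_amp_pa / strain_amp       # complex modulus magnitude |G*|
    g_prime = g_star * math.cos(delta_rad)    # storage (elastic) modulus G'
    g_double = g_star * math.sin(delta_rad)   # loss (viscous) modulus G''
    return g_star, g_prime, g_double

# Illustrative cartilage-like point: 10 kPa stress amplitude, 2% strain,
# 15-degree phase lag.
g_star, g_p, g_pp = shear_moduli(10e3, 0.02, math.radians(15))
print(f"|G*| = {g_star/1e3:.0f} kPa, G' = {g_p/1e3:.0f} kPa, G'' = {g_pp/1e3:.0f} kPa")
```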
Procedia PDF Downloads 79
402 Control of Lymphatic Remodelling by miR-132
Authors: Valeria Arcucci, Musarat Ishaq, Steven A. Stacker, Greg J. Goodall, Marc G. Achen
Abstract:
Metastasis is the lethal aspect of cancer for most patients. Remodelling of the lymphatic vessels associated with a tumour is a key initial step in metastasis because it facilitates the entry of cancer cells into the lymphatic vasculature and their spread to lymph nodes and distant organs. Although it is clear that vascular endothelial growth factors (VEGFs), such as VEGF-C and VEGF-D, are key drivers of lymphatic remodelling, the means by which many signaling pathways in endothelial cells are coordinately regulated to drive the growth and remodelling of lymphatics in cancer is not understood. We seek to understand the broader molecular mechanisms that control cancer metastasis and are focusing on microRNAs, which coordinately regulate signaling pathways involved in complex biological responses in health and disease. Here, using small RNA sequencing, we found that a specific microRNA, miR-132, is upregulated in lymphatic endothelial cells (LECs) in response to the lymphangiogenic growth factors. Interestingly, ectopic expression of miR-132 in LECs in vitro stimulated proliferation and tube formation of these cells. Moreover, miR-132 is expressed in the lymphatic vessels of a subset of human breast tumours that were previously found to express high levels of VEGF-D by immunohistochemical analysis on tumour tissue microarrays. In order to dissect the complexity of regulation by miR-132 in lymphatic biology, we performed Argonaute HITS-CLIP, which led us to identify the miR-132-mRNA interactome in LECs. We found that this microRNA in LECs is involved in the control of many different pathways, mainly involved in cell proliferation and the regulation of the extracellular matrix and cell-cell junctions. We are now exploring the functional significance of miR-132 targets in the biology of LECs using biochemical techniques, functional in vitro cell assays and in vivo lymphangiogenesis assays. This project will ultimately define the molecular regulation of lymphatic remodelling by miR-132 and thereby identify potential therapeutic targets for drugs designed to restrict the growth and remodelling of tumour lymphatics that results in metastatic spread.
Keywords: argonaute HITS-CLIP, cancer, lymphatic remodelling, miR-132, VEGF
Procedia PDF Downloads 126401 The Role of Artificial Intelligence in Patent Claim Interpretation: Legal Challenges and Opportunities
Authors: Mandeep Saini
Abstract:
The rapid advancement of Artificial Intelligence (AI) is transforming various fields, including intellectual property law. This paper explores the emerging role of AI in interpreting patent claims, a critical and highly specialized area within intellectual property rights. Patent claims define the scope of legal protection granted to an invention, and their precise interpretation is crucial in determining the boundaries of the patent holder's rights. Traditionally, this interpretation has relied heavily on the expertise of patent examiners, legal professionals, and judges. However, the increasing complexity of modern inventions, especially in fields like biotechnology, software, and electronics, poses significant challenges to human interpretation. Introducing AI into patent claim interpretation raises several legal and ethical concerns. This paper addresses critical issues such as the reliability of AI-driven interpretations, the potential for algorithmic bias, and the lack of transparency in AI decision-making processes. It considers the legal implications of relying on AI, particularly regarding accountability for errors and the potential challenges to AI interpretations in court. The paper includes a comparative study of AI-driven patent claim interpretations versus human interpretations across different jurisdictions to provide a comprehensive analysis. This comparison highlights the variations in legal standards and practices, offering insights into how AI could impact the harmonization of international patent laws. The paper proposes policy recommendations for the responsible use of AI in patent law. It suggests legal frameworks that ensure AI tools complement, rather than replace, human expertise in patent claim interpretation. These recommendations aim to balance the benefits of AI with the need for maintaining trust, transparency, and fairness in the legal process. By addressing these critical issues, this research contributes to the ongoing discourse on integrating AI into the legal field, specifically within intellectual property rights. It provides a forward-looking perspective on how AI could reshape patent law, offering both opportunities for innovation and challenges that must be carefully managed to protect the integrity of the legal system.Keywords: artificial intelligence (AI), patent claim interpretation, intellectual property rights, algorithmic bias, natural language processing, patent law harmonization, legal ethics
Procedia PDF Downloads 21400 Tectogenesis Around Kalaat Es Senan, Northwest of Tunisia: Structural, Geophysical and Gravimetric Study
Authors: Amira Rjiba, Mohamed Ghanmi, Tahar Aifa, Achref Boulares
Abstract:
This study combines the interpretation of geological outcrop data (structures and lithostratigraphic columns) with subsurface data (seismic and gravimetric) to (i) identify the lithology of the sedimentary formations from the Aptian to the most recent deposits, (ii) differentiate these sedimentary formations from the salt-bearing Triassic, and (iii) characterise the major structures produced by the tectonic events that affected the region during its geological evolution. Placed in the context of Tunisia, on the southern margin of the Tethys, the tectonic traces and the structural analysis carried out show that the area underwent active rifting during the Triassic, which triggered extensional tectonic events in the Cretaceous and the Paleogene. Lithostratigraphic correlations between outcrops, seismic data and six oil wells drilled in the region allowed us to better understand the structural complexity and the role of the different faults that contributed to the current configuration, marked by the present rifts. Indeed, three fault directions, NW-SE, NNW-SSE to N-S, and NE-SW to E-W, played a major role in the genesis of the folds and of the NW-SE-trending collapsed grabens. These results were complemented by seismic reflection data to clarify the geometry of the southern and western areas of the Kalaa Khasba graben. The eight seismic lines selected for this study allowed the main structures to be characterised through isochron, contour and isovelocity maps of the Serdj horizon, which constitutes the main reservoir in the region. Line L2, tied to well 6, highlighted the NW-SE compression that resulted in persistent unconformities widely identifiable in its lithostratigraphic column. The gravity survey confirmed the deep subsurface extension of most of the faults, whose activity appears to continue to the present. Gravimetry also reinforced the seismic interpretation, confirming at well L2 that the SW and NE flanks of the trough are two opposing faults bounding a NNW-SSE-trending graben whose fill is of Mio-Pliocene and Quaternary age.Keywords: graben, graben collapse, gravity, Kalat Es Senan, seismic, tectogenesis
Procedia PDF Downloads 366399 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method
Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota
Abstract:
The Schlieren method, conventionally used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solves the problems mentioned above. The BOS method exploits the refraction of light in the same way as the Schlieren method, but is characterized by the use of a digital camera to capture images of a background placed behind the observation area. The images are later analyzed by a computer to quantitatively detect the shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, and is therefore much simpler. However, the BOS method suffers from defocusing of the observed object, because the camera is focused on the background rather than on the object. The defocusing becomes greater as the distance between the background and the object increases; on the other hand, a higher sensitivity is obtained. It is therefore necessary to set the distance between the background and the object appropriately for the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. A visualization experiment of an underexpanded jet was performed using the BOS measurement system we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were then reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. The experimental results show that the reconstructed density image becomes more blurred, while the noise decreases, as the distance between the background and the jet axis increases. Consequently, it is clarified that, at least in this experimental setup, the sensitivity constant should be greater than 20 and the circle of confusion diameter should be less than 2.7 mm.Keywords: BOS method, underexpanded jet, abel transformation, density field visualization
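As a rough illustration of the reconstruction chain described above (background shift → deflection angle → refractive index via an inverse Abel transform → density via the Gladstone-Dale equation), here is a minimal Python sketch for an axisymmetric jet. The deflection profile, grid and ambient values are assumed placeholders, not the authors' data or code, and the naive rectangular-rule Abel inversion is for illustration only.

```python
import numpy as np

K_GD = 2.26e-4        # Gladstone-Dale constant of air, m^3/kg (approximate)
N_AMBIENT = 1.000277  # refractive index of ambient air (assumed value)

def inverse_abel(eps, y):
    """Naive inverse Abel transform: turn the deflection-angle profile
    eps(y) (rad) of an axisymmetric field, sampled on the uniform grid
    y (m), into the refractive-index difference n(r) - n_ambient."""
    dn = np.zeros_like(eps)
    dy = y[1] - y[0]
    for i, r in enumerate(y[:-1]):
        yj = y[i + 1:]  # integrate outward, skipping the singular point y = r
        dn[i] = -np.sum(eps[i + 1:] / np.sqrt(yj**2 - r**2)) * dy / np.pi
    return dn

# hypothetical deflection-angle profile standing in for the measured shift
y = np.linspace(0.0, 0.02, 400)                   # distance from jet axis, m
eps = -2e-4 * (y / 5e-3) * np.exp(-(y / 5e-3)**2)  # deflection angle, rad

delta_n = inverse_abel(eps, y)              # n(r) - n_ambient
rho = (N_AMBIENT - 1.0 + delta_n) / K_GD    # Gladstone-Dale: n - 1 = K * rho
print(f"axis density ~ {rho[0]:.2f} kg/m^3 (ambient ~ {(N_AMBIENT - 1.0) / K_GD:.2f})")
```

In practice, a regularized inversion (e.g. basis-set or onion-peeling schemes) is preferred, since the simple sum above amplifies measurement noise near the axis.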
Procedia PDF Downloads 76398 Dynamic Modeling of the Impact of Chlorine on Aquatic Species in Urban Lake Ecosystem
Authors: Zhiqiang Yan, Chen Fan, Yafei Wang, Beicheng Xia
Abstract:
Urban lakes play an invaluable role in urban water systems, providing flood control, water supply, and public recreation. However, over 38% of urban lakes in China have suffered from severe eutrophication. Chlorine, which markedly inhibits the growth of phytoplankton in eutrophic water, has been widely used in agriculture, aquaculture and industry in the recent past. However, little information has been reported regarding the effects of chlorine on lake ecosystems, especially on the main aquatic species. To investigate the ecological response of the main aquatic species and of system stability to chlorine interference in shallow urban lakes, a compact system dynamics model was developed based on the competition and predation among the main aquatic species and on total phosphorus circulation. The main groups of submerged macrophytes, phytoplankton, zooplankton, benthos and Spirogyra, together with total phosphorus in water and sediment, were used as model variables, while the interference of chlorine with phytoplankton was represented by an exponential attenuation equation. Furthermore, eco-exergy, which expresses the degree of development of an ecosystem, was used to quantify the complexity of the shallow urban lake. The model was validated using data collected in Lotus Lake, Guangzhou, from 1 October 2015 to 31 January 2016. The correlation coefficient (R), the root mean square error-observations standard deviation ratio (RSR) and the index of agreement (IOA) were calculated to evaluate the accuracy and reliability of the model. The simulated values showed good qualitative agreement with the measured values of all components. The model results showed that chlorine had a notable inhibitory effect on Microcystis aeruginosa, Brachionus plicatilis, Diaphanosoma brachyurum Liévin and Mesocyclops leuckarti (Claus). The outbreak of Spirogyra spp. inhibited the growth of Vallisneria natans (Lour.) Hara, leading to a gradual decrease of eco-exergy and the breakdown of the ecosystem's internal equilibria. This study gives important insight into using chlorine to achieve eutrophication control and into understanding the underlying mechanisms.Keywords: system dynamic model, urban lake, chlorine, eco-exergy
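For reference, the three goodness-of-fit statistics named above are standard and straightforward to compute. The sketch below, using made-up observation and simulation series rather than the Lotus Lake data, shows one common set of definitions: Pearson correlation for R, RSR as RMSE divided by the standard deviation of the observations, and Willmott's index of agreement for IOA.

```python
import numpy as np

def goodness_of_fit(obs, sim):
    """Pearson correlation (R), RMSE-observations standard deviation
    ratio (RSR), and Willmott's index of agreement (IOA)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    rsr = np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()
    o_bar = obs.mean()
    ioa = 1.0 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - o_bar) + np.abs(obs - o_bar)) ** 2)
    return r, rsr, ioa

# hypothetical phytoplankton biomass (g/m^3), observed vs simulated
obs = [2.1, 2.4, 1.9, 1.2, 0.8, 0.6, 0.5]
sim = [2.0, 2.3, 2.0, 1.3, 0.9, 0.7, 0.4]
r, rsr, ioa = goodness_of_fit(obs, sim)
print(f"R = {r:.3f}, RSR = {rsr:.3f}, IOA = {ioa:.3f}")
```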
Procedia PDF Downloads 234397 The Role of the Basel Accords in Mitigating Systemic Risk
Authors: Wassamon Kun-Amornpong
Abstract:
When a financial crisis occurs, legal and regulatory reform typically follows in order to manage the turmoil and prevent a future crisis. One of the most important regulatory efforts to help cope with systemic risk and financial crises is the third version of the Basel Accord. Basel III has introduced measures and tools (e.g., the systemic risk buffer, the countercyclical buffer, the capital conservation buffer and liquidity requirements) in order to mitigate systemic risk. Nevertheless, the effectiveness of these measures in adequately addressing the problem of contagious runs that can quickly spread throughout the financial system is questionable. This paper seeks to contribute to the knowledge regarding the role of the Basel Accords in mitigating systemic risk. The research question is: to what extent can the Basel Accords help control systemic risk in financial markets? The paper tackles this question by analysing the concept of systemic risk. It then examines the weaknesses of the Basel Accords before and after the global financial crisis of 2008. Finally, it suggests some possible solutions to improve the Basel Accord. The rationale of the study is that systemic risk and financial crises have largely been studied from an economic or financial perspective; there is comparatively little research from the legal and regulatory perspective. The finding of the paper is that there are problems in all three pillars of the Basel Accords. With regard to Pillar I, the risk model is excessively complex while the benefits of its complexity are doubtful. Concerning Pillar II, the effectiveness of risk-based supervision in preventing systemic risk still depends largely upon its design and implementation; factors such as the organizational culture of the regulator and the political context within which risk-based supervision operates might be barriers to the success of Pillar II. Meanwhile, Pillar III cannot provide adequate market discipline, as market participants do not always act in a rational way. In addition, the too-big-to-fail perception reduces the incentives of market participants to monitor risks. There have been some developments in resolution measures (e.g. TLAC and MREL) which might potentially help strengthen the incentive of market participants to monitor risks; however, those measures have some weaknesses. The paper argues that if the weaknesses in the three pillars are resolved, the Basel Accord can be expected to contribute to the mitigation of systemic risk in a more significant way in the future.Keywords: Basel accords, financial regulation, risk-based supervision, systemic risk
Procedia PDF Downloads 124396 Cfd Simulation for Urban Environment for Evaluation of a Wind Energy Potential of a Building or a New Urban Planning
Authors: David Serero, Loic Couton, Jean-Denis Parisse, Robert Leroy
Abstract:
This paper presents a method for analysing airflow at the periphery of several typologies of architectural volumes. To understand the influence of the complexity of the urban environment on airflows in the city, we compared three sites at different architectural scales. The research establishes a method to identify the optimal location for the installation of wind turbines on the edges of a building and to improve the performance of the energy extracted through the precise placement of an accelerating wing called an “aerofoil”. The objective is to define principles for the installation of wind turbines and for the natural ventilation design of buildings. Instead of a theoretical wind analysis, we combined numerical airflow simulations using STAR-CCM+ software with wind data recorded over long periods of time (greater than 1 year). While computational fluid dynamics (CFD) simulations of airflow around buildings are now common, we calibrated a virtual wind tunnel with wind data from in situ anemometers (to establish a localized cartography of urban winds). We can then develop a complete volumetric model of the behavior of the wind over a roof area, or over an entire urban block. With this method, we can characterize the different types of wind in urban areas and identify the minimum and maximum wind spectrum, select the type of harvesting device and its fixing to the roof of a building, determine the height of the device in relation to the roof levels, and assess the potential nuisances in the surroundings. This study is carried out by retrieving a geolocated data stream and connecting this information with the technical specifications of wind turbines, their energy performance and their cut-in speed. Thanks to this method, we can define the characteristics of wind turbines so as to maximize their performance on urban sites and in a turbulent airflow regime. We also study the installation of a wind accelerator associated with buildings: the integrated “aerofoils” make it possible to control the speed of the air, to orient it onto the wind turbine, to accelerate it and, thanks to their profile, to conceal the device on the roof of the building.Keywords: wind energy harvesting, wind turbine selection, urban wind potential analysis, CFD simulation for architectural design
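To illustrate how an anemometer series can be connected with turbine specifications as described above, here is a minimal sketch that estimates annual energy yield from hourly wind speeds using P = ½ρACpv³. The rotor area, power coefficient, cut-in/cut-out speeds and the synthetic Weibull-distributed wind record are all assumptions for illustration, not values from the study, and no rated-power cap is applied.

```python
import numpy as np

RHO_AIR = 1.225  # kg/m^3, sea-level air density

def annual_yield_kwh(wind_speeds_ms, rotor_area_m2, cp=0.35,
                     cut_in=3.0, cut_out=25.0, dt_hours=1.0):
    """Estimate the annual energy yield (kWh) of a rooftop turbine from an
    hourly anemometer series, using P = 0.5 * rho * A * Cp * v^3 between
    the cut-in and cut-out speeds (zero output outside that band)."""
    v = np.asarray(wind_speeds_ms, float)
    v = np.where((v < cut_in) | (v > cut_out), 0.0, v)
    power_w = 0.5 * RHO_AIR * rotor_area_m2 * cp * v ** 3
    return power_w.sum() * dt_hours / 1000.0

# one hypothetical year of hourly wind speeds (m/s), Weibull shape k = 2
rng = np.random.default_rng(0)
speeds = 5.0 * rng.weibull(2.0, size=8760)
print(f"estimated yield: {annual_yield_kwh(speeds, rotor_area_m2=3.0):.0f} kWh")
```

The cubic dependence on wind speed is why the precise siting and the aerofoil acceleration discussed in the abstract matter: a modest gain in local wind speed translates into a disproportionately large gain in extracted energy.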
Procedia PDF Downloads 148