Search results for: Honeycomb network
206 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field for supporting students and professors. However, not all academic programs apply LA with the same goals or use the same tools. LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not only to inform academic authorities about the state of a program but also to detect at-risk students, professors in need of support, or general problems. At the most advanced level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted a variety of techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. Each academic program is expected to decide which field it wants to pursue on the basis of its academic interests, but also of its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED, by its Spanish acronym) of the National Autonomous University of Mexico (UNAM) has operated for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM runs one of the largest networks of higher education programs, with twenty-six academic programs across different faculties. This means that every faculty works with heterogeneous populations and academic problems, and each program has developed its own Learning Analytics techniques to address academic issues. In this context, an investigation was carried out to establish how LA is applied across the academic programs of the different faculties. 
The premise of the study was that not all faculties have used advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all programs know about LA, but this does not mean they do not work with LA in a veiled or less explicit sense. It is important to know the degree of knowledge about LA for two reasons: 1) it allows the administration's work to improve teaching quality to be appreciated, and 2) it shows whether other LA techniques could be adopted. For this purpose, three instruments were designed to determine experience and knowledge in LA. These were applied to ten faculty coordinators and their staff; thirty members were consulted (academic secretaries, systems managers or data analysts, and program coordinators). The final report showed that almost all the programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, and the programs are not ready to move up to the next level, that is, to apply Artificial Intelligence or Recommender Systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of long-term goals.
Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 128
205 Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools such as JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a shared file storage system), and security groups offers several key benefits. A performance testing framework built this way optimizes resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. 
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle as many as 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
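The orchestration described above maps onto JMeter's standard distributed-testing command-line interface. A minimal command sketch, assuming JMeter is installed on every instance; the IP addresses and file names are placeholders for the slave EC2 instances and the test plan:

```shell
# On each slave EC2 instance: start the JMeter server (RMI) process
jmeter-server -Djava.rmi.server.hostname=10.0.1.11

# On the master: run the plan non-interactively (-n), distribute it to the
# remote slaves (-R), log results (-l), and generate an HTML report (-e -o)
jmeter -n -t testplan.jmx -R 10.0.1.11,10.0.1.12 \
       -l results.jtl -e -o report/
```

Note that each slave executes the full thread group defined in the test plan, so the total generated load scales with the number of slave instances; the relevant ports must be opened in the instances' security groups.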
Procedia PDF Downloads 27
204 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). 
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
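The outlier-removal step described above — defining the parameter space of the training window and discarding prediction points that fall outside it — can be sketched generically. This is a minimal illustration on synthetic data using a simple per-feature z-score rule; it is not FreqAI's actual implementation, and the threshold and data are placeholders:

```python
import numpy as np

def fit_parameter_space(X_train):
    # Record the per-feature mean and standard deviation of the training window
    return X_train.mean(axis=0), X_train.std(axis=0)

def outlier_mask(X_pred, mean, std, n_sigmas=3.0):
    # Flag prediction points with any feature beyond n_sigmas of the
    # training distribution, i.e. outside the learned parameter space
    z = np.abs((X_pred - mean) / std)
    return (z > n_sigmas).any(axis=1)

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))      # recent training window
X_pred = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),
                    np.full((1, 4), 10.0)])        # last point is an outlier
mean, std = fit_parameter_space(X_train)
mask = outlier_mask(X_pred, mean, std)             # True where a point should be dropped
```

Points flagged by `mask` would simply be excluded from (or down-weighted in) the short-term prediction, since the model was never trained on that region of feature space.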
Procedia PDF Downloads 89
203 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil
Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins
Abstract:
Brazil is a top consumer of pesticides worldwide, and São Paulo State is one of the highest consumers among the Brazilian federative states. However, representative data on the occurrence of pesticides in surface waters of São Paulo State is scarce. This paper presents the results of pesticide monitoring conducted within the Water Quality Monitoring Network of CETESB (the environmental agency of São Paulo State) over the 2018-2022 period. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land use (5 to 85% cultivated area). Samples were collected throughout the year, including under high-flow and low-flow conditions, with a sampling frequency of 4 to 6 times per year. The selection of pesticide molecules for monitoring followed a prioritization process based on EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticide extractions from aqueous samples were performed according to USEPA methods 3510C and 3546, following quality assurance and quality control procedures. Pesticide concentrations in the water extracts (ng L-1) were determined by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen-phosphorus (GC-NPD) and electron capture (GC-ECD) detectors. The results showed higher detection frequencies (20-65%) in surface water samples for carbendazim (fungicide), diuron and tebuthiuron (herbicides), and fipronil and imidacloprid (insecticides). The detection frequency for these pesticides was generally higher at monitoring points located in sugarcane-cultivated areas. The following pesticides were most frequently quantified above the aquatic life benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or Brazilian federal regulatory standards (CONAMA Resolution no. 357/2005): atrazine, imidacloprid, carbendazim, 2,4-D, fipronil, and chlorpyrifos. 
Higher median concentrations of diuron and tebuthiuron in the rainy months (October to March) indicate pesticide transport through surface runoff. However, measurable concentrations of fipronil and imidacloprid in the dry season (April to September) also indicate pathways related to subsurface or base-flow discharge after pesticide infiltration and leaching through the soil, or dry deposition following aerial spraying. With the exception of diuron, no temporal trends in the median concentrations of the most frequently quantified pesticides were observed. These results are important to assist policymakers in developing strategies to reduce pesticide migration from agricultural areas to surface waters. Further studies will be carried out at selected points to investigate the potential risks to aquatic biota resulting from pesticide exposure.
Keywords: pesticides monitoring, São Paulo State, water quality, surface waters
Procedia PDF Downloads 59
202 Decision-Making, Expectations and Life Project in Dependent Adults Due to Disability
Authors: Julia Córdoba
Abstract:
People are not completely autonomous, as we live in society; therefore, people can be described as relationally dependent. The lack, decrease, or loss of physical, psychological, and/or social interdependence due to a situation of disability is known as dependence, and it is related to the need for help from another person in order to carry out activities of daily living. This population group lives with major social limitations that significantly reduce their participation and autonomy, and experiences high levels of stigma and invisibility both in private environments (family and close networks) and in the public sphere (environment, community). The importance of this study lies in the fact that the lack of support and adjustments leads to what some authors call the circle of exclusion, which describes how failing to access services, because of the difficulties caused by the disability situation, has impacts at the biological, social, and psychological levels. This situation produces higher levels of exclusion and vulnerability. This study focuses on the process of autonomy and dependence in adults with disability, based on the model of disability proposed by the International Classification of Functioning, Disability and Health (ICF). The objectives are: i) to describe the relationship between autonomy and dependence based on socio-health variables, and ii) to determine the relationship between the situation of autonomy and dependence and the expectations and interests of the participants. We propose a study using a survey technique through a previously validated online questionnaire. The data obtained will be analyzed using quantitative and qualitative methods to detail the profiles obtained. No fewer than 200 questionnaires will be administered to people between 18 and 64 years of age who self-identify as having some degree of dependence due to disability. For the analysis of the results, the two main variables of autonomy and dependence will be considered. 
Socio-demographic variables such as age, gender identity, area of residence, and family composition will be used. In relation to the biological dimension, participants will be asked about their diagnosis, if any, and their type of disability. To describe the profiles of autonomy and dependence, the following variables will be used: self-perception, decision-making, interests, expectations and life project, care of one's health condition, support and social network, and labor and educational inclusion. The relationship between the target population and the variables collected provides several guidelines that could form the basis for further research on self-perception, autonomy, and dependence. The study will identify the areas and situations in which people state that they have greater possibilities to decide and have a say. It will also identify the social (networks and support, educational background), demographic (age, gender identity, and residence), and health-related variables (diagnosis and type of disability, quality of care) most strongly associated with situations of dependence or autonomy, and will examine whether the level of autonomy and/or dependence affects the type of expectations and interests of the people surveyed.
Keywords: life project, disability, inclusion, autonomy
Procedia PDF Downloads 67
201 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel 'natural' functional ingredients is increasingly challenging, and biocatalysis offers powerful approaches to achieve this goal. Our recent research has focused on developing innovative biocatalytic approaches for isolating protein isolates from potato by-products and generating peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency, an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and the capacity to stimulate release of the appetite-regulating hormone CCK. Potato proteins have long been considered economically unfeasible because of the low protein content (27% of dry matter) of the tuber (Solanum tuberosum). However, potatoes rank as the second-largest protein-supplying crop grown per hectare, after wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-molecular-weight proteins. Non-destructive techniques for extracting proteins from potato pulp and generating peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, involving multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous because of: (1) the milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost, and (4) the ability to market natural ingredients. 
Another major benefit of this enzymatic approach is the elimination of a costly purification step; these multi-enzymatic systems can isolate proteins while fractionating them, owing to their specificity and selectivity, with minimal proteolytic activity. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were incorporated into a reduced-gluten cookie formulation, as consumers increasingly demand convenient ready-to-eat snack foods with high nutritional quality and little or no gluten. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making them more comparable to those made with wheat flour alone. The presentation will focus on our recent proof-of-principle results illustrating the feasibility and efficiency of new biocatalytic processes for producing innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
Procedia PDF Downloads 379
200 The Lived Experience of Pregnant Saudi Women Carrying a Fetus with Structural Abnormalities
Authors: Nasreen Abdulmannan
Abstract:
Fetal abnormalities are categorized as structural, non-structural, or a combination of both. Fetal structural abnormalities (FSA) include, but are not limited to, Down syndrome, congenital diaphragmatic hernia, and cleft lip and palate. These abnormalities can be detected in the early weeks of pregnancy, at around 9-20 weeks of gestation. Etiological factors for FSA are unknown; however, transmitted genetic risk can be one of them. Consanguineous marriage, often referred to as inbreeding, represents a significant risk factor for FSA because of the increased likelihood of deleterious genetic traits shared by both biological parents. In a country such as the Kingdom of Saudi Arabia (KSA), where the rate of consanguineous marriage is high, there is a significant risk of children being born with congenital abnormalities. Historically, the practice of consanguinity occurred commonly among European royalty. For example, Great Britain's Queen Victoria married her German first cousin, Prince Albert of Coburg, and, although a distant blood relationship, the United Kingdom's Queen Elizabeth II married her cousin, Prince Philip of Greece and Denmark, both of them direct descendants of Queen Victoria. In Middle Eastern countries, including the KSA, a high incidence of consanguineous unions still exists. Previous studies indicated that a significant gap exists in understanding the lived experiences of Saudi women dealing with an FSA-complicated pregnancy. Eleven participants were interviewed using a semi-structured interview format for this qualitative phenomenological study investigating the lived experiences of pregnant Saudi women carrying a child with FSA. This study explored the gaps in the current literature regarding the lived experiences of pregnant Saudi women whose pregnancies were complicated by FSA. In addition, the researcher acquired knowledge about the available support and resources as well as the Saudi cultural perspective on FSA. 
This research explored the lived experiences of pregnant Saudi women utilizing Giorgi's (2009) approach to data collection and data management. Findings for this study cover five major themes: (1) initial maternal reaction to the FSA diagnosis per ultrasound screening; (2) strengthening of the maternal relationship with God; (3) maternal concern for their child's future; (4) feeling supported by their loved ones; and (5) lack of healthcare provider support and guidance. Future research in the KSA is needed to explore the support networks available to these mothers. This study recommended further clinical nursing research, nursing education, clinical practice, and healthcare policies/procedures to provide opportunities for improvement in nursing care and to increase awareness in KSA society.
Keywords: fetal structural abnormalities, psychological distress, health provider, health care
Procedia PDF Downloads 155
199 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. Considering the overall world economy, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is laborious and time-consuming, or is provided by financial institutes and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach to obtaining valuable insights into the business world. WM enables automatic, large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied for anticipating growth in sales volume on e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. 
In an initial step, we examine how structured and semi-structured Web data from governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared, with the goal of optimizing prediction performance. The results are compared to those of previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
Keywords: data mining, SME growth, success factors, web mining
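The classifier comparison described above can be sketched in a few lines of scikit-learn. This is a minimal illustration on synthetic data standing in for the Web-derived SME features and growth labels; the model hyperparameters are illustrative defaults, not the study's actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for growth-labeled SME feature vectors
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42)

# Fit each candidate model and compare held-out accuracy
models = {
    "SVM": SVC(kernel="rbf", random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                         random_state=42),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```

In practice one would also cross-validate and tune hyperparameters per model before comparing, and retrain the winner with and without the Web-mined features to measure their marginal contribution.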
Procedia PDF Downloads 267
198 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since the 2014 report to better understand the implications for improving child nutrition, health, and survival. Following PRISMA guidelines, the review was conducted using search terms describing the relationship between wasting and stunting. Studies of children under five from low- and middle-income countries that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. Forty-five studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications, and 2 published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone, and should therefore be considered a high-risk group in the targeting of treatment. 
Wasting, stunting, and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12-30 months of age, with younger children the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to influence a child's subsequent experience of wasting and stunting. Evidence suggests that the use of mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence suggests there is a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child, demonstrating the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing, and research must become more unified.
Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
Procedia PDF Downloads 127
197 Automatic Identification and Classification of Contaminated Biodegradable Plastics Using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, driven primarily by the prevalent use in modern packaging of conventional plastics derived from petrochemical refining and manufacturing. While these plastics serve vital functions, their persistence in the environment after disposal poses significant threats to ecosystems. Addressing this issue necessitates several approaches, one of which is the development of biodegradable plastics designed to degrade under controlled conditions, such as in industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited to uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized only when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Effective sorting technologies are therefore essential to increase composting rates for these materials and diminish the risk of contaminating recycling streams. This study leverages hyperspectral imaging (HSI) technology coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE), and high-density polyethylene (HDPE), as well as biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. 
Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and a Decision Tree algorithm, were developed and evaluated for their classification performance. The Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets; on the testing dataset, accuracy exceeded 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly raise recycling and composting rates, helping to establish the envisioned circular economy for plastics and thereby offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
Procedia PDF Downloads 79
196 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, being vast in area and population and having great scope for international business, can expect its roadway and railway network connections within the country to grow substantially. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong possibility of repairing such bridges or building new ones. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy, uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability because their members are too rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting methods of analysis and tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now extensively used for framed steel and concrete buildings to study their lateral behaviour, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and, under further loading, progresses gradually to the next member and so on. 
This kind of progressive collapse of the truss bridge structure depends on many factors, of which the live-load distribution and the span-to-length ratio are the most significant. The ultimate collapse, in any case, occurs through the buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complicated dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of the bridge and advancements in computational facilities, the current level of analysis and design of bridges has moved to the state of ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mainly with life safety and collapse prevention, whereas bridges mostly deal with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. The paper compiles the wide spectrum of modeling and analysis of steel-concrete composite truss bridges in general.
Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
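The progressive-failure mechanism described above can be sketched with a toy load-redistribution sweep. It assumes, purely for illustration, that the applied load is shared equally among surviving compression members and is redistributed when a member buckles; real trusses share load according to stiffness, so this is a caricature, not a structural analysis.

```python
def collapse_load(capacities, step=1.0):
    """Increase the total load P until all compression members have buckled.
    Each surviving member carries P / (number of survivors); when demand
    exceeds the weakest survivor's buckling capacity, that member fails and
    its share is redistributed, possibly cascading further failures."""
    alive = sorted(capacities)
    P = 0.0
    while alive:
        P += step
        demand = P / len(alive)
        # cascade: each failure raises demand on the remaining members
        while alive and demand > alive[0]:
            alive.pop(0)
            if alive:
                demand = P / len(alive)
    return P

# three members with unequal buckling capacities: the weakest fails first,
# and its redistributed share drags the others down soon after
ultimate = collapse_load([5.0, 10.0, 20.0])
print(ultimate)
```

Running the toy sweep shows the hallmark of progressive collapse: the system's ultimate load (21.0 here) is far below the sum of the individual capacities (35.0), because each failure overloads the survivors.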
Procedia PDF Downloads 185
195 European Commission Radioactivity Environmental Monitoring Database REMdb: A Law (Art. 36 Euratom Treaty) Transformed in Environmental Science Opportunities
Authors: M. Marín-Ferrer, M. A. Hernández, T. Tollefsen, S. Vanzo, E. Nweke, P. V. Tognoli, M. De Cort
Abstract:
Under the terms of Article 36 of the Euratom Treaty, European Union Member States (MSs) shall periodically communicate to the European Commission (EC) information on environmental radioactivity levels. Compilations of the information received have been published by the EC as a series of reports beginning in the early 1960s. The environmental radioactivity results received from the MSs have been introduced into the Radioactivity Environmental Monitoring database (REMdb) of the Institute for Transuranium Elements of the EC Joint Research Centre (JRC), sited in Ispra (Italy), as part of its Directorate-General for Energy (DG ENER) support programme. The REMdb offers the scientific community dealing with environmental radioactivity topics countless research opportunities to exploit the nearly 200 million records received from MSs, containing information on radioactivity levels in milk, water, air and mixed diet. The REM action was created shortly after the Chernobyl crisis to support the EC in its responsibility of providing qualified information to the European Parliament and the MSs on the levels of radioactive contamination of the various compartments of the environment (air, water, soil). Hence, the main line of REM’s activities concerns the improvement of procedures for the collection of environmental radioactivity concentrations for routine and emergency conditions, as well as making this information available to the general public. In this way, REM ensures the availability of tools for inter-communication and for access by users from the Member States and the other European countries to this information. Specific attention is given to further integrating the new MSs into the existing information exchange systems and to assisting Candidate Countries in fulfilling these obligations in view of their membership of the EU. 
Article 36 of the Euratom Treaty requires the competent authorities of each MS to regularly provide the environmental radioactivity monitoring data resulting from their Article 35 obligations to the EC, in order to keep the EC informed of the levels of radioactivity in the environment (air, water, milk and mixed diet) which could affect the population. The REMdb has two main objectives: to keep a historical record of radiological accidents for further scientific study, and to collect the environmental radioactivity data gathered through the national environmental monitoring programs of the MSs in order to prepare the comprehensive annual monitoring reports (MR). The JRC continues its activity of collecting, assembling, analyzing and providing this information to the public and the MSs, even during emergency situations. In addition, there is growing concern among the general public about radioactivity levels in the terrestrial and marine environment, as well as about the potential risk of future nuclear accidents. In this context, clear and transparent communication with the public is needed. EURDEP (European Radiological Data Exchange Platform) is both a standard format for radiological data and a network for the exchange of automatic monitoring data. The latest release of the format is version 2.0, which has been in use since the beginning of 2002.
Keywords: environmental radioactivity, Euratom, monitoring report, REMdb
Procedia PDF Downloads 443
194 Evaluating the Business Improvement District Redevelopment Model: An Ethnography of a Tokyo Shopping Mall
Authors: Stefan Fuchs
Abstract:
Against the backdrop of the proliferation of shopping malls in Japan during the last two decades, this paper presents the results of an ethnography conducted at a recently built suburban shopping mall in Western Tokyo. Through the analysis of the lived experiences of local residents, mall customers and the mall management, this paper evaluates the benefits and disadvantages of the Business Improvement District (BID) model, which was implemented as an urban redevelopment strategy in the area surrounding the shopping mall. The results of this research project show that while the BID model has in some respects contributed to the economic prosperity and to the perceived convenience of the area, it has led to gentrification, and the redevelopment shows some deficiencies with regard to the inclusion of the elderly population as well as to the democratization of the decision-making process within the area. In Japan, shopping malls have been growing steadily both in size and number since a series of deregulation policies was introduced in the year 2000 in an attempt to boost the domestic economy and to rejuvenate urban landscapes. Shopping malls have thereby become defining spaces of the built environment and are arguably important places of social interaction. Notwithstanding the vital role they play as factors of urban transformation, they have been somewhat overlooked in research on Japan, especially with respect to their meaning for people’s everyday lives. By examining the ways people make use of space in a shopping mall, the research project presented in this paper addresses this gap in the research. Moreover, the research site of this project is one of the few BIDs in Japan, and the results presented in this paper can give an indication of the scope of the future applicability of this urban redevelopment model. The data presented in this research were collected during nine months of ethnographic fieldwork in and around the shopping mall. 
This ethnography includes semi-structured interviews with ten key informants as well as direct and participant observations examining the lived experiences and perceptions of people living, shopping or working at the shopping mall. The analysis of the collected data focused on recurring themes, aiming ultimately at capturing different perspectives on the same aspects. In this manner, the research project documents the social agency of different groups within one communal network. The analysis of perceptions towards the urban redevelopment around the shopping mall has shown that mainly the mall customers and large businesses benefit from the BID redevelopment model. While local residents benefit to some extent from their neighbourhood becoming more convenient for shopping, they perceive themselves as being disadvantaged by changing demographics due to rising living expenses, the general noise level and the prioritisation of a certain customer segment or age group at the shopping mall. Although the shopping mall examined in this research project is just one example, the findings suggest that future urban redevelopment policies will have to provide incentives for landowners and development companies to think of other ways of transforming underdeveloped areas.
Keywords: business improvement district, ethnography, shopping mall, urban redevelopment
Procedia PDF Downloads 136
193 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition
Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman
Abstract:
Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations with non-linearity in the data; accurate estimation therefore becomes difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in addressing the modeling of the complex, non-linear real world. ANNs have more general and flexible functional forms than traditional statistical methods and can deal effectively with non-linearity. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time wheat chlorophyll estimation. Cloud-free LANDSAT 8 scenes were acquired (Feb-March 2016-17) at the same time as the ground-truthing campaign for chlorophyll estimation performed using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. 
For training the MLP, 61.7% of the data were used; 28.3% of the data were used for validation; and the remaining 10% were used to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the most sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the use of high-spatial-resolution satellite imagery for the retrieval of crop chlorophyll content with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat
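The band-ratio indices named above follow simple closed forms. A minimal sketch for NDVI and GNDVI, using hypothetical LANDSAT 8 surface reflectances (band 3 = green, band 4 = red, band 5 = near-infrared; the pixel values are illustrative, not the study's data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green)

# hypothetical surface reflectances for a healthy wheat pixel
green, red, nir = 0.08, 0.05, 0.45

print(round(ndvi(nir, red), 3))     # high NDVI indicates dense green canopy
print(round(gndvi(nir, green), 3))
```

The CARI family of indices is built from the same kind of band arithmetic, with extra terms that correct for background soil reflectance.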
Procedia PDF Downloads 146
192 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages
Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour
Abstract:
The economy of any country is dependent on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system. Better operation of heavy vehicles on highways and arterials could improve the network’s efficiency and reliability. In many cases, roundabouts can respond better than at-grade signalized intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged concept of the turbo-roundabout is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance level of an at-grade intersection, a conventional two-lane roundabout, and a basic turbo roundabout for freight movements. To analyze and evaluate the performance of the signalized intersections and the roundabouts, micro-simulation models were developed in PTV VISSIM. The networks chosen for this study were used to experiment with and evaluate changes in the performance of vehicle movements under different geometric and flow scenarios. Several scenarios were examined when attempting to assess the impacts of various geometric designs on vehicle movements. The overall traffic efficiency depends on the geometric layout of the intersections together with the traffic congestion rate, hourly volume, frequency of heavy vehicles, type of road, and the ratio of major-street to side-street traffic. The traffic performance was determined by evaluating the delay time, number of stops, and queue length at each intersection for varying truck percentages. 
The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when the traffic demand is low, even with high truck volume. More specifically, two-lane roundabouts are seen to have shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario where the volume is highest and the truck movement and left-turn movement are at their maximum, the signalized intersection has 3 times, and the turbo-roundabout 5 times, the queue length of a two-lane roundabout on major roads. Similarly, on minor roads, signalized intersections and turbo-roundabouts have queue lengths 11 times longer than two-lane roundabouts for the same scenario. As shown across all the developed scenarios, as the traffic demand lowers, the queue lengths of turbo-roundabouts shorten. This indicates that turbo-roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which different intersections perform better than each other.
Keywords: at-grade intersection, simulation, turbo-roundabout, two-lane roundabout
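The queue-length comparison above reads naturally as ratios against the two-lane roundabout baseline. A sketch with illustrative queue values (not the study's simulation output) for the highest-volume scenario:

```python
# hypothetical major-road queue lengths (metres) under the highest-volume,
# maximum-truck, maximum-left-turn scenario; values chosen only to
# reproduce the 3x / 5x relationship described in the text
queues = {
    "two-lane roundabout": 40.0,
    "signalized intersection": 120.0,
    "turbo-roundabout": 200.0,
}

base = queues["two-lane roundabout"]
ratios = {name: q / base for name, q in queues.items()}
print(ratios)  # signalized = 3x, turbo = 5x the two-lane roundabout queue
```

In a real study these queue lengths would come from VISSIM's per-approach evaluation output, averaged over multiple simulation seeds.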
Procedia PDF Downloads 150
191 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management in the field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. 
One network is trained with radar and optical bands, while the other is trained with DEM features, to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method provides fast and accurate decision-making support with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
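A minimal sketch of the channel-concatenation style of feature-map fusion. Whether the authors concatenate, add, or gate the encoder outputs is not stated in the abstract, so concatenation here is an assumption, and the toy maps stand in for real encoder activations:

```python
def concat_fuse(a, b):
    """Fuse two feature maps shaped [channels][rows][cols] by channel
    concatenation; the spatial dimensions must agree."""
    assert len(a[0]) == len(b[0]) and len(a[0][0]) == len(b[0][0]), \
        "spatial dims must match"
    return a + b

# toy 2x2 feature maps: one channel from the optical/radar encoder,
# two channels from the DEM encoder
opt_feats = [[[1.0, 2.0], [3.0, 4.0]]]
dem_feats = [[[0.5, 0.5], [0.5, 0.5]],
             [[0.1, 0.2], [0.3, 0.4]]]

fused = concat_fuse(opt_feats, dem_feats)
print(len(fused))  # 3 channels, ready for a shared depth decoder
```

In a deep learning framework this is a single tensor concatenation along the channel axis, typically followed by a 1x1 convolution that learns how to mix the two modalities.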
Procedia PDF Downloads 10
190 Nuancing the Indentured Migration in Amitav Ghosh's Sea of Poppies
Authors: Murari Prasad
Abstract:
This paper is motivated by the implications of indentured migration depicted in Amitav Ghosh’s critically acclaimed novel, Sea of Poppies (2008). Ghosh’s perspective on the experiences of North Indian indentured labourers moving from their homeland to a distant and unknown location across the seas suggests a radical attitudinal change among the migrants on board the Ibis, a schooner chartered to carry the recruits from Calcutta to Mauritius in the late 1830s. The novel unfolds the life-altering trauma of the bonded servants, including their efforts to maintain a sense of self while negotiating significant social and cultural transformations during the voyage, which leads to the breakdown of familiar life-worlds. Equally, the migrants are introduced to an alternative network of relationships to ensure their survival away from land. They relinquish their entrenched beliefs and prejudices and commit themselves to a new brotherhood formed by ‘ship siblings.’ With the official abolition of direct slavery in 1833, the supply of cheap labour to the sugar plantations in British colonies from Mauritius and Fiji to East Africa and the Caribbean sharply declined. Around the same time, China’s attempt to prohibit the illegal importation of opium from British India into China threatened the lucrative opium trade. To run the ever-profitable plantation colonies with cheap labour, Indian peasants, wrenched from their village economies, were indentured to plantations as girmitiyas (vernacularized from ‘agreement’) by the colonial government using the ploy of an optional form of recruitment. After the British conquest of the Isle of France in 1810, Mauritius became Britain’s premier sugar colony, bringing waves of Indian immigrants to the island. 
In the articulations of their subjectivities, one notices how the recruits cope with the alienating drudgery of indenture, mitigate the hardships of the voyage, and forge new ties through pragmatic acts of cultural syncretism in a forward-looking autonomous community of ‘ship-siblings’ following the fracture of traditional identities. This paper tests the hypothesis that Ghosh envisions a kind of futuristic/utopian political collectivity in a hierarchically rigid, racially segregated and identity-obsessed world. In order to ground this claim and frame the complex representations of alliance and love across the boundaries of caste, religion, gender and nation, the essential methodology here is a close textual analysis of the novel. This methodology is geared towards explicating the utopian futurity that the novel gestures towards by underlining the new regulations of life during the voyage and the dissolution of multiple differences among the indentured migrants on board the Ibis.
Keywords: indenture, colonial, opium, sugar plantation
Procedia PDF Downloads 398
189 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited duration of the ECG recordings and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from abnormal ones. 
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP and SVM networks was 0.893 and 0.947, respectively. In addition, the results of the proposed algorithm indicated that greater use of nonlinear characteristics in classifying normal and patient signals gave better performance. Today, research is aimed at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the magnitude of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some information in this signal remains hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
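The HRV feature extraction can be sketched end to end: R-R intervals from R-peak times, two standard statistical descriptors (SDNN, RMSSD), and the Poincaré (return-map) axes SD1/SD2 as simple nonlinear features, computed via the usual identities. The peak times below are toy values; the Kalman filtering and Pan-Tompkins detection steps are assumed to have already produced them.

```python
import math

def rr_intervals(r_peaks):
    """R-peak times in seconds -> successive R-R intervals in ms."""
    return [(b - a) * 1000.0 for a, b in zip(r_peaks, r_peaks[1:])]

def sdnn(rr):
    """Standard deviation of R-R intervals (sample standard deviation)."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive R-R differences."""
    d = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(x * x for x in d) / len(d))

def poincare_sd1_sd2(rr):
    """Poincare (return-map) axes via the standard SDNN/RMSSD identities:
    SD1 = RMSSD / sqrt(2), SD2 = sqrt(2 * SDNN^2 - SD1^2)."""
    sd1 = rmssd(rr) / math.sqrt(2.0)
    sd2 = math.sqrt(max(2.0 * sdnn(rr) ** 2 - sd1 ** 2, 0.0))
    return sd1, sd2

r_peaks = [0.0, 0.80, 1.62, 2.40]   # toy detector output (seconds)
rr = rr_intervals(r_peaks)          # ~[800, 820, 780] ms
sd1, sd2 = poincare_sd1_sd2(rr)
print(round(sdnn(rr), 1), round(rmssd(rr), 1), round(sd1, 1), round(sd2, 1))
```

The resulting feature vector (SDNN, RMSSD, SD1, SD2, plus any other return-map descriptors) is what would be fed to the MLP or SVM classifier.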
Procedia PDF Downloads 262
188 Japanese and Europe Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective
Authors: S. Fantin
Abstract:
This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis and policy initiatives. Recent doctrine was taken into account, too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (the General Data Protection Regulation) and a directive (Directive (EU) 2016/680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. Based on the comparative analysis, some asymmetries arise. The main ones concern the definition of personal information and the scope of the two frameworks. Furthermore, the rights of the data subjects are articulated differently in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan’s main text of reference is the Basic Act on Cybersecurity, while the European Union has a more fragmented legal structure (to name a few, the Network and Information Security Directive, the Critical Infrastructure Directive and the Directive on Attacks against Information Systems). 
On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene is neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators’ liability by turning such a burden into a set of recommendations to be observed primarily by citizens. The reasons to fill such normative gaps are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory stance globally. Secondly, empirical findings from the EUNITY project showed that recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called ‘Adequacy Decision’). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of the collaboration between the two regions and the trans-national nature of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to wisely strengthen their partnership.
Keywords: cybersecurity, data protection, European Union, Japan
Procedia PDF Downloads 123
187 Application of IoTs Based Multi-Level Air Quality Sensing for Advancing Environmental Monitoring in Pingtung County
Authors: Men An Pan, Hong Ren Chen, Chih Heng Shih, Hsing Yuan Yen
Abstract:
Pingtung County is located in the southernmost region of Taiwan. During the winter season, pollutants that are insufficiently dispersed because of the downwash of the northeast monsoon lead to poor air quality in the County. Various control methods have been implemented, including air pollution permitting, air pollution fee collection, control of oil fumes from the catering sector, smoke detection of diesel vehicles, regular inspection of motorcycles, and subsidies for low-polluting vehicles. Moreover, to further mitigate the air pollution, additional controlling strategies are also carried out, such as construction site control, prohibition of open-air agricultural waste burning, improvement of river dust, and strengthening of road cleaning operations. The combined efforts have significantly reduced air pollutants in the County. However, in order to monitor the ambient air quality effectively and promptly, the County has subsequently deployed a total of 400 Internet of Things (IoT) micro-sensors for PM2.5 and VOC detection, alongside 3 air quality monitoring stations of the Environmental Protection Agency (EPA), covering the 33 townships of the County. The covered area has more than 1,300 listed factories and 5 major industrial parks, thus forming an IoT-based multi-level air quality monitoring system. The IoT multi-level air quality sensors were combined with other strategies such as “sand and gravel dredging area technology monitoring”, “banning open burning”, “intelligent management of construction sites”, “real-time notification of activation response”, “nighthawk early bird plan with micro-sensors”, “unmanned aircraft (UAV) combining land and air monitoring of abnormal emissions”, and “animal husbandry odour detection service”. 
The satisfaction rate with air quality control, measured in a 2021 public survey, reached a high 81%, an increase of 46% compared to 2018. Air pollution complaints for the whole of 2021 totaled 4,213, in contrast to 7,088 in 2020, a reduction of almost 41%. Owing to the spatial-temporal resolution afforded by the micro-sensors, the IoT air quality monitoring system assists and strengthens the existing EPA air quality monitoring network and provides real-time control of air quality; hot spots and potential pollution locations can therefore be determined in a timely manner for law enforcement. Hence, remarkable results were obtained within two years: both a reduction in public complaints and better air quality were achieved through the implementation of the present IoT system for real-time air quality monitoring throughout Pingtung County.
Keywords: IoT, PM, air quality sensor, air pollution, environmental monitoring
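The reported improvements can be sanity-checked directly from the raw figures. A minimal sketch (the complaint counts and survey percentages come from the abstract; the function name and the reading of the 46% figure as a percentage-point rise are illustrative assumptions):

```python
# Sanity-check the reported Pingtung County air quality statistics.
# Raw figures are from the abstract; the helper is illustrative only.

def percent_reduction(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100

# Complaints fell from 7,088 (2020) to 4,213 (2021).
complaint_drop = percent_reduction(7088, 4213)
print(f"Complaint reduction: {complaint_drop:.1f}%")  # ~40.6%, i.e. almost 41%

# If the 46% rise over 2018 is read as percentage points,
# the implied 2018 satisfaction baseline is about 35%.
baseline_2018 = 81 - 46
print(f"Implied 2018 satisfaction: {baseline_2018}%")
```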
Procedia PDF Downloads 73186 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis
Authors: Elham Koohikerade, Silvio Jose Gumiere
Abstract:
In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis.
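The spectral indices used for calibration are computed from surface reflectance bands with standard formulas. A minimal sketch (the formulas are the published MODIS/literature definitions; the sample reflectance values are invented for illustration):

```python
# Standard spectral indices of the kind used to calibrate the LSTM model.
# Reflectances are unitless values in [0, 1]; sample numbers are invented.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir: float, red: float, blue: float) -> float:
    """Enhanced Vegetation Index (MODIS coefficients)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def nmdi(nir: float, swir1: float, swir2: float) -> float:
    """Normalized Multi-band Drought Index."""
    return (nir - (swir1 - swir2)) / (nir + (swir1 - swir2))

# Example reflectances for a dense crop canopy (illustrative only).
print(round(ndvi(nir=0.45, red=0.08), 3))   # dense vegetation -> ~0.7
print(round(evi(nir=0.45, red=0.08, blue=0.04), 3))
print(round(nmdi(nir=0.45, swir1=0.20, swir2=0.12), 3))
```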
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within Volumetric Soil Moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring
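The hourly predictor chains LSTM cell steps over time. A toy single-unit forward step in pure Python can illustrate the mechanism (the scalar weights, the shared-weight simplification, and the sample VSM sequence are all assumptions; the abstract does not describe the actual architecture or trained parameters):

```python
import math

# One forward step of an LSTM cell: the building block behind an hourly
# soil moisture predictor. Toy scalar weights for a 1-feature, 1-unit cell;
# a real model would use learned weight matrices.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w=0.5, u=0.3, b=0.0):
    """One LSTM step; toy weights shared across all four gates."""
    f = sigmoid(w * x + u * h_prev + b)      # forget gate
    i = sigmoid(w * x + u * h_prev + b)      # input gate
    o = sigmoid(w * x + u * h_prev + b)      # output gate
    g = math.tanh(w * x + u * h_prev + b)    # candidate cell state
    c = f * c_prev + i * g                   # new cell state
    h = o * math.tanh(c)                     # new hidden state
    return h, c

# Feed a short hourly soil-moisture sequence (VSM, m3/m3; invented values).
h, c = 0.0, 0.0
for vsm in [0.28, 0.27, 0.26, 0.25]:
    h, c = lstm_step(vsm, h, c)
print(round(h, 4))  # hidden state summarizing the drying trend
```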
Procedia PDF Downloads 41185 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes
Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert
Abstract:
In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of deep geothermal reservoir permeability reduction due to fine-particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall long-term performance of the geothermal system. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). To first identify the clays responsible for clogging, a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized than chlorite by the flow in the pore network. Thus, based on these results, illite particles were prepared and used in core flooding in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters, such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates, were measured.
Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline by more than a factor of 2 could be observed for pore velocities representative of in-situ conditions. Further details of the retention of particles in the columns were obtained from Magnetic Resonance Imaging and X-ray Tomography, showing that the particle deposition is nonuniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of the DLVO theory.
Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments
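The ionic strength effect is consistent with classical DLVO theory: a higher ionic strength compresses the electrical double layer and lowers the repulsive energy barrier between particles, promoting aggregation and retention. A minimal sketch for two equal spheres (the Hamaker constant, surface potential, particle radius and temperature below are generic assumptions, not measured values from this study):

```python
import math

# DLVO energy barrier between two equal spheres vs. ionic strength.
# Material parameters are illustrative assumptions, not values from the study.
E, NA, KB, EPS0 = 1.602e-19, 6.022e23, 1.381e-23, 8.854e-12
EPS = 78.5 * EPS0      # permittivity of water, F/m
T = 298.0              # temperature, K
R = 50e-9              # particle radius, m (~100 nm hydrodynamic diameter)
PSI = -0.030           # surface potential, V
A_H = 1e-20            # Hamaker constant, J

def debye_kappa(ionic_strength_molar: float) -> float:
    """Inverse Debye length (1/m) for a 1:1 electrolyte."""
    n = 2 * NA * ionic_strength_molar * 1000  # ion number density, 1/m^3
    return math.sqrt(E**2 * n / (EPS * KB * T))

def barrier(ionic_strength_molar: float) -> float:
    """Maximum total DLVO interaction energy (J) over separations up to 20 nm."""
    k = debye_kappa(ionic_strength_molar)
    best = -float("inf")
    for step in range(1, 2000):
        h = step * 1e-11                      # separation in 0.01 nm steps
        v_edl = 2 * math.pi * EPS * R * PSI**2 * math.exp(-k * h)  # repulsion
        v_vdw = -A_H * R / (12 * h)                                # attraction
        best = max(best, v_edl + v_vdw)
    return best

low_I, high_I = barrier(1e-3), barrier(1e-1)
print(low_I > high_I)  # higher ionic strength -> lower barrier -> aggregation
```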
Procedia PDF Downloads 176184 How to “Eat” without Actually Eating: Marking Metaphor with Spanish Se and Italian Si
Authors: Cinzia Russi, Chiyo Nishida
Abstract:
Using data from online corpora (Spanish CREA, Italian CORIS), this paper examines the relatively understudied use of Spanish se and Italian si exemplified in (1) and (2), respectively. (1) El rojo es … el que se come a los demás. ‘The red (bottle) is the one that outshines/*eats the rest.’ (2) … ebbe anche la saggezza di mangiarsi tutto il suo patrimonio. ‘… he even had the wisdom to squander/*eat all his estate.’ In these sentences, se/si accompanies the consumption verb comer/mangiare ‘to eat’, without which the sentences would not be interpreted appropriately. This se/si cannot readily be attributed to any of the multiple functions so far identified in the literature: reflexive, ergative, middle/passive, inherent, benefactive, and complete consumptive. In particular, this paper argues against the feasibility of a recent construction-based analysis of sentences like (1) and (2), which situates se/si within a prototype-based network of meanings all deriving from the central meaning of 'COMPLETE CONSUMPTION' (e.g., Alice se comió toda la torta/Alice si è mangiata tutta la torta ‘Alice ate the whole cake’). Clearly, the empirical adequacy of such an account is undermined by the fact that the events depicted in the se/si-sentences at issue do not always entail complete consumption, because they may lack an INCREMENTAL THEME, the distinguishing property of complete consumption. Alternatively, it is proposed that the sentences under analysis represent instances of verbal METAPHORICAL EXTENSION: se/si represents an explicit marker of this cognitive process, which has developed independently from the complete consumptive se/si, and the meaning extension is captured by the general tenets of Conceptual Metaphor Theory (CMT). Two conceptual domains, source (DS) and target (DT), are related by similarity, assigning an appropriate metaphorical interpretation to DT. The domains paired here are comer/mangiare (DS) and comerse/mangiarsi (DT).
The eating event (DS) involves (a) the physical process of x_EATER grinding y_FOOD-STUFF into pieces and swallowing it; and (b) the aspect of x_EATER savoring y_FOOD-STUFF and being nurtured by it. In the physical act of eating, x_EATER has dominance and exercises its force over y_FOOD-STUFF. This general sense of dominance and force is mapped onto DT and is manifested in the ways exemplified in (1) and (2), and many others. According to CMT, two other properties are observed in each DS & DT pair. First, DS tends to be more physical and concrete and DT more abstract, and systematic mappings are established between constituent elements in DS and those in DT: x_EATER corresponds to the element that destroys, and y_FOOD-STUFF to the element that is destroyed in DT, as exemplified in (1) and (2). Though the metaphorical extension marker se/si appears by far most frequently with comer/mangiare in the corpora, similar systematic mappings are observed in several other verb pairs, for example, jugar/giocare ‘to play (games)’ and jugarse/giocarsi ‘to jeopardize/risk (life, reputation, etc.)’, and perder/perdere ‘to lose (an object)’ and perderse/perdersi ‘to miss out on (an event)’. Thus, this study provides evidence that languages may indeed formally mark metaphor using means available to them.
Keywords: complete consumption value, conceptual metaphor, Italian si/Spanish se, metaphorical extension
Procedia PDF Downloads 53183 Digital Subsistence of Cultural Heritage: Digital Media as a New Dimension of Cultural Ecology
Authors: Dan Luo
Abstract:
As climate change exacerbates the exposure of cultural heritage to climatic stressors, scholars pin their hopes on digital technology to help heritage sites avoid surprises. The virtual museum has been regarded as a highly effective technology that enables people to gain an enjoyable visiting experience and immersive information about cultural heritage. The technology clearly reproduces the images of tangible cultural heritage, and the aesthetic experience created by new media helps consumers escape from a real environment full of uncertainty. A new cultural anchor has appeared outside the cultural sites. This article synthesizes the international literature on the virtual museum using CiteSpace diagrams, focusing on tangible cultural heritage, and identifies the situation that has emerged in the process of responding to climate change: (1) digital collections are distinct cultural assets for the public; (2) the media ecology changes the way people think about and encounter cultural heritage; (3) cultural heritage may live forever in the digital world. This article provides practical information on managing cultural heritage in a changing climate through the case of the Dunhuang Mogao Grottoes in the far northwest of China, a World Cultural Heritage site famous for its remarkable and sumptuous murals. This monument is a synthesis of art comprising 735 Buddhist cave temples, listed by UNESCO as a World Cultural Heritage site. The caves contain some extraordinary examples of Buddhist art spanning a period of 1,000 years: the architectural forms, the sculptures in the caves, and the murals on the walls together constitute a remarkable aesthetic experience. Unfortunately, this magnificent treasure has been threatened by increasingly frequent dust storms and precipitation.
The Dunhuang Academy has been using digital technology since the last century to preserve this immovable cultural heritage, especially the murals in the caves. Dunhuang culture has since become a new media culture, as the art has been introduced to world audiences through exhibitions, VR, video, etc. The paper adopts a qualitative research method, using NVivo software to code the collected material. The author carried out fieldwork in Dunhuang City, including participation in 10 exhibitions and 20 Dunhuang-themed online salons. In addition, 308 visitors (aged 6-75) who are fans of the art and have experienced Dunhuang culture online were interviewed. These interviewees have been exposed to Dunhuang culture through different media, and they are acutely aware of the threat to this cultural heritage. The conclusion is that the unique aura of the cultural heritage is always emphasized, and that digital media breed digital twins of cultural heritage. In addition, digital media make it possible for cultural heritage to reintegrate into the daily life of the masses. Visitors gain the opportunity to imitate the mural figures through enlarged or emphasized images, but also lose the perspective needed to understand the whole cultural life. New media construct a new life aesthetics apart from the authorized heritage discourse.
Keywords: cultural ecology, digital twins, life aesthetics, media
Procedia PDF Downloads 81182 Comparative Appraisal of Polymeric Matrices Synthesis and Characterization Based on Maleic versus Itaconic Anhydride and 3,9-Divinyl-2,4,8,10-Tetraoxaspiro[5.5]-Undecane
Authors: Iordana Neamtu, Aurica P. Chiriac, Loredana E. Nita, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Alina Diaconu
Abstract:
In the last decade, the attention of many researchers has focused on the synthesis of innovative “intelligent” copolymer structures with great potential for different uses. This considerable scientific interest is stimulated by the possibility of significant improvements in the physical, mechanical, thermal and other important specific properties of these materials. Functionalization of polymers during synthesis, by designing a suitable composition with the desired properties and applications, is recognized as a valuable tool. This work presents a comparative study of the properties of the new copolymers poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) and poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) obtained by radical polymerization in dioxane, using 2,2′-azobis(2-methylpropionitrile) as the free-radical initiator. The comonomers are able to generate special effects, for example network formation, biodegradability and biocompatibility, gel formation capacity, binding properties, amphiphilicity, good oxidative and thermal stability, good film-forming ability, and temperature and pH sensitivity. Maleic anhydride (MA), as well as its isostructural analog itaconic anhydride (ITA), are polyfunctional monomers widely used in the synthesis of reactive macromolecules with linear, hyperbranched and self-assembled structures to prepare high-performance engineering, bioengineering and nanoengineering materials. The incorporation of spiroacetal groups in polymer structures improves solubility and adhesive properties, induces good oxidative and thermal stability, and yields good fibers or films with good flexibility and tensile strength. Also, the spiroacetal rings induce interactions at the ether oxygens, such as hydrogen bonds or coordinate bonds with other functional groups, determining bulkiness and stiffness.
The synthesized copolymers are analyzed by DSC, oscillatory and rotational rheological measurements, and dielectric spectroscopy, with the aim of elucidating the heating behavior, the solution viscosity as a function of shear rate and temperature, and the relaxation processes and motion of functional groups present in the side chain around the main chain or bonds of the side chain. Acknowledgments: This work was financially supported by the grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-132/2014, “Magnetic biomimetic supports as alternative strategy for bone tissue engineering and repair” (MAGBIOTISS).
Keywords: poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); DSC; oscillatory and rotational rheological analysis; dielectric spectroscopy
Procedia PDF Downloads 227181 Green Space and Their Possibilities of Enhancing Urban Life in Dhaka City, Bangladesh
Authors: Ummeh Saika, Toshio Kikuchi
Abstract:
Population growth and urbanization are global phenomena. With the rapid progress of technology, many cities in the international community are facing serious problems of urbanization. There is no doubt that urbanization will continue to have significant impacts on ecology, economy and society at the local, regional and global levels. The inhabitants of Dhaka city suffer from a lack of proper urban facilities. Green spaces are needed for the different functional and leisure activities of urban dwellers. With growing densification, a number of green spaces in Dhaka city have been converted to other uses, so the greenery of the city decreases gradually. Moreover, the existing green space is frequently threatened by encroachment. The role of green space, at both community and city level, is important for improving the natural environment and social ties for future generations. Therefore, green space needs to be made more effective for public interaction. The main objective of this study is to address the effectiveness of the urban green space (urban parks) of Dhaka city. Two approaches are used: first, analyzing the long-term spatial changes of urban green space using GIS, and second, investigating the relationship of the urban park network with the physical and social environment. The case study site covers eight urban parks of the Dhaka metropolitan area of Bangladesh. Two aspects (physical and social) are considered. For the physical aspect, satellite images and aerial photos of different years are used to detect changes in the urban parks. For the social aspect, the methods used are questionnaire surveys, interviews, observation, photographs, sketches and previous information about the parks, to analyze their social environment. After processing all data with descriptive statistics, the results are presented as maps using GIS.
According to physical size, the parks of Dhaka city are classified into four types: small, medium, large and extra-large parks. The results show that the physical and social environment of urban parks varies with their size. In small parks, the physical environment is moderate, owing to new tree planting and area expansion. In medium parks, however, the physical environment is poor; for example, trees decrease and exposed soil increases. On the other hand, the physical environment of large and extra-large parks is in good condition because of plentiful vegetation and good management. Regarding the social environment, in small parks people come mainly from the surrounding area, and the parks are used mainly as waiting places. In medium parks, people come from different places to attend various occasions. In large and extra-large parks, people come from every part of the city for tourism purposes. Urban parks are an important source of green space and influence both the physical and social environment of urban areas. Nowadays, green space gradually decreases as it is converted to other uses. This research reveals that changes in urban parks influence both the physical and social environment and also have an impact on urban life.
Keywords: physical environment, social environment, urban life, urban parks
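The four-way size classification described above can be sketched as a simple thresholding step (the hectare cut-offs and park areas below are hypothetical; the abstract does not report the actual boundaries or areas):

```python
# Classify parks into the study's four size categories.
# The hectare thresholds and sample areas are invented for illustration.

def classify_park(area_ha: float) -> str:
    if area_ha < 2:
        return "Small"
    elif area_ha < 10:
        return "Medium"
    elif area_ha < 40:
        return "Large"
    return "Extra Large"

# Hypothetical areas for the eight surveyed parks (hectares).
parks = {"Park A": 1.2, "Park B": 3.5, "Park C": 8.0, "Park D": 15.0,
         "Park E": 0.8, "Park F": 25.0, "Park G": 55.0, "Park H": 4.1}
for name, area in sorted(parks.items()):
    print(f"{name}: {classify_park(area)}")
```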
Procedia PDF Downloads 429180 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique
Authors: Harpal Singh, Sakshi Batra
Abstract:
Widespread, easy access to multimedia content and the possibility of making numerous copies without significant loss of fidelity have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology: embedding data or a special pattern (a watermark) in multimedia content, where this information can later prove ownership in case of a dispute, trace the marked document’s dissemination, identify a misappropriating person, or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry grows. Video watermarking has been proposed in recent years to address the illicit copying and distribution of videos; it is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges compared to image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between moving and motionless regions; they are also particularly vulnerable to attacks, for example, frame swapping, statistical analysis, rotation, noise, median filtering and cropping. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD), using a redundant wavelet. This scheme utilizes the various transforms to embed watermarks on different layers of a hybrid system.
For this purpose, the video frames are partitioned into layers (RGB), and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. Thus, the same key is required for watermark embedding and extraction, and the key must be shared between the owner and the verifier via some secure channel. This paper demonstrates the performance by considering different quantitative metrics, namely Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) and correlation values, and also applies several attacks to prove the robustness. The experimental results demonstrate that the proposed scheme can withstand a variety of video processing attacks while preserving imperceptibility.
Keywords: discrete wavelet transform, robustness, video watermarking, watermark
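Two of the evaluation metrics mentioned above can be stated precisely: PSNR compares watermarked frames against originals, and normalized correlation compares the extracted watermark with the embedded one. A pure-Python sketch (the frame and watermark samples are invented; the paper's actual test videos and watermarks are not described here):

```python
import math

# PSNR between an original and a watermarked frame (8-bit pixels), and
# normalized correlation between embedded and extracted watermarks.
# The sample data below are invented for illustration.

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical signals."""
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

def norm_corr(w1, w2):
    """Normalized correlation; 1.0 for a perfectly recovered watermark."""
    num = sum(a * b for a, b in zip(w1, w2))
    den = math.sqrt(sum(a * a for a in w1)) * math.sqrt(sum(b * b for b in w2))
    return num / den

frame = [120, 121, 119, 125, 130, 128]
marked = [p + 1 for p in frame]          # imperceptible +1 shift per pixel
print(round(psnr(frame, marked), 2))     # MSE = 1 -> PSNR = 48.13 dB
print(norm_corr([1, -1, 1, 1], [1, -1, 1, 1]))  # identical watermark -> 1.0
```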
Procedia PDF Downloads 224179 Moving beyond Learner Outcomes: Culturally Responsive Recruitment, Training and Workforce Development
Authors: Tanya Greathosue, Adrianna Taylor, Lori Darnel, Eileen Starr, Susie Ryder, Julie Clockston, Dawn Matera Bassett, Jess Retrum
Abstract:
The United States has an identified need to address the social work mental and behavioral health workforce shortage, with a focus on culturally diverse and responsive mental and behavioral health practitioners to adequately serve its rapidly growing multicultural communities. The U.S. is experiencing rapid demographic changes; ensuring that mental and behavioral health services are effective and accessible for diverse communities is essential for improving overall health outcomes. In response to this need, we developed a training program focused on interdisciplinary collaboration, evidence-based practices, and culturally responsive services. The success of the training program, funded by the Health Resource Service Administration (HRSA) Behavioral Health Workforce Education and Training (BHWET) program, has provided the foundation for stage two of our programming. In addition to HRSA/BHWET, we are receiving funding from Colorado Access, a state workforce development initiative, and Kaiser Permanente, a healthcare provider network in the United States. We have moved beyond improved learner outcomes to increasing the recruitment of historically excluded, disproportionately mistreated learners, mentoring students to improve retention, and developing a successful, culturally responsive, diverse workforce. The authors will utilize a pretest-posttest comparison group design and trend analysis to evaluate the success of the training program. Comparison groups will be matched on age, gender identification, race, and income, as well as prior experience in the field and time in the degree program. This article describes our culturally responsive training program. Our goals are to increase the recruitment and retention of historically excluded, disproportionately mistreated learners. We achieve this by integrating cultural humility and sensitivity training into the educational curricula for our scholars, who participate in cohort classroom and seminar learning.
Additionally, we provide our community partners who serve as internship sites with ongoing continuing education on how to promote and develop inclusive and supportive work environments for our learners. This work will be of value to mental and behavioral health care practitioners who serve historically excluded and mistreated populations. Participants will learn about culturally informed best practices to increase the recruitment and retention of culturally diverse learners. Additionally, participants will learn how to create a culturally responsive training program that encourages an inclusive community for their learners through cohort learning, mentoring, community networking, and critical accountability.
Keywords: culturally diverse mental health practitioners, recruitment, mentorship, workforce development, underserved clinics, professional development
Procedia PDF Downloads 23178 Networked Media, Citizen Journalism and Political Participation in Post-Revolutionary Tunisia: Insight from a European Research Project
Authors: Andrea Miconi
Abstract:
The research will focus on the results of the Tempus European Project eMEDia, dedicated to cross-media journalism. The project is funded by the European Commission and involves four European partners - IULM University, Tampere University, the University of Barcelona, and the Mediterranean network Unimed - and three Tunisian universities - IPSI La Manouba, Sfax and Sousse - along with the Tunisian Ministry for Higher Education and the National Syndicate of Journalists. The focus on the Tunisian condition is basically due to the role played by digital activists in the country's recent history. The research is dedicated to the relationship between political participation, news-making practices and the spread of social media as it affects Tunisian society. As we know, Tunisia during the Arab Spring was widely considered a laboratory for analyzing the use of new technologies for political participation. Nonetheless, the literature about the Arab Spring actually fell short in explaining the genesis of the phenomenon, on the one hand by isolating technologies as a causal factor in the spread of demonstrations, and on the other by analyzing the North African condition through a biased perspective. Nowadays, it is interesting to focus on the consolidation of the information environment three years after the uprisings. What is relevant is that only a close, in-depth analysis of Tunisian society can provide an explanation of its history, and namely of the part played by digital media in the overall evolution of the political system. That is why the research is based on different methodologies: a desk stage, interviews, and in-depth analysis of communication practices. Networked journalism is the condition determined by technological innovation in news-making activities: a condition under which professional journalists can no longer be considered the only players in the information arena, and new skills must be developed.
Along with democratization, nonetheless, so-called citizen journalism is also likely to produce some ambiguous effects, such as the lack of professional standards and the spread of information cascades, which may prove particularly dangerous in an evolving media market like Tunisia's. This is why, according to the project, a new professional profile must be defined, one able to manage this new condition, and one which can hardly be reduced to the parameters of traditional journalistic work. Rather than simply using new devices for news visualization, communication professionals must also be able to dialogue with all the new players and to accept the decentralized nature of digital environments. This networked nature of news-making seemed to emerge during the Tunisian revolution, when bloggers, journalists, and activists used to retweet each other. Nonetheless, this intensification of communication exchange was inspired by the political climax of the uprising, while all media, by definition, are also supposed to have effects on people's state of mind, culture and daily life routines. That is why it is worth analyzing the consolidation of these practices in a normal, post-revolutionary situation.
Keywords: cross-media, education, Mediterranean, networked journalism, social media, Tunisia
Procedia PDF Downloads 202177 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method
Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh
Abstract:
In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened due to the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there is a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the widely recognized Discrete Element Method (DEM), implemented in the Particle Flow Code (PFC3D), which represents sand as individual granular elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D with a simple Darcy flow to understand the sand production behavior of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond model of PFC3D, using a solid cylindrical model of 10 m height and 10 m diameter. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and the sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures, using the same model geometry. The fluid flow network comprised every four particles connected with tetrahedral flow pipes around a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validation of the flow rate-permeability relationship to verify Darcy's law.
The effect of model size scaling on sanding was also investigated using a model of 4 m height and 2 m diameter. The parallel bond model successfully calibrated the sample's strength of 4.4 MPa, showing a sharp peak strength followed by strain-softening, similar to the behavior of real cemented sandstones. There appears to be an exponentially increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, flow rate had a linear relationship with permeability at a constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds and hence sanding. DEM with PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
Keywords: discrete element method, fluid flow, parametric study, sand production/bonds failure
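The flow rate-permeability validation mentioned above follows Darcy's law, Q = kAΔP/(μL): at a constant pressure head, Q is linear in k. A quick sketch of computing a flow rate and inverting it back to permeability (the numerical values are invented, not the study's column parameters):

```python
# Darcy's law for a porous column: Q = k * A * dP / (mu * L).
# All numerical values below are invented for illustration.

def darcy_flow_rate(k, area, dp, mu, length):
    """Volumetric flow rate (m^3/s) through a porous column."""
    return k * area * dp / (mu * length)

def permeability_from_flow(q, area, dp, mu, length):
    """Invert Darcy's law to estimate permeability (m^2)."""
    return q * mu * length / (area * dp)

k_true = 1e-12          # permeability, m^2 (~1 darcy, a permeable sandstone)
area, dp = 0.01, 1e5    # cross-section (m^2) and pressure drop (Pa)
mu, length = 1e-3, 0.1  # viscosity of water (Pa.s) and column length (m)

q = darcy_flow_rate(k_true, area, dp, mu, length)
print(f"Q = {q:.2e} m^3/s")
print(f"k recovered = {permeability_from_flow(q, area, dp, mu, length):.2e} m^2")
```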
Procedia PDF Downloads 323