Search results for: local construction techniques
1256 The Origins of Representations: Cognitive and Brain Development
Authors: Athanasios Raftopoulos
Abstract:
In this paper, an attempt is made to explain the evolution or development of humans' representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of “thing”) means to use in the mind or in an external medium a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. The examination of these factors should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence. These pieces of evidence should be synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols. Icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a sensory-motor, purely causal schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemas are tied to specific contexts of organism-environment interactions and are activated only within these contexts. For a representation of an object to be possible, this schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups that would benefit the group was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, by being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities that are required for having iconic and then symbolic representations were present in Homo erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increasing cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.
Keywords: mental representations, iconic representations, symbols, human evolution
1255 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces
Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov
Abstract:
Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as the risk of failure caused by large inertia forces; elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach for the definition of internal forces in links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account the distributed inertial and concentrated forces, is proposed. The relations between the intensity of distributed inertia forces and link weight and the geometrical, physical and kinematic characteristics are determined in this work. The distribution laws of inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, so that loads can be found at any point of the link. The approximation matrices of forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vector in calculated cross-sections, and also allow defining the physical characteristics of the element, i.e., the compliance matrix of discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continuous link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of elements. Consequently, it leads to a discrete model of elastic calculation of links of rod mechanisms. The discrete model of the elements of mechanisms and robotic systems and their discrete model as a whole are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as well as the equilibrium equations of the pin and rigid joints expressed through the required parameters of internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in links of mechanisms whose structure is statically determinate. For the determination of internal forces in statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, programs were developed in the MAPLE18 system by means of the proposed technique, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, with the intensity of transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted along the links as functions of their kinematic characteristics.
Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms
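As a minimal numerical sketch of the kind of internal-force distribution this technique targets (not the approximation-matrix formulation developed above), the following Python snippet integrates a trapezoidal distributed load along a simply supported link to obtain the shear force and bending moment diagrams; the link length and end intensities are assumed illustrative values.

```python
import numpy as np

# Assumed illustrative values: link length [m] and trapezoidal inertia-load
# intensities [N/m] at the two ends of the link axis.
L = 1.2
q0, q1 = 800.0, 300.0

x = np.linspace(0.0, L, 501)
q = q0 + (q1 - q0) * x / L                       # trapezoidal intensity along the axis

# Cumulative load and first moment of the load, integrated by the trapezoid rule.
dx = np.diff(x)
cum_load = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * dx)))
cum_moment = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] * x[1:] + q[:-1] * x[:-1]) * dx)))

# Support reactions of a simply supported link (statically determinate case).
R_L = cum_moment[-1] / L                         # reaction at x = L (moment equilibrium)
R_0 = cum_load[-1] - R_L                         # reaction at x = 0 (force equilibrium)

# Internal shear force and bending moment at every cross-section.
V = R_0 - cum_load
M = R_0 * x - (x * cum_load - cum_moment)

print(f"Max |V| = {np.abs(V).max():.1f} N, max |M| = {np.abs(M).max():.1f} N*m")
```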
1254 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process to understand and transform reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by the use of technology, that pay attention to model creation rather than only to executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should involve a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators. These are model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was made using audio and video recordings, as well as the individual and team reports of the students. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). With the results obtained through the MEA, four obstacles were identified to understanding and applying the binomial distribution: the first one was the resistance of the student to move from the linear to the probabilistic model; the second one, the difficulty of visualizing (inferring) the behavior of the population through the sample data; the third one, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and the fourth one, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows both the interventions and the MEA to be modified, in such a way that students may themselves identify erroneous solutions when they carry out the MEA. The MEA also proved to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
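The abstract mentions software support for plotting binomial distributions and exploring sample sizes; a hedged Python sketch of that kind of sample-based acceptance decision is shown below (SciPy stands in for the RStudio environment used in the course, and the defect rates and sampling plans are assumed for illustration).

```python
from scipy.stats import binom

# Assumed illustrative scenario: a batch of concrete elements is accepted if a
# sample of n elements contains at most c defective ones.  Defect rates are hypothetical.
acceptable_p, poor_p = 0.05, 0.20      # assumed defect rates for a good / poor batch
plans = [(10, 1), (20, 2), (50, 5)]    # (sample size n, acceptance number c)

for n, c in plans:
    accept_good = binom.cdf(c, n, acceptable_p)   # P(accept | good batch)
    accept_poor = binom.cdf(c, n, poor_p)         # P(accept | poor batch)
    print(f"n={n:2d}, c={c}: P(accept good)={accept_good:.3f}, "
          f"P(accept poor)={accept_poor:.3f}")
```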
1253 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within the spatial-ecological model gives attention to urban environments in design review management so as to comply with the Earth system. Natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for engineering topology as employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and concentration of chlorine gas in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted with relevant conclusions. The spatial-ecological model also predicts the distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input–output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analysis were conducted. Their unique structure is balanced within “equilibrium patterns”, such as the biosphere, and collects a composite index of many distributed feedback flows. These dynamic structures are related to their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach of systems ecology.
Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
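A minimal sketch of the Gaussian plume calculation referred to above is given below; the release rate, wind speed, release height and power-law dispersion coefficients are placeholder assumptions, not the IAEA-derived source term or the stability-class parameterisation used in the study.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a_y=0.08, b_y=0.90, a_z=0.06, b_z=0.85):
    """Steady-state Gaussian plume concentration [kg/m^3] with ground reflection.

    x, y, z : downwind, crosswind and vertical coordinates [m]
    Q       : continuous release rate [kg/s]
    u       : mean wind speed [m/s]
    H       : effective release height [m]
    The sigma_y, sigma_z power laws are illustrative stand-ins for a
    stability-class parameterisation.
    """
    sigma_y = a_y * x**b_y
    sigma_z = a_z * x**b_z
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Assumed example: 2 kg/s chlorine release at 10 m height, 3 m/s wind,
# ground-level centreline concentration at several downwind distances.
for x in (200.0, 500.0, 1000.0, 2000.0):
    c = gaussian_plume(x, 0.0, 0.0, Q=2.0, u=3.0, H=10.0)
    print(f"x = {x:6.0f} m : C = {c:.3e} kg/m^3")
```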
1252 Land Transfer for New Township and Its Impact from Dwellers' Point of View: A Case Study of New Town Kolkata
Authors: Subhra Chattopadhyay
Abstract:
New towns are usually built at the city periphery with an eye to accommodating the overspill population and functions of the city. ‘New towns are self-sufficient planned towns having a full range of urban economic and social activities, so they can provide employment for all of their inhabitants while a balanced, self-contained social community can be maintained’. In third-world countries, new towns often emerge from scratch, i.e., on areas with no urban background, and therefore require a massive land conversion from rural to urban. This paper aims to study the implications of such land title transfer for rural sustainability with a case study at Jatragachi, New Town Kolkata. The broad objectives of this study are to understand 1) new changes in this area, such as i) changes in land use, ii) demographic changes, and iii) occupational changes of the local people, and 2) their view of new town planning. Major observations are stated below. The studied area was completely rural until recent years and is now at the heart of New Town Kolkata. Though this area is now under the jurisdiction of the New Town Kolkata Development Authority (NKDA), it is still administered by rural self-government. This creates administrative confusion and misuse of public capital. It is observed in this study that cultivation was the mainstay of livelihood for the majority of residents until the recent past. There was a dramatic rise in the irrigated area in the 1990s, pointing to agricultural prosperity. The area achieved the highest rice productivity in the district. The percentage of marginal workers dropped significantly. In addition, the rising women’s literacy rate found in this rural mouza indicates constant social progress. Through land conversion, this flourishing agricultural land has been transformed into an urban area with highly sophisticated uses. Such development may satisfy the educated urban elite, but the dwellers of the area suffer a lot. They bear the cost of new town planning through the loss of their assured food and income as well as their place identity. The number of marginal workers has increased abruptly, and the growth of female literacy has dropped. The area loses its functional linkages with its surroundings and fails to prove its actual growth potentiality. The physical linkages (like past roads and irrigation infrastructure) that had developed over time to support the economy have become defunct. The ecological services that were provided by the agricultural fields are lost. The historicity of this original site is demolished. The losses of the inhabitants of the area who have been evicted are also immense and cannot be materially compensated. Therefore, the ethos of such new town planning at the expense of rural sustainability is under question. The need for an integrated approach to rural and urban development planning is felt in this study.
Keywords: new town, sustainable development, growth potentiality, land transfer
1251 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure
Abstract:
The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. The uncertainties in the concentrations of fuel constituents such as H2, CO and CH4 may lead to unpredictable combustion performance, combustion instabilities and hot spots, which may deteriorate and damage the combustion hardware. Numerical modelling and simulations can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, while the evaluation of variability in concentrations is expensive. To be more specific, questions such as ‘what is the burning velocity of bio-syngas at a specific equivalence ratio?’ have been answered either experimentally or numerically, while questions such as ‘what is the likely burning velocity when the precise concentrations of the bio-syngas constituents are unknown, but the concentration ranges are prescribed?’ have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel compositions. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method relies on representing random variables (combustion performance measures) with orthogonal polynomials, such as Legendre polynomials (or Hermite polynomials for Gaussian variables). The PCE constructed via Galerkin projection provides easy access to global sensitivities such as main, joint and total Sobol indices. In this study, the impacts of fuel compositions on the combustion of bio-syngas fuel mixtures (adiabatic flame temperature and laminar flame speed) are presented by invoking this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism. Guidance on reducing combustion instability from the upstream biomass gasification process is provided by quantifying the significant contributions of composition variations to the variance of the physicochemical properties of bio-syngas combustion. It was found that flame speed is very sensitive to hydrogen variability in bio-syngas, and reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the percentage concentration of hydrogen in bio-syngas can be conducted to guide the safer use of bio-syngas.
Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification
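The sketch below illustrates, under assumed inputs, the non-intrusive PCE workflow named above: a toy flame-speed-like response of two uniform inputs is fitted with a tensor-product Legendre basis by least squares, and main and total Sobol indices are recovered from the coefficients. The model form, degree and sample size are illustrative assumptions, not the San Diego mechanism calculations of the study.

```python
import numpy as np
from itertools import product
from scipy.special import eval_legendre

rng = np.random.default_rng(0)

# Hypothetical response surface: a flame-speed-like quantity depending on two
# uncertain inputs (e.g. scaled H2 and CH4 fractions), both uniform on [-1, 1].
def model(xi):
    x1, x2 = xi[:, 0], xi[:, 1]
    return 0.40 + 0.25 * x1 + 0.05 * x2 + 0.08 * x1**2 + 0.02 * x1 * x2

degree, n_samples = 3, 400
multi_indices = [(i, j) for i, j in product(range(degree + 1), repeat=2) if i + j <= degree]

# Non-intrusive PCE: least-squares fit of a tensor-product Legendre basis.
xi = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
y = model(xi)
Psi = np.column_stack([eval_legendre(i, xi[:, 0]) * eval_legendre(j, xi[:, 1])
                       for i, j in multi_indices])
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# E[P_i^2] = 1/(2i+1) for uniform inputs, so each term contributes
# c_ij^2 / ((2i+1)(2j+1)) to the variance; Sobol indices follow from this split.
norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in multi_indices])
partial_var = coeffs**2 * norms
nonconst = np.array([idx != (0, 0) for idx in multi_indices])
total_var = partial_var[nonconst].sum()

def sobol(which, total=False):
    keep = [k for k, idx in enumerate(multi_indices)
            if idx[which] > 0 and (total or idx[1 - which] == 0)]
    return partial_var[keep].sum() / total_var

print("PCE mean / variance :", coeffs[multi_indices.index((0, 0))].round(4), total_var.round(5))
print("Main Sobol indices  :", round(sobol(0), 3), round(sobol(1), 3))
print("Total Sobol indices :", round(sobol(0, True), 3), round(sobol(1, True), 3))
```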
1250 Impact of Collieries on Groundwater in Damodar River Basin
Authors: Rajkumar Ghosh
Abstract:
The industrialization of coal mining and related activities has a significant impact on groundwater in the areas surrounding the Damodar River. The Damodar River basin, located in eastern India, is known as the "Ruhr of India" due to its abundant coal reserves and extensive coal mining and industrial operations. One of the major consequences of collieries for groundwater is the contamination of water sources. Coal mining activities often involve the excavation and extraction of coal through underground or open-pit mining methods. These processes can release various pollutants and chemicals into the groundwater, including heavy metals, acid mine drainage, and other toxic substances. As a result, the quality of groundwater in the Damodar River region has deteriorated, making it unsuitable for drinking, irrigation, and other purposes. The high concentrations of heavy metals, such as arsenic, lead, and mercury, in the groundwater have posed severe health risks to the local population. Prolonged exposure to contaminated water can lead to various health problems, including skin diseases, respiratory issues, and even long-term ailments like cancer. The contamination has also affected the aquatic ecosystem, harming fish populations and other organisms dependent on the river's water. Moreover, the excessive extraction of groundwater for industrial processes, including coal washing and cooling systems, has resulted in a decline in the water table and depletion of aquifers. This has led to water scarcity and reduced availability of water for agricultural activities, impacting the livelihoods of farmers in the region. Efforts have been made to mitigate these issues through the implementation of regulations and improved industrial practices. However, the historical legacy of coal industrialization continues to impact the groundwater in the Damodar River area. Remediation measures, such as the installation of water treatment plants and the promotion of sustainable mining practices, are essential to restore the quality of groundwater and ensure the well-being of the affected communities. In conclusion, coal industrialization in the Damodar River surroundings has had a detrimental impact on groundwater. This research focuses on soil subsidence induced by the over-exploitation of groundwater for dewatering open-pit coal mines. Soil degradation happens in arid and semi-arid regions as a result of land subsidence in coal mining regions, which reduces soil fertility. Depletion of aquifers, contamination, and water scarcity are some of the key challenges resulting from these activities. It is crucial to prioritize sustainable mining practices, environmental conservation, and the provision of clean drinking water to mitigate the long-lasting effects of collieries on the groundwater resources in the region.
Keywords: coal mining, groundwater, soil subsidence, water table, Damodar River
1249 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to assist students in preparing themselves for the math courses that are essential for Engineering degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on the approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created for a type of data that follows a binomial distribution, using the R programming language. Model 1 is based on a variable's p-value being less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After using Principal Component Analysis, the main components represented on the y-axis are the approval of the Introductory Mathematical Course, and on the x-axis the approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered students' activity, performed the best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, a student's engagement in school activities will continue for three years after the approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses. Passing the Math3 course does not have any effect on the student's activity. Concerning academic progress, the best fit is Model 1. It has an AUC of 0.56 and an accuracy rate of 91%. The model says that if the student passes the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect the student's activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1. It has an AUC of 0.76 and 98% accuracy. The model shows that if students pass the Introductory Mathematical Course, it will help them to pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue following the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students to pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three Math courses, but they can successfully assess courses in the university curriculum.
Keywords: machine learning, engineering, university, education, computational models
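As a hedged illustration of the binomial GLM and AUC workflow described above (scikit-learn's logistic regression standing in for the R GLM/stepAIC procedure, and synthetic data with assumed effect sizes standing in for the 5135-student dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the student dataset: hypothetical binary predictors
# (introductory course passed, Math1 passed, Math2 passed) and a binary outcome
# (student still active three years later).  All effect sizes are assumed.
n = 2000
intro = rng.binomial(1, 0.6, n)
math1 = rng.binomial(1, 0.35 + 0.3 * intro)      # intro course helps Math1
math2 = rng.binomial(1, 0.30 + 0.3 * intro)      # intro course helps Math2
logit = -1.0 + 1.4 * math1 + 1.2 * math2         # activity depends on Math1/Math2
active = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([intro, math1, math2])
X_tr, X_te, y_tr, y_te = train_test_split(X, active, test_size=0.3, random_state=0)

glm = LogisticRegression().fit(X_tr, y_tr)       # binomial GLM with logit link
proba = glm.predict_proba(X_te)[:, 1]
print("AUC      :", round(roc_auc_score(y_te, proba), 3))
print("Accuracy :", round(accuracy_score(y_te, glm.predict(X_te)), 3))
print("Coefs    :", dict(zip(["intro", "math1", "math2"], glm.coef_[0].round(2))))
```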
1248 Biodeterioration of Historic Parks of UK by Algae
Authors: Syeda Fatima Manzelat
Abstract:
This chapter investigates the biodeterioration of parks in the UK caused by lichens, focusing on Campbell Park and Great Linford Manor Park in Milton Keynes. The study first isolates and identifies potent biodeteriogens responsible for potential biodeterioration in these parks, enumerating and recording different classes and genera of lichens known for their biodeteriorative properties. It then examines the implications of lichens for biodeterioration at historic sites within these parks, considering impacts on historic structures, the environment, and associated health risks. Conservation strategies and preventive measures are discussed before concluding. Lichens, characterized by a symbiotic association between a fungus and an alga, thrive on various surfaces including building materials, soil, rock, wood, and trees. The fungal component provides structure and protection, while the algal partner performs photosynthesis. Lichens collected from the park sites, such as Xanthoria, Cladonia, and Arthonia, were observed affecting the historic walls, objects, and trees. Their biodeteriorative impacts were visible to the naked eye, contributing to aesthetic and structural damage. The study highlights the role of lichens as bioindicators of pollution, sensitive to changes in air quality. The presence and diversity of lichens provide insights into the air quality and pollution levels in the parks. However, lichens also pose health risks, with certain species causing respiratory issues, allergies, skin irritation, and other toxic effects in humans and animals. Conservation strategies discussed include regular monitoring, biological and chemical control methods, physical removal, and preventive cleaning. The study emphasizes the importance of a multifaceted, multidisciplinary approach to managing lichen-induced biodeterioration. Future management practices could involve advanced techniques such as eco-friendly biocides and self-cleaning materials to effectively control lichen growth and preserve historic structures. In conclusion, this chapter underscores the dual role of lichens as agents of biodeterioration and indicators of environmental quality. Comprehensive conservation management approaches, encompassing monitoring, targeted interventions, and advanced conservation methods, are essential for preserving the historic and natural integrity of parks like Campbell Park and Great Linford Manor Park.
Keywords: biodeterioration, historic parks, algae, UK
1247 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments
Authors: Rohit Dey, Sailendra Karra
Abstract:
This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) that is shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their locations and the signals from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame, through the unification of the relative coordinate frames of individual robots when they are within communication range, thereby making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy-walk random exploration until it comes into range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their coordinate frames, based on which we unify their coordinate frames. Now they can share information about the areas that were already explored, information about the surroundings, and the target signal at their locations to make decisions about their future movement based on the search algorithm. During the process of exploration, there can be several small groups of robots having their own coordinate systems, but eventually all the robots are expected to come under one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that, running our modified Particle Swarm Optimization (PSO) without global information, we can still achieve desired results that are comparable to basic PSO working with GPS. In the full paper, we plan to present a comparison of different strategies to unify the coordinate system and to implement them on other bio-inspired algorithms to work in GPS-denied environments.
Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems
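A minimal 2-D sketch of the frame-unification step is given below, assuming each robot measures the other's range and bearing in its own frame; the noise handling and multi-robot merging described in the abstract are not reproduced, and all numerical values are illustrative.

```python
import numpy as np

def unify_frames(range_ab, bearing_ab, bearing_ba):
    """Pose of robot B's frame expressed in robot A's frame (planar sketch).

    range_ab   : distance A measures to B
    bearing_ab : bearing of B measured in A's frame [rad]
    bearing_ba : bearing of A measured in B's frame [rad]
    Returns (R, t) such that a point p_B in B's frame maps to A's frame as R @ p_B + t.
    """
    # Position of B in A's frame follows directly from A's measurement.
    t = range_ab * np.array([np.cos(bearing_ab), np.sin(bearing_ab)])
    # The B->A direction appears at angle bearing_ab + pi in A's frame and at
    # bearing_ba in B's frame, which fixes the relative rotation.
    theta = (bearing_ab + np.pi) - bearing_ba
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R, t

# Example: B reports a target position in its own frame; A re-expresses it.
R, t = unify_frames(range_ab=5.0, bearing_ab=np.deg2rad(30), bearing_ba=np.deg2rad(-140))
target_in_B = np.array([2.0, 1.0])
target_in_A = R @ target_in_B + t
print("Relative rotation [deg]:", round(float(np.rad2deg(np.arctan2(R[1, 0], R[0, 0]))), 1))
print("Target in A's frame    :", target_in_A.round(2))
```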
1246 Surgical Hip Dislocation of Femoroacetabular Impingement: Survivorship and Functional Outcomes at 10 Years
Authors: L. Hoade, O. O. Onafowokan, K. Anderson, G. E. Bartlett, E. D. Fern, M. R. Norton, R. G. Middleton
Abstract:
Aims: Femoroacetabular impingement (FAI) was first recognised as a potential driver of hip pain at the turn of the last millennium. While there is an increasing trend towards surgical management of FAI by arthroscopic means, open surgical hip dislocation and debridement (SHD) remains the gold standard of care in terms of reported outcome measures. (1) Long-term functional and survivorship outcomes of SHD as a treatment for FAI are yet to be sufficiently reported in the literature. This study sets out to help address this imbalance. Methods: We undertook a retrospective review of our institutional database for all patients who underwent SHD for FAI between January 2003 and December 2008. A total of 223 patients (241 hips) were identified and underwent a ten-year review with a standardised radiograph and a patient-reported outcome measures questionnaire. The primary outcome measure of interest was survivorship, defined as progression to total hip arthroplasty (THA). Negative predictive factors were analysed. Secondary outcome measures of interest were survivorship to further (non-arthroplasty) surgery, functional outcomes as reflected by patient-reported outcome measure (PROM) scores, and whether a learning curve could be identified. Results: The final cohort consisted of 131 females and 110 males, with a mean age of 34 years. There was an overall native hip joint survival rate of 85.4% at ten years. Those who underwent a THA were significantly older at initial surgery and had radiographic evidence of preoperative osteoarthritis and pre- and post-operative acetabular undercoverage. In those who had not progressed to THA, the average Non-Arthritic Hip Score and Oxford Hip Score at ten-year follow-up were 72.3% and 36/48, respectively, and 84% still deemed their surgery worthwhile. A learning curve was found to exist that was predicated on case selection rather than surgical technique. Conclusion: This is only the second study to evaluate the long-term outcomes (beyond ten years) of SHD for FAI and the first outside the originating centre. Our results suggest that, with correct patient selection, this remains an operation with worthwhile outcomes at ten years. How the results of open surgery compare to those of arthroscopy remains to be answered. While these results precede the advent of collision software modelling tools, these data help set a benchmark for future comparison of other techniques' effectiveness at the ten-year mark.
Keywords: femoroacetabular impingement, hip pain, surgical hip dislocation, hip debridement
1245 Engage, Connect, Empower: Agile Approach in the University Students' Education
Authors: D. Bjelica, T. Slavinski, V. Vukimrovic, D. Pavlovic, D. Bodroza, V. Dabetic
Abstract:
Traditional methods and techniques used in higher education may significantly shape university students' perceptions of the quality of the teaching process. Students’ satisfaction with the university experience may be affected by the chosen educational approaches. Contemporary project management trends recognize the benefits of agile approaches, so modern practice highlights their usage, especially in the IT industry. A key research question concerns the possibility of applying agile methods in youth education. As the agile methodology pinpoints iterative-incremental delivery of results, its employment could be remarkably fruitful in education. This paper demonstrates the application of the agile concept in university students’ education through the continuous delivery of student solutions. Therefore, based on the fundamental values and principles of the agile manifesto, the paper analyzes students' performance and the lessons learned in their encounter with the agile environment. The research is based on qualitative and quantitative analysis that includes sprints, as the preparation and realization of student tasks in shorter iterations. Consequently, the performance of student teams is monitored through iterations, as is the process of adaptive planning and realization. Grounded theory methodology has been used in this research, as well as descriptive statistics and the Mann-Whitney and Kruskal-Wallis tests for group comparison. The developed constructs of the model are showcased through qualitative research, then validated through a pilot survey, and eventually tested as a concept in the final survey. The paper highlights the variability of educational curricula based on university students' feedback, which is collected at the end of every sprint and indicates inconsistencies in students' satisfaction with the approaches applied in education. The values delivered by the lecturers are also continuously monitored and thus prioritized according to students' requests. The minimum viable product, as the early delivery of results, is particularly emphasized in the implementation process. The paper offers both theoretical and practical implications. This research contains exceptional lessons that may be applied by educational institutions in curriculum creation processes, or by lecturers in curriculum design and teaching. On the other hand, they can be beneficial for increasing university students' satisfaction with respect to teaching styles, gained knowledge, or educational content.
Keywords: academic performances, agile, higher education, university students' satisfaction
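The group comparisons mentioned above rely on the Mann-Whitney and Kruskal-Wallis tests; a short sketch with made-up satisfaction scores (SciPy standing in for the statistics package actually used) shows how such comparisons are run.

```python
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical satisfaction scores (1-10) collected at the end of a sprint for
# three groups taught with different approaches; all values are made up.
traditional = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6]
agile       = [7, 8, 6, 9, 7, 8, 7, 6, 8, 9]
hybrid      = [6, 7, 6, 7, 8, 6, 7, 7, 6, 8]

u_stat, p_two_groups = mannwhitneyu(traditional, agile, alternative="two-sided")
h_stat, p_three_groups = kruskal(traditional, agile, hybrid)

print(f"Mann-Whitney U={u_stat:.1f}, p={p_two_groups:.4f}")
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_three_groups:.4f}")
```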
1244 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two-year period, included in-depth field evaluation of both the groundwater and surface water systems, and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate the contaminant concentrations over space within the river. The modeling results helped demonstrate that, because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
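A back-of-the-envelope sketch of the mass-loading and full-mixing dilution calculation that underlies such an assessment is shown below; the discharge, concentration and river-flow values are placeholders, and the study itself relied on MODFLOW/MT3DMS and a bathymetry-based dispersion model rather than this simple mass balance.

```python
# Assumed placeholder values (not the study's measured data).
q_gw = 150.0          # average daily groundwater discharge to the river [m^3/day]
c_gw = 2.5            # representative leachate-indicator concentration [mg/L]
q_river = 2.0e6       # tidally averaged river flow past the site [m^3/day]

# Daily contaminant mass loading delivered at the shoreline.
mass_loading_kg_per_day = q_gw * c_gw / 1000.0     # (m^3/day * mg/L -> kg/day)

# Far-field concentration assuming full cross-sectional mixing in the river.
c_fully_mixed = q_gw * c_gw / (q_gw + q_river)     # mg/L

print(f"Mass loading      : {mass_loading_kg_per_day:.3f} kg/day")
print(f"Fully mixed conc. : {c_fully_mixed:.2e} mg/L")
```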
1243 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept, and it processes continuous GBSAR images unit by unit. Images within a window form a basic unit. By taking this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected as it keeps temporarily coherent pixels that are present only in certain units but not over the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images in a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
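The sketch below illustrates, on synthetic data, two of the operations mentioned above: coherent temporal averaging of single-look complex images within a campaign and conversion of interferometric phase to line-of-sight displacement. The wavelength, noise level, displacement and sign convention are assumed, and the full chain (coherence estimation, filtering, pixel selection) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.0176          # assumed Ku-band GBSAR wavelength [m]
n_images, rows, cols = 20, 64, 64

# Synthetic single-look complex (SLC) stack for one campaign: a common scene
# phase plus independent phase noise in each acquisition.
scene_phase = rng.uniform(-np.pi, np.pi, (rows, cols))
stack1 = np.exp(1j * (scene_phase + 0.3 * rng.standard_normal((n_images, rows, cols))))
averaged_slc1 = stack1.mean(axis=0)              # coherent temporal averaging

# A second campaign with a small line-of-sight displacement of the scene.
displacement_true = 0.0008                       # 0.8 mm, assumed
phase_shift = -4.0 * np.pi * displacement_true / wavelength
stack2 = np.exp(1j * (scene_phase + phase_shift
                      + 0.3 * rng.standard_normal((n_images, rows, cols))))
averaged_slc2 = stack2.mean(axis=0)

# Interferogram between campaigns and phase -> displacement conversion.
interferogram = averaged_slc2 * np.conj(averaged_slc1)
dphi = np.angle(interferogram.mean())            # spatially averaged phase
displacement_est = -wavelength * dphi / (4.0 * np.pi)
print(f"Estimated LOS displacement: {displacement_est * 1000:.2f} mm")
```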
1242 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of an electric field in an electrical mobility spectrometer (EMS) reveal the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and thereby calculate the signal reported by each electrometer. According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the 'average amount of information contained in an event, sample or character extracted from a data stream'. Evaluating the responses (signals) obtained via various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of injected particles was the modified configuration. It was also the one that had the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared to the exact size distributions.
Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy
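As a hedged sketch of the two entropy measures named above, the snippet below computes the Shannon entropy of each size bin's normalised channel response and a von Neumann entropy of a density-matrix normalisation of the transfer matrix; the matrix values, and the particular way the density matrix is formed from the transfer matrix, are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy (bits) of a discrete distribution p (normalised inside)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    p = p[p > eps]
    return float(-(p * np.log2(p)).sum())

def von_neumann_entropy(T, eps=1e-12):
    """Von Neumann entropy S = -tr(rho ln rho) of rho = T T^T / tr(T T^T)."""
    rho = T @ T.T
    rho = rho / np.trace(rho)
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > eps]
    return float(-(evals * np.log(evals)).sum())

# Illustrative transfer matrix: rows = electrometer channels, columns = particle
# size bins; entry (i, j) ~ fraction of size-bin-j particles reaching channel i.
T = np.array([[0.70, 0.20, 0.05, 0.00],
              [0.25, 0.55, 0.20, 0.05],
              [0.05, 0.20, 0.55, 0.25],
              [0.00, 0.05, 0.20, 0.70]])

for j in range(T.shape[1]):
    print(f"size bin {j}: Shannon entropy of channel response = "
          f"{shannon_entropy(T[:, j]):.3f} bits")
print(f"von Neumann entropy of this configuration: {von_neumann_entropy(T):.3f} nats")
```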
1241 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide
Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh
Abstract:
Membrane treatment for desalination and wastewater treatment is one of the promising solutions to affordable clean water. It is a developing technology throughout the world and is considered the most effective and economical method available. However, the limitations of membranes’ mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes has been the focus of most studies in the water treatment and desalination sector, to find new materials that can improve the separation efficiency while reducing membrane fouling, which is the most important challenge in this field. Graphene oxide (GO) is one of the materials that have been recently investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. High-oxidation-degree GO was synthesized using a modified Hummers' method. The synthesized GO was characterized using different analytical techniques, including Fourier transform infrared spectroscopy with a universal attenuated total reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. The CHNSO analysis showed a high oxidation degree of GO, represented by its oxygen content (50 wt.%). Then, ultrafiltration PSF membranes incorporating GO were fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM) and showed a clear effect of GO on the physical structure and morphology of PSF. The water contact angle of the membranes was measured and showed better hydrophilicity of the GO membranes compared to pure PSF, owing to the hydrophilic nature of GO. The separation properties of the prepared membranes were investigated using a cross-flow membrane system. Antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants. It was found that GO-based membranes exhibit higher antifouling properties compared to pure PSF. When using BSA, the flux recovery ratio (FRR %) increased from 65.4 ± 0.9 % for pure PSF to 84.0 ± 1.0 % with a loading of 0.05 wt.% GO in PSF. When using HA as a model foulant, the FRR increased from 87.8 ± 0.6 % to 93.1 ± 1.1 % with 0.02 wt.% of GO in PSF. The pure water permeability (PWP) decreased with GO loading, from 181.7 L·m⁻²·h⁻¹·bar⁻¹ for pure PSF to 181.1 and 157.6 L·m⁻²·h⁻¹·bar⁻¹ with 0.02 and 0.05 wt.% GO, respectively. It can be concluded from the obtained results that incorporating a low loading of GO could enhance the antifouling properties of PSF, hence improving its lifetime and reuse.
Keywords: antifouling properties, GO based membranes, hydrophilicity, polysulfone, ultrafiltration
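The antifouling figures quoted above rest on the flux recovery ratio, FRR = 100 × (pure-water flux after cleaning) / (initial pure-water flux); a tiny sketch of that calculation with assumed permeate readings follows.

```python
def flux(volume_l, area_m2, time_h):
    """Permeate flux in L/(m^2*h) from collected volume, membrane area and time."""
    return volume_l / (area_m2 * time_h)

# Assumed illustrative readings for one membrane coupon (not the study's data).
j_w1 = flux(volume_l=0.45, area_m2=0.0014, time_h=1.0)   # initial pure-water flux
j_p  = flux(volume_l=0.30, area_m2=0.0014, time_h=1.0)   # flux with BSA foulant
j_w2 = flux(volume_l=0.38, area_m2=0.0014, time_h=1.0)   # pure-water flux after cleaning

frr = 100.0 * j_w2 / j_w1                                 # flux recovery ratio [%]
total_flux_decline = 100.0 * (1.0 - j_p / j_w1)
print(f"Jw1 = {j_w1:.1f}, Jw2 = {j_w2:.1f} L/m^2/h, FRR = {frr:.1f} %, "
      f"total flux decline = {total_flux_decline:.1f} %")
```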
1240 Correlations and Impacts Of Optimal Rearing Parameters on Nutritional Value Of Mealworm (Tenebrio Molitor)
Authors: Fabienne Vozy, Anick Lepage
Abstract:
Insects display high nutritional value, low greenhouse gas emissions, low land use requirements and high food conversion efficiency. They can contribute to the food chain and be one of many solutions to protein shortages. Currently, in North America, nutritional entomology is under-developed, and its benefits need to be better understood in order to convince large-scale producers and consumers (both for human and agricultural needs). As such, large-scale production of mealworms offers a promising alternative to traditional sources of protein and fatty acids. To proceed in an orderly manner, more data on the nutritional values of insects need to be collected, namely to: a) evaluate the diets of insects to improve their dietary value; b) test the breeding conditions to optimize yields; and c) evaluate the use of by-products and organic residues as food sources. Among the featured technical parameters, relative humidity (RH) percentage and temperature, optimal substrates and hydration sources are critical elements, thus establishing potential benchmarks to optimize the conversion rates of protein and fatty acids. This research aims to establish the combination of the most influential rearing parameters with local food residues and to correlate the findings with the nutritional value of the harvested larvae. For each replicate, 125 adults of the same monthly age are randomly selected from the mealworm breeding pool and then placed to oviposit in growth chambers preset at 26°C and 65% RH. Adults are removed after 7 days. Larvae are harvested upon the apparition of the first nymphosis signs, and the batches are analyzed for their nutritional values using wet chemistry analysis. The first sample analyses include the total weight of both fresh and dried larvae, residual humidity, crude proteins (CP%), and crude fats (CF%). Further analyses are scheduled to include soluble proteins and fatty acids. Although consistent with previously published data, the preliminary results show no significant differences between treatments for any type of analysis. The nutritional properties of each substrate combination have not yet allowed discrimination of the most effective residue recipe. Technical issues, such as the particle size of the various substrate combinations and larvae screen compatibility, remain to be investigated, since they induced a variable percentage of larvae lost upon harvesting. Addressing these methodological issues is key to developing a standardized, efficient procedure. The aim is to provide producers with easily reproducible conditions, without incurring excessive additional expenditure on their part in terms of equipment and workforce.
Keywords: entomophagy, nutritional value, rearing parameters optimization, Tenebrio molitor
1239 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data to characterize the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated for a single edge notch bend (SENB) geometry. A total of 9 small-scale specimens with different crack length to specimen depth ratios were prepared and tested using single edge notch bending (SENB). ASTM E1820 and BS 7448 provide testing procedures to construct the fracture resistance curve (load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from the test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that depends on a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This might be of interest for developing more accurate strain-based damage models. The results of laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation in pipeline steel, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
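Resistance curves of this kind are commonly fitted with a power law, e.g. CTOD = C(Δa)^m; the sketch below fits such a curve to made-up (Δa, CTOD) pairs by log-log least squares, standing in for, not reproducing, the improved regression-fitted resistance function described above.

```python
import numpy as np

# Made-up (crack growth [mm], CTOD [mm]) pairs standing in for DIC-derived data.
delta_a = np.array([0.15, 0.30, 0.55, 0.80, 1.10, 1.50, 2.00])
ctod    = np.array([0.21, 0.30, 0.41, 0.50, 0.58, 0.69, 0.80])

# Power-law resistance curve CTOD = C * (delta_a)^m, fitted in log-log space.
slope, intercept = np.polyfit(np.log(delta_a), np.log(ctod), 1)
C, m = np.exp(intercept), slope
print(f"Fitted resistance curve: CTOD = {C:.3f} * (delta_a)^{m:.3f}")

# Predicted CTOD at a given amount of stable crack growth.
print(f"CTOD at delta_a = 1.0 mm: {C * 1.0**m:.3f} mm")
```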
1238 Investigating the Impact of Enterprise Resource Planning System and Supply Chain Operations on Competitive Advantage and Corporate Performance (Case Study: Mamot Company)
Authors: Mohammad Mahdi Mozaffari, Mehdi Ajalli, Delaram Jafargholi
Abstract:
The main purpose of this study is to investigate the impact of an ERP (Enterprise Resource Planning) system and SCM (Supply Chain Management) on the competitive advantage and performance of Mamot Company. The methods for collecting information in this study are library studies and field research. A questionnaire was used to collect the data needed to determine the relationships between the variables of the research; it contains 38 questions. The research is applied in nature. The statistical population of this study consists of managers and experts who are familiar with the SCM system and ERP and comprises 210 individuals. Simple random sampling was used, with a sample size of 136 people. The reliability of the distributed questionnaire was evaluated using Cronbach's alpha, and its value is above 0.70, which confirms reliability. Face validity was used to assess the validity of the questionnaire, and its validity is confirmed by the fact that the impact score is greater than 1.5. In the present study, univariate analysis was used for indicators of central tendency, dispersion and deviation from symmetry, giving a general picture of the population. Bivariate analysis was also performed to test the hypotheses and measure the correlation coefficients between variables, using SPSS software. Finally, multivariate analysis with statistical techniques related to SPLS structural equations was used to determine the effects of the independent variables on the dependent variables of the research and to establish the structural relationships between the variables. The results of the tests of the research hypotheses indicate that: 1. Supply chain management practices have a positive impact on the competitive advantage of the Mamot industrial complex. 2. Supply chain management practices have a positive impact on the performance of the Mamot industrial complex. 3. The enterprise resource planning system has a positive impact on the performance of the Mamot industrial complex. 4. The enterprise resource planning system has a positive impact on Mamot's competitive advantage. 5. Competitive advantage has a positive impact on the performance of the Mamot industrial complex. 6. The enterprise resource planning system has a positive impact on supply chain management in the Mamot industrial complex. The above results indicate that the enterprise resource planning system and supply chain management have an impact on the competitive advantage and corporate performance of Mamot Company.
Keywords: enterprise resource planning, supply chain management, competitive advantage, Mamot company performance
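Cronbach's alpha, used above to check questionnaire reliability, can be computed directly from an item-response matrix; the sketch below uses made-up Likert responses, not the Mamot survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Made-up Likert responses (rows = respondents, columns = questionnaire items).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 4, 3, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```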
1237 Response of Planktonic and Aggregated Bacterial Cells to Water Disinfection with Photodynamic Inactivation
Authors: Thayse Marques Passos, Brid Quilty, Mary Pryce
Abstract:
Interest in developing alternative techniques to obtain safe water, free from pathogens and hazardous substances, has been growing in recent times. The photodynamic inactivation (PDI) of microorganisms is a promising, ecologically friendly, and multi-target approach for water disinfection. It uses visible light as an energy source combined with a photosensitiser (PS) to transfer energy/electrons to a substrate or to molecular oxygen, generating reactive oxygen species, which cause cidal effects towards cells. PDI has mainly been used in clinical studies, and investigation of its application to water disinfection is relatively recent. The majority of studies use planktonic cells. However, in their natural environments, bacteria quite often do not occur as freely suspended (planktonic) cells but in cell aggregates that are either freely floating or attached to surfaces as biofilms. Microbes can form aggregates and biofilms as a strategy to protect themselves from environmental stress. As aggregates, bacteria have better metabolic function, communicate more efficiently, and are more resistant to biocidal compounds than their planktonic forms. Among the bacteria that are able to form aggregates are members of the genus Pseudomonas, a very diverse group widely distributed in the environment. Pseudomonas species can form aggregates/biofilms in water and can cause particular problems in water distribution systems. The aim of this study was to evaluate the effectiveness of photodynamic inactivation in killing a range of planktonic cells, including Escherichia coli DSM 1103, Staphylococcus aureus DSM 799, Shigella sonnei DSM 5570, Salmonella enterica and Pseudomonas putida DSM 6125, and aggregating cells of Pseudomonas fluorescens DSM 50090 and Pseudomonas aeruginosa PAO1. The experiments were performed in glass Petri dishes containing the bacterial suspension and the photosensitiser, irradiated with a multi-LED source (wavelengths 430 nm and 660 nm) for different time intervals. The responses of the cells were monitored using the pour plate technique and confocal microscopy. The study showed that bacteria belonging to the Pseudomonas group tend to be more tolerant to PDI. While E. coli, S. aureus, S. sonnei and S. enterica required a dosage ranging from 39.47 J/cm2 to 59.21 J/cm2 for a 5 log reduction, Pseudomonads needed a dosage ranging from 78.94 to 118.42 J/cm2, with a higher dose required when the cells aggregated.Keywords: bacterial aggregation, photoinactivation, Pseudomonads, water disinfection
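The dosages quoted above (in J/cm²) follow from the irradiance delivered by the light source and the exposure time; the sketch below shows this conversion together with a log-reduction calculation from plate counts. The irradiance value and the colony counts are assumed numbers for illustration, not measurements from the study.

```python
import math

def light_dose_j_cm2(irradiance_mw_cm2: float, exposure_s: float) -> float:
    """Radiant exposure (J/cm^2) = irradiance (W/cm^2) x exposure time (s)."""
    return irradiance_mw_cm2 / 1000.0 * exposure_s

def log_reduction(cfu_initial: float, cfu_surviving: float) -> float:
    """Log10 reduction between initial and surviving viable counts."""
    return math.log10(cfu_initial / cfu_surviving)

# Hypothetical exposure: 65.8 mW/cm^2 delivered for 20 min gives roughly 79 J/cm^2
dose = light_dose_j_cm2(65.8, 20 * 60)
print(f"Delivered dose: {dose:.1f} J/cm^2")
print(f"Log reduction: {log_reduction(1e7, 1e2):.1f} log10 units")
```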
1236 Validity and Reliability of Communication Activities of Daily Living-Second Edition and Assessment of Language-Related Functional Activities: Comparative Evidence from Arab Aphasics
Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari
Abstract:
Background: Validation of the Communication Activities of Daily Living-Second Edition (CADL-2) and Assessment of Language-Related Functional Activities (ALFA) tests is a critical investment decision, and activities related to language impairments are often underestimated. The literature indicates that age factors and gender differences may affect the performance of aphasics. Thus, understanding these influential factors is highly important to neuropsycholinguists and speech-language pathologists (SLPs). Purpose: The goal of this study is twofold: (1) to validate (or invalidate) the CADL-2 and ALFA tests, and (2) to investigate whether or not the two assessment tests are reliable. Design: A comparative study is made between the results obtained from the analyses of the Arabic versions of the CADL-2 and ALFA tests. Participants: Communication activities of daily living and language-related functional activities were assessed from the results obtained from 100 adult aphasics (50 males, 50 females; ages 16 to 65). Procedures: Firstly, the two translated and standardized Arabic versions of the CADL-2 and ALFA tests were administered to the Arab aphasics under investigation. Armed with the two new versions of the tests, one of the researchers assessed language-related functional communication and activities. Outcomes drawn from the comparative analyses were then qualitatively and statistically analyzed. Main outcomes and results: Regarding the validity of CADL-2 and ALFA, it is found that …. is more valid in both pre- and posttests. Concerning the reliability of the two tests, it is found that …. is more reliable in both pre- and posttests, which undoubtedly means that ….. is more trustworthy. It should also be noted that the relationship between age and gender was very weak, as no remarkable gender differences were found in either the CADL-2 or ALFA pre- and posttests. Conclusions and implications: The CADL-2 and ALFA tests were found to be valid and reliable. In contrast to previous studies, age and gender were not significantly associated with the results of the validity and reliability of the two assessment tests; in clearer terms, age and gender patterns do not affect the validation of these two tests. Future studies might focus on more complex questions, including the functional use of CADL-2 and ALFA; how gender and puberty influence the results when the sample is large; the effects of each type of aphasia on the final outcomes; and the results of imaging-technique measurements.Keywords: CADL-2, ALFA, comparison, language test, Arab aphasics, validity, reliability, neuropsycholinguistics
1235 Holistic Solutions for Overcoming Fluoride Contamination Challenges in West Bengal, India: A Socio-economic Study on Water Quality, Infrastructure, and Community Engagement
Authors: Rajkumar Ghosh, Shyama Pada Gorai
Abstract:
Access to safe drinking water is a fundamental human right; however, regions like Purulia, Bankura, Birbhum, Malda, Dinajpur in West Bengal, India, face formidable challenges due to heightened fluoride levels. This paper delves into the hurdles of fresh drinking water production, presenting comprehensive solutions derived from literature reviews, field surveys, and scientific analyses. Encompassing fluoride-affected areas in Purulia, Bankura, Birbhum, Malda, North-South Dinajpur, and South 24 Parganas, the study emphasizes an integrated and sustainable approach. Employing a multidisciplinary methodology, combining scientific analysis and community engagement, the study identifies key factors influencing water quality and proposes sustainable strategies. Elevated fluoride concentrations exceeding international health standards (Purulia: 0.126 – 8.16 mg/L, Bankura: 0.1 – 12.2 mg/L, Malda: 0.1 – 4.54 mg/L, Birbhum: 0.023 – 18 mg/L) necessitate urgent intervention. Infrastructure deficiencies impede water treatment and distribution, while limited awareness obstructs community participation. The proposed solutions embrace advanced water treatment technologies, infrastructure development, community education, and sustainable water management practices. This comprehensive effort aims to provide clean drinking water, safeguarding the health of affected populations. Building on these foundations, the study explores the potential of rooftop rainwater harvesting as an effective and sustainable strategy to mitigate challenges in fresh drinking water production. By addressing fluoride contamination concerns and promoting community involvement, this approach presents a holistic solution to water quality issues in affected regions. The findings underscore the importance of integrating sustainable practices with community engagement to achieve long-term water security in Purulia, Bankura, Birbhum, Malda, North-South Dinajpur, and South 24 Parganas. This study serves as a cornerstone for further research and policy development, addressing fluoride contamination's impact on public health in affected areas. Recommendations include the establishment of long-term monitoring programs to assess the effectiveness of implemented solutions and conducting health impact studies to understand the long-term effects of fluoride contamination on the local population.Keywords: fluoride mitigation, rainwater harvesting, water quality, sustainable water management, community engagement
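As a simple illustration of the screening step implied above, the snippet below flags districts whose reported fluoride range exceeds the WHO drinking-water guideline of 1.5 mg/L. The ranges are those quoted in the abstract; the guideline value is an assumption drawn from general WHO guidance rather than something stated in the study.

```python
WHO_FLUORIDE_GUIDELINE_MG_L = 1.5  # assumed WHO drinking-water guideline value

# Reported fluoride concentration ranges (mg/L) quoted in the abstract: (min, max)
districts = {
    "Purulia": (0.126, 8.16),
    "Bankura": (0.1, 12.2),
    "Malda": (0.1, 4.54),
    "Birbhum": (0.023, 18.0),
}

for name, (low, high) in districts.items():
    status = "exceeds" if high > WHO_FLUORIDE_GUIDELINE_MG_L else "is within"
    print(f"{name}: {low}-{high} mg/L -> maximum {status} the {WHO_FLUORIDE_GUIDELINE_MG_L} mg/L guideline")
```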
1234 Polyvinyl Alcohol Incorporated with Hibiscus Extract Microcapsules as Combined Active and Intelligent Composite Film for Meat Preservation
Authors: Ahmed F. Ghanem, Marwa I. Wahba, Asmaa N. El-Dein, Mohamed A. EL-Raey, Ghada E.A. Awad
Abstract:
Numerous attempts are being made to formulate suitable packaging materials for meat products. However, to the best of our knowledge, the incorporation of free hibiscus extract or its microcapsules into a pure polyvinyl alcohol (PVA) matrix as a packaging material for meats is seldom reported. Therefore, this study aims at protecting the aqueous crude extract of hibiscus flowers using a spray-drying encapsulation technique. Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and zetasizer results confirmed the successful formation of the assembled capsules via strong interactions, spherical rough microparticles, and a particle size of ~235 nm, respectively. The obtained microcapsules also enjoy high thermal stability, unlike the free extract. The obtained spray-dried particles were then incorporated into the casting solution of the pure PVA film at a concentration of 10 wt.%. The resulting free-standing composite films were investigated, compared to the neat matrix, with several characterization techniques such as FTIR, SEM, thermal gravimetric analysis (TGA), mechanical testing, contact angle, water vapor permeability, and oxygen transmission. The results demonstrated variations in the physicochemical properties of the PVA film after the inclusion of the free extract and the extract microcapsules. Moreover, biological studies emphasized the biocidal potential of the hybrid films against microorganisms contaminating the meat. Specifically, the microcapsules imparted not only antimicrobial but also antioxidant activities to PVA. Application of the prepared films to real meat samples showed low bacterial growth with only a slight increase in pH over a storage time of up to 10 days at 4 °C, which further confirmed the meat's safety. Moreover, the colors of the films did not change significantly until after 21 days, indicating spoilage of the meat samples. The dual functionality of the prepared composite films paves the way towards combined active/smart food packaging applications and would play a vital role in food hygiene, including quality control and assurance.Keywords: PVA, hibiscus, extraction, encapsulation, active packaging, smart and intelligent packaging, meat spoilage
1233 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics
Authors: Michael Wendland
Abstract:
This paper presents a framework for teaching a methodical design approach with the help of a mixture of game-based learning, design challenges, and competitions as forms of direct assessment. In today's world, developing products is more complex than ever. Conflicting goals of product cost and quality with limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with their methodical framework. Due to the inherent complexity of these products, the number of involved resources, and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. But for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting, keep motivation high, and learn to manage upcoming problems. This is achieved by using a game-based approach and a set of design challenges that are given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a constrained time frame, and a limited maximum cost, the students are subjected to boundary conditions similar to those in the real world. They must follow the methodical approach steps by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing, and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. The complete design process is accompanied by theoretical input via lectures, which is immediately transferred by the students to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e., designing the first concepts is supported by using Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach, and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the reality of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students, and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment
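To illustrate the quiz-badge-leaderboard mechanic described above, here is a minimal sketch of how badge awards and a fictional leaderboard could be tracked. The class, function names, and team names are hypothetical and are not part of the course framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    badges: list = field(default_factory=list)

    @property
    def level(self) -> int:
        # One level-up on the fictional leaderboard per completed quiz badge
        return len(self.badges)

def award_badge(team: Team, quiz_name: str, completed: bool) -> None:
    """Award a badge only when the mandatory online quiz has been completed."""
    if completed:
        team.badges.append(quiz_name)

teams = [Team("Team Alpha"), Team("Team Bravo")]
award_badge(teams[0], "requirements-quiz", completed=True)
award_badge(teams[1], "requirements-quiz", completed=False)

for rank, team in enumerate(sorted(teams, key=lambda t: t.level, reverse=True), start=1):
    print(f"{rank}. {team.name}: level {team.level}, badges: {team.badges}")
```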
1232 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System
Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng
Abstract:
Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It can provide more depth information than a traditional 2D X-ray single projection, and the radiation dose in tomosynthesis is less than in computed tomography (CT). Because of the limited angular range of scanning, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality in traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for the tomosynthesis system and to evaluate this technique by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine. This system can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom was used to evaluate the image quality. It contained lesions of three different sizes (3 mm, 5 mm, and 8 mm diameter). The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction. The non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative image reconstruction algorithm (ML-EM). Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image. The quantitative method was to calculate a peak-to-valley ratio (PVR), defined as the intensity ratio of the lesion to the background; PVRs were used to evaluate the contrast of the lesions. The qualitative results showed that in the reconstructed image of the non-coplanar scanning, the anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts could be discovered. In the 2D X-ray single projection, anatomic structures overlapped and the lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative results show that there were no significant differences in PVR between non-coplanar tomosynthesis and traditional tomosynthesis; the PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts could be reduced without decreasing the PVRs of the lesions, and the reconstructed image was more isotropically uniform than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges for the non-coplanar imaging technique.Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging
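A minimal sketch of the peak-to-valley ratio (PVR) metric used above: the lesion ("peak") and background ("valley") intensities are taken from regions of interest in the reconstructed image. The ROI coordinates and image values here are hypothetical; they do not reproduce the study's data or its ML-EM implementation.

```python
import numpy as np

def peak_to_valley_ratio(image: np.ndarray, lesion_roi, background_roi) -> float:
    """PVR = mean intensity of the lesion ROI divided by mean intensity of the background ROI."""
    return image[lesion_roi].mean() / image[background_roi].mean()

# Hypothetical reconstructed slice: uniform background with a brighter 'lesion' patch
slice_img = np.full((64, 64), 100.0)
slice_img[30:34, 30:34] = 180.0

pvr = peak_to_valley_ratio(
    slice_img,
    lesion_roi=(slice(30, 34), slice(30, 34)),
    background_roi=(slice(5, 15), slice(5, 15)),
)
print(f"PVR = {pvr:.2f}")
```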
1231 Research on the Updating Strategy of Public Space in Small Towns in Zhejiang Province under the Background of New-Style Urbanization
Abstract:
Small towns are the most basic administrative units in China, connecting cities and rural areas. They play an important role in promoting local urban and rural economic development, providing the main public services, and maintaining social stability in social governance. With the vigorous development of small towns and the transformation of the industrial structure, changes in social structure, spatial structure, and lifestyle have lagged behind, so that the spatial form and landscape style belong to neither cities nor rural areas, seriously affecting the quality of people's living space and environment. The rural economy in Zhejiang Province has taken off, and its society and population are developing with relative stability. In September 2016, Zhejiang Province issued the 'Technical Guidelines for Comprehensive Environmental Remediation of Small Towns in Zhejiang Province' so as to comprehensively implement environmental remediation of small towns, with the main content of strengthening planning and design guidance and regulating environmental sanitation, urban order, and town appearance. In November 2016, Huzhou City started the comprehensive environmental improvement of its small towns, striving over three years to significantly improve 115 small towns and to create a number of high-quality, distinctive, and beautiful towns featuring 'clean and livable, rational layout, industrial development, poetry and painting style'. This paper takes Meixi Town, Zhangwu Town, and Sanchuan Village in Huzhou City as empirical cases and analyzes small town public space by applying the relevant theories of actor-network and space syntax. It analyzes the spatial composition of actor and social structure elements and explores the relationship between actors' spatial practice and public open space by drawing on actor-network theory. The paper introduces the relevant theories and methods of space syntax and carries out research analysis and design planning analysis of small town spaces from a quantitative perspective. It then proposes an effective updating strategy for the existing problems in public space. Through planning and design at the building level, the dissonant factors produced by various combinations of spatial elements, and between landscape design and urban texture, during small town development can be resolved, the quality of inhabitants' lives promoted, and the vitality of town development increased.Keywords: small towns, urbanization, public space, updating
1230 Perovskite Nanocrystals and Quantum Dots: Advancements in Light-Harvesting Capabilities for Photovoltaic Technologies
Authors: Mehrnaz Mostafavi
Abstract:
Perovskite nanocrystals and quantum dots have emerged as leaders in the field of photovoltaic technologies, demonstrating exceptional light-harvesting abilities and stability. This study investigates the substantial progress and potential of these nano-sized materials in transforming solar energy conversion. The research delves into the foundational characteristics and production methods of perovskite nanocrystals and quantum dots, elucidating their distinct optical and electronic properties that render them well-suited for photovoltaic applications. Specifically, it examines their outstanding light absorption capabilities, enabling more effective utilization of a wider solar spectrum compared to traditional silicon-based solar cells. Furthermore, this paper explores the improved durability achieved in perovskite nanocrystals and quantum dots, overcoming previous challenges related to degradation and inconsistent performance. Recent advancements in material engineering and techniques for surface passivation have significantly contributed to enhancing the long-term stability of these nanomaterials, making them more commercially feasible for solar cell usage. The study also delves into the advancements in device designs that incorporate perovskite nanocrystals and quantum dots. Innovative strategies, such as tandem solar cells and hybrid structures integrating these nanomaterials with conventional photovoltaic technologies, are discussed. These approaches highlight synergistic effects that boost efficiency and performance. Additionally, this paper addresses ongoing challenges and research endeavors aimed at further improving the efficiency, stability, and scalability of perovskite nanocrystals and quantum dots in photovoltaics. Efforts to mitigate concerns related to material degradation, toxicity, and large-scale production are actively pursued, paving the way for broader commercial application. In conclusion, this paper emphasizes the significant role played by perovskite nanocrystals and quantum dots in advancing photovoltaic technologies. Their exceptional light-harvesting capabilities, combined with increased stability, promise a bright future for next-generation solar cells, ushering in an era of highly efficient and cost-effective solar energy conversion systems.Keywords: perovskite nanocrystals, quantum dots, photovoltaic technologies, light-harvesting, solar energy conversion, stability, device designs
1229 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity
Authors: Ladislav Écsi, Roland Jančo
Abstract:
Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. The unstressed intermediate configuration, the unloaded configuration after the plastic flow, where the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated, whose analysis results are compared with the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility
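For readers unfamiliar with the notation, the relations below restate, in a standard textbook form, the multiplicative split and the flow rule the abstract refers to. The specific symbols (the plastic rate of deformation d^p, plastic multiplier, yield function f, and Cauchy stress) are conventional choices and are not necessarily those used in the paper itself.

```latex
% Multiplicative split of the deformation gradient into elastic and plastic factors
F = F^{e} F^{p}
% Plastic flow in the current configuration: the plastic multiplier times the yield-surface
% normal, together with the Kuhn-Tucker loading/unloading conditions
d^{p} = \dot{\lambda}\, \frac{\partial f}{\partial \boldsymbol{\sigma}}, \qquad
\dot{\lambda} \ge 0, \quad f(\boldsymbol{\sigma}) \le 0, \quad \dot{\lambda}\, f(\boldsymbol{\sigma}) = 0
```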
1228 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number
Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson
Abstract:
An increase in the surface area to volume ratio with a decrease in characteristic length scale leads to a rapid increase in the pressure drop across a microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting the wetting transition in textured microchannels is essential for identifying optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas; therefore, obtaining whole-field information on the wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f·Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and different wetting states were qualitatively demarcated by observing the inflection points in the f·Re curve. Microchannels with both longitudinal and transverse ribs, with a fixed gas fraction (δ, the ratio of shear-free area to total area) and at different confinement ratios (ε, the ratio of rib height to channel height), were fabricated. The measured pressure drop values for all the flow rates across the textured microchannels were converted into the Poiseuille number. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ~ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than in the Cassie-Baxter state. Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques to conveniently capture wetting transition in textured microchannels.Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition
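A minimal sketch of how a measured pressure drop can be converted to the Poiseuille number Po = f·Re used above. The channel geometry, fluid properties, and pressure-drop value are assumed numbers for illustration, not the experimental conditions of the study.

```python
def poiseuille_number(dp_pa, length_m, d_h_m, rho, mu, u):
    """Po = f * Re, with the Darcy friction factor f = dp * (d_h / L) / (0.5 * rho * u**2)."""
    f = dp_pa * (d_h_m / length_m) / (0.5 * rho * u ** 2)
    re = rho * u * d_h_m / mu
    return f * re

# Hypothetical water flow through a textured microchannel
po = poiseuille_number(
    dp_pa=1.2e3,    # measured pressure drop, Pa
    length_m=0.03,  # channel length, m
    d_h_m=200e-6,   # hydraulic diameter, m
    rho=998.0,      # water density, kg/m^3
    mu=1.0e-3,      # dynamic viscosity, Pa*s
    u=0.05,         # mean velocity, m/s
)
print(f"Po = f*Re = {po:.1f} (Darcy-based laminar benchmark for a smooth circular duct: 64)")
```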
1227 Targeted Delivery of Docetaxel Drug Using Cetuximab Conjugated Vitamin E TPGS Micelles Increases the Anti-Tumor Efficacy and Inhibit Migration of MDA-MB-231 Triple Negative Breast Cancer
Authors: V. K. Rajaletchumy, S. L. Chia, M. I. Setyawati, M. S. Muthu, S. S. Feng, D. T. Leong
Abstract:
Triple negative breast cancers (TNBCs) are among the most aggressive breast cancers, with a high rate of local recurrences and systemic metastases. TNBCs are insensitive to existing hormonal therapy or targeted therapies such as the use of monoclonal antibodies, due to the lack of oestrogen receptor (ER) and progesterone receptor (PR) and the absence of overexpression of human epidermal growth factor receptor 2 (HER2) compared with other types of breast cancers. The absence of targeted therapies for selective delivery of therapeutic agents into tumours led to the search for druggable targets in TNBC. In this study, we developed a targeted micellar system of cetuximab-conjugated micelles of D-α-tocopheryl polyethylene glycol succinate (vitamin E TPGS) for targeted delivery of docetaxel as a model anticancer drug for the treatment of TNBCs. We examined the efficacy of our micellar system in xenograft models of triple negative breast cancer and explored the effect of the micelles on post-treatment tumours in order to elucidate the mechanism underlying the nanomedicine treatment in oncology. The targeting micelles were found to accumulate preferentially in tumours immediately after administration, compared to normal tissue. The fluorescence signal at the tumour site gradually increased up to 12 h and was sustained for up to 24 h, reflecting the increasing uptake of the targeted micelles (TPFC) by MDA-MB-231/Luc cells. In comparison, for the non-targeting micelles (TPF), the fluorescence signal was evenly distributed over the body of the mice, and only a slight increase in fluorescence in the chest area was observed 24 h post-injection, reflecting moderate uptake of the micelles by the tumour. The successful delivery of docetaxel into the tumour by the targeted micelles (TPDC) resulted in a greater degree of tumour growth inhibition than Taxotere® after 15 days of treatment. The ex vivo study demonstrated that tumours treated with the targeting micelles exhibit enhanced cell cycle arrest and attenuated proliferation compared with the control and with those treated with the non-targeting micelles. Furthermore, the ex vivo investigation revealed that both the targeting and non-targeting micellar formulations show significant inhibition of cell migration, with migration indices reduced by 0.098- and 0.28-fold, respectively, relative to the control. Overall, both the in vivo and ex vivo data increase confidence that our micellar formulations effectively targeted and inhibited EGFR-overexpressing MDA-MB-231 tumours.Keywords: biodegradable polymers, cancer nanotechnology, drug targeting, molecular biomaterials, nanomedicine