Search results for: soil-post interaction modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7445

725 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. The study therefore developed a predictive erosion model (RUSLE_Sok) deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the factors of erosion were characterized, soil losses were quantified, the impact of each factor was measured, and the morphometry of gullies was described. Data on the five RUSLE factors and on distances to settlements, roads, and rivers (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. The Harmonized World Soil Database (HWSD), Shuttle Radar Topography Mission (SRTM) imagery, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), Sentinel-2 imagery accessed and processed within Google Earth Engine, and road network and settlement layers were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at purposively selected sites. The factors of soil erosion showed low, moderate, and high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year and were classified as low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%). Multiple regression analysis showed that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-factor, which had a negative coefficient, all factors were positive, with contributions in the order LS > C > R > P > DRv > K > DS > DRd. Gullies generally range from less than 100 m to about 3 km in length. Average minimum and maximum depths are 0.6 and 1.2 m at gully heads and 1 and 1.9 m at mid-stream, respectively; downstream depths range from a minimum of 1.3 m to a maximum of 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover, and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands is recommended.
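The per-cell soil-loss computation underlying RUSLE-type models can be sketched as below. The factor values and the severity-class boundaries are illustrative placeholders, not the study's calibrated rasters or thresholds.

```python
# Sketch of the per-cell RUSLE soil-loss computation: A = R * K * LS * C * P
# (tons/ha/year). All factor values and class boundaries here are assumed
# for illustration only.

def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A for one grid cell."""
    return R * K * LS * C * P

def classify_loss(A):
    """Severity classes mirroring the abstract's low/moderate/severe/very
    severe split; the boundaries (10, 20, 25 t/ha/yr) are hypothetical."""
    if A < 10:
        return "low"
    elif A < 20:
        return "moderate"
    elif A < 25:
        return "severe"
    return "very severe"

cells = [  # (R, K, LS, C, P) per cell, hypothetical values
    (500, 0.02, 0.5, 0.3, 0.8),
    (900, 0.04, 1.8, 0.4, 1.0),
]
losses = [rusle_soil_loss(*c) for c in cells]
classes = [classify_loss(A) for A in losses]
```

In a GIS workflow the same multiplication is applied raster-wise, cell by cell, which is what tools such as Google Earth Engine parallelize.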

Keywords: RUSLE_Sok, Sokoto, google earth engine, sentinel-2, erosion

Procedia PDF Downloads 51
724 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting-tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration or deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Although they are considered distinct technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually defined as discrete systems, and the total removed material is calculated by summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
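The adjoint idea can be illustrated on a toy linearized etching model: with forward model z = A t (footprint matrix A, dwell times t), the gradient of the misfit J(t) = ||A t − z*||² is obtained from a single transpose (adjoint) application rather than a finite-difference sweep over every dwell time. The matrix, target depths, and step size below are assumed values, and the linear model is only the shallow-etching approximation the abstract mentions, not the authors' full process model.

```python
# Minimal sketch of the discrete adjoint gradient on a linearized model.
# One application of A^T gives the full gradient, versus one forward
# solve per dwell-time variable for finite differences.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def adjoint_gradient(A, t, z_target):
    residual = [zi - zt for zi, zt in zip(matvec(A, t), z_target)]
    return [2 * g for g in matvec(transpose(A), residual)]  # 2 A^T r

def solve_dwell_times(A, z_target, steps=2000, lr=0.05):
    t = [0.0] * len(A[0])
    for _ in range(steps):
        grad = adjoint_gradient(A, t, z_target)
        t = [max(0.0, ti - lr * gi) for ti, gi in zip(t, grad)]  # dwell >= 0
    return t

# Overlapping beam footprint on a 4-pixel line (assumed values).
A = [[1.0, 0.4, 0.0, 0.0],
     [0.4, 1.0, 0.4, 0.0],
     [0.0, 0.4, 1.0, 0.4],
     [0.0, 0.0, 0.4, 1.0]]
z_target = [0.5, 1.0, 1.0, 0.5]
t_opt = solve_dwell_times(A, z_target)
```

Because the footprint overlaps neighbouring pixels, the recovered dwell times differ from the naive proportional choice t = z*, which is the point of solving the inverse problem properly.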

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 266
723 Returning to Work: A Qualitative Exploratory Study of Head and Neck Cancer Survivor Disability and Experience

Authors: Abi Miller, Eleanor Wilson, Claire Diver

Abstract:

Background: UK Head and Neck Cancer incidence and prevalence are rising, related to better treatment outcomes and changing demographics, and more people of working age now survive Head and Neck Cancer. For individuals, work provides income, purpose, and social connection. For society, work increases economic productivity and reduces welfare spending. In the UK, a cancer diagnosis is classed as a disability, and more disabled people leave the workplace than non-disabled people. Limited evidence exists on return-to-work after Head and Neck Cancer, with no UK qualitative studies. Head and Neck Cancer survivors appear to return to work less than other cancer survivors. This study aimed to explore the effects of Head and Neck Cancer disability on survivors' return-to-work experience. Methodology: This was an exploratory qualitative study using a critical realist approach to carry out one-off semi-structured interviews with Head and Neck Cancer survivors who had returned to work. Interviews were informed by an interview guide and carried out remotely via Microsoft Teams or telephone. Interviews were transcribed verbatim, pseudonyms were allocated, and transcripts were anonymized. Data were interpreted using Reflexive Thematic Analysis. Findings: Thirteen Head and Neck Cancer survivors aged between 41 and 63 years participated in interviews. Three major themes were derived from the data: changed identity and meaning of work after Head and Neck Cancer, challenging and supportive work experiences, and the impact of healthcare professionals on return-to-work. Participants described visible changes in physical appearance, speech and eating challenges, mental health difficulties, and psycho-social shifts following Head and Neck Cancer. These factors affected workplace re-integration, the ability to carry out work duties, and work relationships. Most participants had challenging work experiences, including stigmatizing workplace interactions and poor communication from managers or colleagues, which further affected their confidence and mental health. Many participants experienced job change or loss, related both to Head and Neck Cancer and to living through a pandemic. A minority of participants were offered strategies such as a phased return, which supported workplace re-integration. All participants bar one wanted conversations with healthcare professionals about return-to-work but perceived these conversations as absent. Conclusion: All participants found returning to work after Head and Neck Cancer to be a challenging experience. This appears to be shaped by physical, psychological, and functional disability following Head and Neck Cancer, as well as by workplace interactions and work context.

Keywords: disability, experience, head and neck cancer, qualitative, return-to-work

Procedia PDF Downloads 107
722 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes

Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun

Abstract:

Enhancement of the technology and techniques for drilling deep directional oil and gas bore-wells is of essential industrial significance because these wells make it possible to increase productivity and output. They are generally used for drilling in hard and shale formations, which is why their drilling is frequently accompanied by emergencies and failures. As practice corroborates, the principal drawback in drilling long curvilinear bore-wells is the need to overcome large force hindrances caused by the simultaneous action of gravity, contact, and friction forces. These forces depend primarily on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length. They can lead to Eulerian buckling of the drill string and its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, one might note that these mechanical phenomena are very complex, and only simplified approaches ('soft-string drag and torque models') are used for their analysis. Taking into consideration that the cost of directional wells now increases essentially with the complication of their geometry and the enlargement of their lengths, it can be concluded that the price of mistakes in simulating drill string behavior through the use of simplified approaches can be very high, and so the problem of correct software elaboration is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis, a 3D 'stiff-string drag and torque model' of drill string bending and the appropriate software are elaborated for the simulation of tripping-in, tripping-out, and drilling operations. Computer calculations show that the contact and friction forces can be calculated and regulated, providing pre-designed trouble-free modes of operation. The elaborated mathematical models and software can be used for the prognostication and exclusion of emergency situations at the design and realization stages of the drilling process.
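For contrast with the stiff-string model the authors develop, the simplified soft-string drag recursion the abstract mentions can be sketched as below: tension is marched from the bit to the surface, with friction acting on the side-wall contact force. The weight per metre, friction coefficient, and survey stations are assumed example values, not the paper's data.

```python
import math

# Minimal 2D "soft-string" torque-and-drag sketch. Segment tension change:
#   dT = w * ds * cos(inc) + mu * N,   N = |T * d(inc) + w * ds * sin(inc)|
# where the +mu term models drag while tripping out (pulling the string up).

def hookload_tripping_out(stations, w=0.3, mu=0.25, T_bit=0.0):
    """stations: list of (measured_depth_m, inclination_rad), bit first.
    w: buoyed weight per metre (kN/m); mu: wall friction coefficient."""
    T = T_bit
    for (md0, inc0), (md1, inc1) in zip(stations, stations[1:]):
        ds = md0 - md1              # segment length (bit-side md is larger)
        inc = 0.5 * (inc0 + inc1)   # mean inclination of the segment
        dinc = inc0 - inc1
        normal = abs(T * dinc + w * ds * math.sin(inc))  # side-wall force
        T += w * ds * math.cos(inc) + mu * normal
    return T

# Build-up well: 60 deg inclination at the bit, vertical at surface
# (assumed trajectory).
stations = [(3000.0, math.radians(60)), (2000.0, math.radians(30)),
            (1000.0, math.radians(10)), (0.0, 0.0)]
```

The stiff-string model replaces this algebraic recursion with the bending equations of a curvilinear elastic rod, which is why it can capture buckling and sticking that the soft-string model misses.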

Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces

Procedia PDF Downloads 127
721 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0, and robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor-scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been explored in intelligent construction, and the relevant transdisciplinary theory and practice still show specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves a robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious camera-calibration issue, and its use in industrial production carries strict accuracy requirements; precision shortfalls in visual recognition directly affect the effectiveness and standard of production, so research on positioning precision in recognition technology needs to be strengthened. To best facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. This study identifies the position of target components by detecting information at the boundaries and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, the operational processing system assigns them to the same coordinate system based on their locations and postures. Inclination detection on the RGB image and verification against the depth image are used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed from the point cloud information.
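The aspect-ratio step can be illustrated as follows: take the bounding box of a dense point cloud and match its ratio against a catalogue of modular component types. The point coordinates, catalogue entries, and tolerance are hypothetical, and a real pipeline would use an oriented (not axis-aligned) bounding box.

```python
# Sketch of aspect-ratio-based component identification from a point cloud.
# Catalogue ratios, tolerance, and the synthetic cloud are assumed values.

def bounding_box(points):
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def aspect_ratio(points):
    dims = sorted(bounding_box(points), reverse=True)
    return dims[0] / dims[1] if dims[1] > 0 else float("inf")

def match_component(points, catalogue, tol=0.15):
    """Return the catalogue entry with the closest nominal aspect ratio,
    if it lies within a relative tolerance; otherwise None."""
    ratio = aspect_ratio(points)
    name, nominal = min(catalogue.items(), key=lambda kv: abs(kv[1] - ratio))
    return name if abs(nominal - ratio) / nominal <= tol else None

catalogue = {"beam": 6.0, "panel": 1.5, "block": 1.0}  # hypothetical modules
# Synthetic elongated part: roughly 5.9 x 0.9 in plan.
cloud = [(x * 0.1, (x % 10) * 0.1, 0.0) for x in range(60)]
```

In practice the ratio test would be combined with the boundary and corner features the abstract describes, since aspect ratio alone cannot distinguish components of similar proportion.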

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 69
720 A Comparative Study on the Use of Learning Resources in Learning Biochemistry by MBBS Students at Ras Al Khaimah Medical and Health Sciences University, UAE

Authors: B. K. Manjunatha Goud, Aruna Chanu Oinam

Abstract:

The undergraduate medical curriculum is oriented towards training students to undertake the responsibilities of a physician. During the training period, adequate emphasis is placed on inculcating logical and scientific habits of thought; clarity of expression and independence of judgment; and the ability to collect, analyze, and correlate information. At Ras Al Khaimah Medical and Health Sciences University (RAKMHSU), Biochemistry, a basic medical science subject, is taught in the first year of the five-year medical course, with vertical interdisciplinary interaction with all subjects. It needs to be taught and learned well enough for students to relate it to clinical cases, clinical problems in medicine, and future diagnostics, so that they can practice confidently and skillfully in the community. Based on these facts, a study was done to determine the extent of students' use of library resources and the impact of study materials on their preparation for examinations. This comparative cross-sectional study included 100 first-year and 80 second-year students who had successfully completed the Biochemistry course. The purpose of the study was explained to all participants. Information was collected using a pre-designed, pre-tested, self-administered questionnaire. The questionnaire was validated by senior faculty and pre-tested on students who were not involved in the study. The results showed that 80.30% of first-year and 93.15% of second-year students had a clear idea of the course outline given in the course handout or study guide. A statistically significant number of students agreed that they benefited from the practical sessions and from writing notes during class hours. A high percentage of students (50% and 62.02%) disagreed that reading only the handouts is enough for their examinations. The study also showed that only 35% and 41% of students visited the library on a daily basis, around 65% of students used lecture notes and textbooks as tools for learning and understanding the subject, and 45% and 53% of students used library resources (recommended textbooks) rather than online sources before examinations. The results presented here show that students perceived that e-learning resources, such as PowerPoint presentations, along with textbook reading using the SQ4R technique, had a positive impact on various aspects of their learning in Biochemistry. Use of the library has an overall positive impact on the learning process; in the medical field especially, it enhances outcomes, and medical students are better equipped to treat patients. It is also true, however, that library use has been in decline, which will affect knowledge and outcomes. In conclusion, students have to be taught how to use the library as a learning tool, apart from lecture handouts.

Keywords: medical education, learning resources, study guide, biochemistry

Procedia PDF Downloads 170
719 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces and the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly, or not at all, with further irradiation. These findings are already known from the ultrashort-pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism is numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can run even on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly, which means the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means, so new manufacturing methods such as self-optimization can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still consumes considerable computing capacity or is not yet possible at all. Transferring the principle of reduced models promises substantial savings there, too.

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 200
718 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study

Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz

Abstract:

Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. Such materials can be cleared from the body after fulfilling their function, which excludes surgical intervention for their removal. Among the most promising BB are amino acid-based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring (α-amino acid) and non-toxic building blocks such as fatty diols and dicarboxylic acids. The key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diesters, di-p-toluenesulfonic acid salts of bis-(α-amino acid)-alkylene diesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counterparts, activated diesters of dicarboxylic acids. The PEAs combine the advantages of the 'parent polymers', polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), and a high affinity with tissues and a wide range of desired mechanical properties (PAs). The scope of application of the PEAs can be substantially expanded by their functionalization, e.g. through the incorporation of hydrophobic fragments into the polymeric backbones. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For hydrophobic modification of the PEAs, we selected the so-called 'azlactone method', based on the application of p-phenylene-bis-oxazolinones (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low molecular weights (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000 Da) with the desired material properties were synthesized after replacing part of the BALs with an activated diester, di-p-nitrophenyl sebacate, or part of the TAAD with an alkylenediamine, 1,6-hexamethylenediamine. The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after hydrophobic modification the PEAs retain their biodegradability (in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivery containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed a new technology for fabricating microspheres to be elaborated.

Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres

Procedia PDF Downloads 164
717 Arguments against Innateness of Theory of Mind

Authors: Arkadiusz Gut, Robert Mirski

Abstract:

The nativist-constructivist debate constitutes a considerable part of current research on mindreading. Peter Carruthers and his colleagues are known for their nativist position in the debate and take issue with the constructivist views proposed by other researchers, with Henry Wellman, Alison Gopnik, and Ian Apperly at the forefront. More specifically, Carruthers, together with Evan Westra, proposes a nativistic explanation of the Theory of Mind Scale results that Wellman et al. see as supporting constructivism. While allowing for development of the innate mindreading system, Westra and Carruthers base their argumentation essentially on a competence-performance gap, claiming that cross-cultural differences in Theory of Mind Scale progression, as well as discrepancies between infants' and toddlers' results on verbal and non-verbal false-belief tasks, are fully explainable in terms of the acquisition of other, pragmatic, cognitive abilities, which are said to allow for an expression of the innately present Theory of Mind understanding. The goal of the present paper is to bring together arguments against the view offered by Westra and Carruthers. It will be shown that even though Carruthers et al.'s interpretation has not been directly controlled for in Wellman et al.'s experiments, there are serious reasons to dismiss the nativistic views that Carruthers et al. advance. The present paper discusses the following issues that undermine Carruthers et al.'s nativistic conception: (1) The concept of innateness is argued to be developmentally inaccurate; it has been dropped in many biological sciences altogether, and many developmental psychologists advocate doing the same in cognitive psychology. The reality of development is a complex interaction of changing elements that is belied by the simplistic notion of 'the innate.' (2) The purported innate mindreading conceptual system posited by Carruthers ascribes adult-like understanding to infants, ignoring the difference between first- and second-order understanding, between what can be called 'presentation' and 'representation.' (3) Advances in neurobiology speak strongly against any inborn conceptual knowledge; the neocortex, where conceptual knowledge finds its correlates, is said to be largely equipotential at birth. (4) Carruthers et al.'s interpretations are excessively charitable; they extend the results of studies done with 15-month-olds to conclusions about innateness, whereas in reality by that age there has been plenty of time for construction of the skill. (5) The looking-time experimental paradigm used in the non-verbal false-belief tasks that provide the main support for Carruthers' argumentation has been criticized on methodological grounds. In the light of the presented arguments, nativism in theory of mind research is concluded to be an untenable position.

Keywords: development, false belief, mindreading, nativism, theory of mind

Procedia PDF Downloads 194
716 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
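The Random Forest idea applied to activity-level records can be sketched as below with a toy ensemble of bootstrap-trained regression stumps; a real study would use a library implementation such as scikit-learn's RandomForestRegressor. The features (scope changes, material delay) echo the cost drivers the study identifies, but the data are synthetic.

```python
import random

# Toy Random Forest for activity-level overrun prediction: an ensemble of
# depth-1 regression trees ("stumps"), each fit on a bootstrap resample.
# Features and targets below are synthetic illustrations.

def fit_stump(X, y):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((yi - ml) ** 2 for yi in left)
                   + sum((yi - mr) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    if best is None:  # degenerate bootstrap sample: fall back to the mean
        m = sum(y) / len(y)
        return (0, float("inf"), m, m)
    return best[1:]  # (feature, threshold, left_mean, right_mean)

def fit_forest(X, y, n_trees=30, seed=7):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]  # bootstrap resample
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, row):
    preds = [(ml if row[j] <= t else mr) for j, t, ml, mr in forest]
    return sum(preds) / len(preds)

# Synthetic activities: (scope_changes, material_delay_days) -> overrun %
X = [(0, 0), (1, 2), (3, 10), (0, 1), (2, 7), (4, 14), (1, 0), (3, 12)]
y = [1.0, 4.0, 12.0, 2.0, 9.0, 16.0, 3.0, 13.0]
forest = fit_forest(X, y)
```

Counting how often each feature is chosen for a split gives a crude analogue of the feature-importance estimates the study uses to surface cost drivers.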

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 20
715 The Role of Goal Orientation on the Structural-Psychological Empowerment Link in the Public Sector

Authors: Beatriz Garcia-Juan, Ana B. Escrig-Tena, Vicente Roca-Puig

Abstract:

The aim of this article is to conduct a theoretical and empirical study examining how the goal orientation (GO) of public employees affects the relationship between the structural and psychological empowerment that they experience at their workplaces. In doing so, we follow structural empowerment (SE) and psychological empowerment (PE) conceptualizations and relate them to the public administration framework, and we review arguments from GO theories and previous related contributions. Empowerment has emerged as an important issue in public sector organizations in the wake of mainstream New Public Management (NPM), the new orientation in the public sector that aims to provide a better service for citizens. It is closely linked to the drive to improve organizational effectiveness through the wise use of human resources. Nevertheless, it is necessary to combine structural (managerial) and psychological (individual) approaches in an integrative study of empowerment. SE refers to a set of initiatives that aim at the transfer of power from managerial positions to the rest of the employees. PE is defined as a psychological state of competence, self-determination, impact, and meaning that an employee feels at work. Linking these two perspectives leads to a broader understanding of the empowerment process. In the public sector specifically, empirical contributions on this relationship are important, particularly as empowerment is a very useful tool with which to face the challenges of the new public context. There is also a need to examine the moderating variables involved in this relationship, and to extend research on work motivation in public management; we therefore propose studying the effect of individual orientations, such as GO. The GO concept refers to an individual's disposition toward developing or confirming their capacity in achievement situations. Employees' GO may be a key factor at work and in workforce selection processes, since it explains differences in personal work interests and in receptiveness to, and interpretations of, professional development activities. SE practices could affect PE feelings in different ways depending on employees' GO, since employees perceive and respond differently to such practices, which is likely to yield distinct PE results. The model is tested on a sample of 521 Spanish local authority employees. Hierarchical regression analysis was conducted to test the research hypotheses using SPSS 22 software. The results do not confirm a direct link between SE and PE, but show that learning goal orientation (LGO) has considerable moderating power in this relationship, and that its interaction with SE affects employees' PE levels. The combination of SE practices and employees' high levels of LGO is therefore an important factor for creating psychologically empowered staff in public organizations.
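The moderation test behind such a hierarchical regression can be sketched as an OLS fit with a mean-centered interaction term: a non-zero SE × GO coefficient is what "moderating power" means statistically. The respondent data below are synthetic, and the pure-Python normal-equations solver stands in for SPSS.

```python
# Sketch of a moderation analysis: regress PE on centered SE, centered GO,
# and their product. A positive interaction coefficient means SE helps PE
# only (or more) when GO is high. Data are synthetic.

def solve(A, b):
    """Gauss-Jordan elimination for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

def moderation_design(se, go):
    se_c = [s - sum(se) / len(se) for s in se]
    go_c = [g - sum(go) / len(go) for g in go]
    return [[1.0, s, g, s * g] for s, g in zip(se_c, go_c)]

# Synthetic pattern: SE raises PE only when GO is high (interaction > 0).
se = [1, 2, 3, 4, 1, 2, 3, 4]
go = [0, 0, 0, 0, 1, 1, 1, 1]
pe = [2, 2, 2, 2, 2, 3, 4, 5]
beta = ols(moderation_design(se, go), pe)  # [intercept, SE, GO, SE x GO]
```

With real survey data the same design matrix would be entered in blocks (main effects first, interaction second), which is the "hierarchical" part of the analysis.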

Keywords: goal orientation, moderating effect, psychological empowerment, structural empowerment

Procedia PDF Downloads 265
714 Impact of UV on Toxicity of Zn²⁺ and ZnO Nanoparticles to Lemna minor

Authors: Gabriela Kalcikova, Gregor Marolt, Anita Jemec Kokalj, Andreja Zgajnar Gotvajn

Abstract:

Since the 1990s, nanotechnology has been one of the fastest-growing fields of science. Nanomaterials are increasingly becoming part of many products and technologies, and metal oxide nanoparticles are among the most used nanomaterials. Zinc oxide nanoparticles (nZnO) are widely used due to their versatile properties; they appear in products including plastics, paints, food, batteries, solar cells, and cosmetics. nZnO is also a very effective photocatalyst used for water treatment. Such an expanding range of applications increases the likely occurrence of nZnO in the environment. In aquatic ecosystems, nZnO interacts with natural environmental factors such as UV radiation, and it is thus essential to evaluate possible interactions between them. In this context, the aim of our study was to evaluate the combined ecotoxicity of nZnO and Zn²⁺ to the duckweed Lemna minor in the presence or absence of UV. Inhibition of the vegetative growth of Lemna minor was monitored over a period of 7 days in multi-well plates, and the specific growth rate was determined after the experiment. The ZnO nanoparticles used had a primary size of 13.6 ± 1.7 nm. The test was conducted with nominal nZnO and Zn²⁺ (in the form of ZnCl₂) concentrations of 1, 10, and 100 mg/L. The experiment was repeated in the presence of UV of natural intensity (8 h UV; 10 W/m² UVA, 0.5 W/m² UVB). The concentration of Zn during the test was determined by ICP-MS. In the regular experiment (absence of UV), the specific growth rate was slightly increased by low concentrations of nZnO and Zn²⁺ in comparison to the control. However, 10 and 100 mg/L of Zn²⁺ resulted in 45% and 68% inhibition of the specific growth rate, respectively. In the case of nZnO, both concentrations (10 and 100 mg/L) resulted in a similar ~30% inhibition, and the response was not dose-dependent. The lack of a dose-response relationship is often observed for nanoparticles; a possible explanation is that physical impacts prevail over chemical ones.
In the presence of UV, the toxicity of Zn²⁺ increased, and 100 mg/L of Zn²⁺ caused total inhibition of the specific growth rate (100%). On the other hand, 100 mg/L of nZnO resulted in lower inhibition (19%) than in the experiment without UV (30%). It thus appears that the tested nZnO has low photoactivity but good UV absorption and/or reflective properties, and so protects the duckweed against UV impacts. The measured concentration of Zn in the test suspension decreased by only about 4% after 168 h in the case of ZnCl₂; in the nZnO test, by contrast, it decreased by 80%. The nZnO presumably partially dissolved in the medium while, at the same time, agglomeration and sedimentation of particles took place, so the concentration of Zn at the water surface decreased. The results of our study indicate that UV of natural intensity does not increase the toxicity of nZnO; instead, the particles slightly protect the plant against negative UV effects. Comparing the Zn²⁺ and nZnO results suggests that dissolved Zn plays a central role in nZnO toxicity.
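The two endpoints reported above, the specific growth rate and its percentage inhibition, follow the standard Lemna-test formulas; a short sketch with hypothetical frond counts (not the measured data):

```python
import math

def specific_growth_rate(n0, nt, days):
    """Specific growth rate mu = (ln Nt - ln N0) / t, per day."""
    return (math.log(nt) - math.log(n0)) / days

def inhibition_percent(mu_control, mu_treated):
    """Inhibition of growth relative to the untreated control, in %."""
    return (mu_control - mu_treated) / mu_control * 100

# Hypothetical frond counts over the 7-day test (illustrative values)
mu_c = specific_growth_rate(10, 60, 7)   # control
mu_t = specific_growth_rate(10, 35, 7)   # a treated well
print(f"inhibition: {inhibition_percent(mu_c, mu_t):.1f}%")
```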

Keywords: duckweed, environmental factors, nanoparticles, toxicity

Procedia PDF Downloads 316
713 Density Interaction in Determinate and Indeterminate Faba Bean Types

Authors: M. Abd El Hamid Ezzat

Abstract:

Two field trials were conducted to study the effect of plant densities, i.e., 190, 222, 266, 330 and 440 × 10³ plants ha⁻¹, on morphological characters and physiological and yield attributes of two faba bean types, viz. determinate (FLIP-87-117 strain) and indeterminate (cv. Giza-461). The results showed that the indeterminate plants significantly surpassed the determinate plants in plant height at 75 and 90 days from sowing, number of leaves at all growth stages, and dry matter accumulation at 45 and 90 days from sowing. Determinate plants possessed a greater number of side branches than the indeterminate plants, but the difference was only significant at 90 days from sowing. A greater number of flowers was produced by the indeterminate plants than by the determinate plants at 75 and 90 days from sowing, and although shedding was evident in both types, it was greater in the determinate plants at 90 days from sowing. Increasing plant density reduced the number of leaves, branches, and flowers and the dry matter accumulation per plant of both faba bean types, whereas plant height showed the reverse trend. Moreover, at all plant densities the indeterminate plants surpassed the determinate plants in all growth characters studied except the number of branches per plant at 90 days from sowing. The indeterminate plant leaves contained significantly greater concentrations of photosynthetic pigments, i.e., chlorophyll a and b and carotenoids, than the determinate plant leaves. The data also showed a significant reduction in photosynthetic pigment concentration as planting density increased. Light extinction coefficient (K) values reached their maximum at 60 days from sowing and then declined sharply at 75 days from sowing. The data showed that illumination inside the determinate faba bean canopies was better than inside the indeterminate ones.
K values tended to increase as planting density increased; meanwhile, significant interactions between faba bean type and planting density on K were found at all growth stages. The leaves of both determinate and indeterminate faba bean plants reached their maximum expansion at 75 days from sowing, reflecting the highest LAI values, and then declined in the subsequent growth stage. The indeterminate faba bean plants significantly surpassed the determinate plants in LAI up to 75 days from sowing. Growth analysis showed that NAR, RGR and CGR reached their maximum rates in the 60-75 day growth stage. The faba bean types did not differ significantly in NAR at the early growth stage, but the indeterminate plants grew faster, with significantly higher CGR values, than the determinate plants. The indeterminate faba bean plants surpassed the determinate ones in number of seeds per pod and per plant, 100-seed weight, and seed yield per plant and per hectare at all plant densities. Seed yield increased with increasing plant density in both types, and the highest seed yield for both types was attained at 440 × 10³ plants ha⁻¹.
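The growth indices discussed above (RGR, NAR, CGR) follow classical growth-analysis formulas; a brief sketch using hypothetical harvest values rather than the trial data:

```python
import math

def rgr(w1, w2, t1, t2):
    """Relative growth rate (g g^-1 day^-1) between two harvests."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def nar(w1, w2, la1, la2, t1, t2):
    """Net assimilation rate (g m^-2 day^-1), classical growth-analysis form."""
    return ((w2 - w1) / (t2 - t1)) * ((math.log(la2) - math.log(la1)) / (la2 - la1))

def cgr(w1, w2, t1, t2, ground_area):
    """Crop growth rate (g m^-2 day^-1) per unit ground area."""
    return (w2 - w1) / (ground_area * (t2 - t1))

# Hypothetical dry weights (g/plant) and leaf areas (m^2/plant) at 60 and 75 days;
# 0.03 m^2 ground area per plant corresponds roughly to 330 x 10^3 plants ha^-1
print(round(rgr(5.0, 9.0, 60, 75), 4),
      round(nar(5.0, 9.0, 0.02, 0.05, 60, 75), 2),
      round(cgr(5.0, 9.0, 60, 75, 0.03), 2))
```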

Keywords: determinate faba bean, indeterminate faba bean, physiological attributes, yield attributes

Procedia PDF Downloads 218
712 Effectiveness Factor for Non-Catalytic Gas-Solid Pyrolysis Reaction for Biomass Pellet Under Power Law Kinetics

Authors: Haseen Siddiqui, Sanjay M. Mahajani

Abstract:

Many important reactions in the chemical and metallurgical industries fall in the category of gas-solid reactions, which can be classified as catalytic or non-catalytic. In gas-solid reaction systems, heat and mass transfer limitations appreciably influence the rate of reaction, and overlooking such effects while collecting reaction rate data can have serious consequences for reactor design. Pyrolysis falls in this category: it involves the production of gases through the interaction of heat and a solid substance. Pyrolysis is also an important step in gasification, and gasification reactivity is therefore strongly influenced by the pyrolysis process, which produces the char that feeds the gasification step. In the present study, a non-isothermal transient 1-D model is developed for a single biomass pellet to investigate the effect of heat and mass transfer limitations on the rate of the pyrolysis reaction. The resulting set of partial differential equations is first discretized using the method of lines to obtain a set of ordinary differential equations in time, which are then solved using the MATLAB ODE solver ode15s. The model is capable of incorporating structural changes, porosity variation, variation in various thermal properties, and various pellet shapes. The model is used to analyze the effectiveness factor for different values of the Lewis number and the heat-of-reaction (G) factor. The Lewis number captures the effect of the thermal conductivity of the solid pellet: the higher the Lewis number, the higher the thermal conductivity. The effectiveness factor was found to decrease with decreasing Lewis number, because smaller Lewis numbers retard heat transfer inside the pellet, leading to a lower rate of pyrolysis. The G factor captures the effect of the heat of reaction.
Since the pyrolysis reaction is endothermic in nature, the G factor takes negative values; the more negative the value, the more endothermic the reaction. The effectiveness factor was found to decrease with more negative values of the G factor. This behavior can be attributed to the fact that a more negative G factor means more energy is consumed by the reaction, producing a larger temperature gradient inside the pellet. Further, analytical expressions are derived for the gas and solid concentrations and the effectiveness factor for two limiting cases of the general model: the homogeneous model and the unreacted shrinking core model.
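To illustrate the method-of-lines approach described above, here is a deliberately simplified, dimensionless sketch in Python, with scipy's stiff BDF solver playing the role of MATLAB's ode15s; the grid, rate law, and parameter values are illustrative assumptions, not the authors' model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative method-of-lines sketch: 1-D slab pellet, first-order pyrolysis.
# All quantities are dimensionless and hypothetical.
N = 21                       # spatial nodes across the half-thickness
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
Le = 5.0                     # Lewis-number-like conduction parameter
G = -0.5                     # heat-of-reaction factor (negative: endothermic)
Da = 1.0                     # Damkoehler number for the pyrolysis rate

def rhs(t, y):
    T, c = y[:N], y[N:]                   # temperature, unreacted solid fraction
    rate = Da * c * np.exp(T - 1.0)       # simple Arrhenius-like rate law
    dT = np.empty(N)
    dT[1:-1] = Le * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2 + G * rate[1:-1]
    dT[0] = Le * 2 * (T[1] - T[0]) / dx**2 + G * rate[0]  # symmetry at center
    dT[-1] = 0.0                          # surface held at furnace temperature
    return np.concatenate([dT, -rate])

# Initially cold pellet with a hot surface, solid fraction 1 everywhere
y0 = np.concatenate([np.zeros(N - 1), [1.0], np.ones(N)])
sol = solve_ivp(rhs, (0.0, 2.0), y0, method="BDF")
c_final = sol.y[N:, -1]
print("mean unreacted solid:", round(float(c_final.mean()), 3))
```

Making G more negative in this sketch steepens the internal temperature gradient and slows conversion, mirroring the trend in the effectiveness factor reported above.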

Keywords: effectiveness factor, G-factor, homogeneous model, Lewis number, non-catalytic, shrinking core model

Procedia PDF Downloads 117
711 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, odor is the most evocative and the least understood; odor testing has long seemed mysterious to most practitioners. The problem of recognizing and classifying odors is therefore important to solve: the ability to smell a product and predict whether it is still of use or has become unfit for consumption, and the imitation of this ability in a model, is worth pursuing. The usual industrial standard for this classification is color-based; however, odor can be a better classifier than color, and incorporating it in a machine would be highly useful. For the cataloging of odors of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they can handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a probability density function. The models used here for classifying the odor of cashews are Linear Discriminant Analysis and the Naive Bayes classifier. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes’ rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem.
The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision, and recall, and show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
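The two classifiers compared above can be sketched on synthetic data with scikit-learn; the features stand in for e-nose sensor responses, and no real cashew data are used:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for e-nose sensor responses (illustrative only)
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit both models and report the three performance measures named above
results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("NB", GaussianNB())]:
    pred = clf.fit(Xtr, ytr).predict(Xte)
    results[name] = (accuracy_score(yte, pred),
                     precision_score(yte, pred),
                     recall_score(yte, pred))
    print(name, results[name])
```

Which model wins depends on the data; the abstract reports LDA ahead on their cashew dataset, which this synthetic example does not attempt to reproduce.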

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 365
710 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. It is therefore very important to eliminate seasonality in seasonal macroeconomic data. There are several methods for eliminating the impact of seasonality in time series. One of them is filtering the data; however, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method is to use seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and modeling such seasonally nonstationary series with seasonal dummies is unsuitable. Instead, it is necessary to take the seasonal difference if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is that it eliminates the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may yield some gain in size and power over the HEGY test.
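For concreteness, the quarterly HEGY auxiliary series behind the test discussed above can be constructed as follows. This sketches only the filtering step; the actual test regresses the seasonal difference on lagged versions of these series plus deterministic terms and lagged differences:

```python
import numpy as np

def hegy_regressors(y):
    """Build the quarterly HEGY auxiliary series and the seasonal difference.

    y1 carries the zero-frequency (long-run) unit root, y2 the semiannual
    root at -1, and y3 the annual pair at +/-i (Hylleberg et al. transforms).
    """
    y = np.asarray(y, dtype=float)
    # Align the series on the common sample after four lags
    y0, y_1, y_2, y_3, y_4 = y[4:], y[3:-1], y[2:-2], y[1:-3], y[:-4]
    y1 = y0 + y_1 + y_2 + y_3        # (1 + L)(1 + L^2) y
    y2 = -(y0 - y_1 + y_2 - y_3)     # -(1 - L)(1 + L^2) y
    y3 = -(y0 - y_2)                 # -(1 - L^2) y
    d4 = y0 - y_4                    # Delta_4 y, the seasonal difference
    return y1, y2, y3, d4

# Quarterly series with a linear trend and a fixed seasonal pattern
t = np.arange(40)
y = 0.5 * t + np.tile([1.0, -1.0, 2.0, 0.0], 10)
y1, y2, y3, d4 = hegy_regressors(y)
print(d4[:4])  # the fixed seasonal pattern cancels in the seasonal difference
```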

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 318
709 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon. Several of its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate in modern applications such as drug delivery, supercapacitors, and sensors; GO has also been used in the photothermal treatment of cancers and of Alzheimer’s disease. The main reason for choosing GO in our work is that it is a surface-active material: it carries a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and in the basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and so modulate the photophysics of probe molecules. We used several spectroscopic techniques in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. The photophysics of the dye is modulated in the presence of GO compared with its photophysics in reverse micelles without GO. We report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with those in normal reverse micelles (i.e., reverse micelles in the absence of GO) using the 7-DCCA molecule.
The absorption maximum of 7-DCCA was blue shifted and the emission maximum red shifted in GO-containing reverse micelles compared with normal reverse micelles. The rotational relaxation time in GO-containing reverse micelles is always faster than in normal reverse micelles. At lower w₀ values the solvent relaxation time is always slower in GO-containing reverse micelles than in normal reverse micelles, while at higher w₀ it becomes almost equal in the two systems. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles because the presence of GO increases the polarity of the system, and as polarity increases the emission maximum shifts to the red. The average decay time in GO-containing reverse micelles is shorter than in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
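TCSPC decays of the kind described above are commonly analyzed by multi-exponential fitting; a minimal bi-exponential sketch with synthetic data, where the lifetimes and amplitudes are illustrative and not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential decay model commonly used for TCSPC fluorescence traces."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay (time in ns): components of 0.5 ns and 2.0 ns plus noise
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(1)
data = biexp(t, 0.7, 0.5, 0.3, 2.0) + rng.normal(0, 0.005, t.size)

# Nonlinear least-squares fit from a rough initial guess
popt, _ = curve_fit(biexp, t, data, p0=[0.5, 0.3, 0.5, 3.0])
a1, tau1, a2, tau2 = popt
avg_tau = (a1 * tau1 + a2 * tau2) / (a1 + a2)  # amplitude-weighted average
print(f"tau1={tau1:.2f} ns, tau2={tau2:.2f} ns, <tau>={avg_tau:.2f} ns")
```

In practice the instrument response function is deconvolved from the measured decay before or during fitting; that step is omitted here for brevity.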

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 139
708 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous form may adversely alter the content and purpose of the talk. Different types of tasks have helped teachers meet numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication and the interlocutor attempts to remedy the gap through talk. Therefore, this study investigated whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in the negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test (PET) was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. The speaking sections of the PET were used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and accuracy was measured as the percentage of clauses with no grammatical errors among the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced.
Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting differences, arranging sentences into stories, and a guessing game) were used in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies while doing speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had the greater impact on encouraging the negotiation of meaning and increasing the frequency of conversational strategies in each session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks again more effective than the information gap tasks.
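The accuracy measure described above, the percentage of error-free clauses among all produced clauses, can be computed directly; the clause counts below are hypothetical:

```python
def speaking_accuracy(error_free_clauses, total_clauses):
    """Accuracy as the percentage of clauses with no grammatical errors."""
    if total_clauses == 0:
        raise ValueError("no clauses produced")
    return 100.0 * error_free_clauses / total_clauses

# Hypothetical transcript counts for one learner's pretest and posttest
pre = speaking_accuracy(18, 30)
post = speaking_accuracy(26, 32)
print(f"pretest {pre:.1f}% -> posttest {post:.1f}%")
```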

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 295
707 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
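A minimal sketch of the Random Forest side of the methodology described above, on simulated data; the feature names echo the cost drivers mentioned in the abstract but are illustrative assumptions, not the case-study dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
# Hypothetical project features (names are illustrative)
scope_changes = rng.poisson(3, n).astype(float)     # change orders issued
material_delay = rng.exponential(5, n)              # days of delivery delay
crew_size = rng.integers(5, 50, n).astype(float)    # an irrelevant feature

# Overrun driven mostly by scope changes and material delays, plus noise
overrun = 2.0 * scope_changes + 1.5 * material_delay + rng.normal(0, 2, n)

X = np.column_stack([scope_changes, material_delay, crew_size])
names = ["scope_changes", "material_delay", "crew_size"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, overrun)

# Feature importances recover the simulated cost drivers
ranked = sorted(zip(names, model.feature_importances_), key=lambda p: -p[1])
for name, imp in ranked:
    print(f"{name}: {imp:.3f}")
```

The importance ranking is what lets this model class surface cost drivers such as scope changes and delivery delays, as the abstract reports for the case study.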

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 17
706 Characterization of Berberine Hydrochloride Nanoparticles

Authors: Bao-Fang Wen, Meng-Na Dai, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Xun-Bao Yin, Yu-Han Zhao, Hong-Wei Sun, Wei-Fen Zhang

Abstract:

Drug-loaded nanoparticles containing berberine hydrochloride (BH/FA-CTS-NPs) were prepared, and their physicochemical characteristics and inhibitory effect on HeLa cells were investigated. Folic acid-conjugated chitosan (FA-CTS) was prepared by an amino reaction between a folic acid active ester and chitosan molecules; BH/FA-CTS-NPs were then prepared by an ionic cross-linking technique with BH as a model drug. The morphology and particle size were determined by transmission electron microscopy (TEM), and the average diameter and polydispersity index (PDI) were evaluated by dynamic light scattering (DLS). The interactions between the various components of the nanocomplex were characterized by Fourier transform infrared spectroscopy (FT-IR). The entrapment efficiency (EE), drug loading (DL), and in vitro release were studied by UV spectrophotometry. The anti-proliferative, anti-migratory, and apoptosis-inducing effects of BH/FA-CTS-NPs were investigated using MTT assays, wound healing assays, Annexin-V-FITC single staining assays, and flow cytometry, respectively. A HeLa subcutaneously transplanted tumor model in nude mice was established and treated with the different drugs to observe the effect of BH/FA-CTS-NPs on HeLa tumors in vivo. The BH/FA-CTS-NPs prepared in this experiment had a regular shape and uniform particle size, with no aggregation. The DLS results showed that the mean particle size, PDI, and zeta potential of the BH/FA-CTS NPs were (249.2 ± 3.6) nm, 0.129 ± 0.09, and 33.6 ± 2.09 mV, respectively, and the average diameter and PDI were stable over 90 days. The FT-IR results, through the characteristic peaks of FA-CTS and BH/FA-CTS-NPs, confirmed that FA-CTS cross-linked successfully and that BH was encapsulated in the NPs. The EE and DL were (79.3 ± 3.12)% and (7.24 ± 1.41)%, respectively.
The in vitro release study indicated that the cumulative release of BH/FA-CTS NPs was (89.48 ± 2.81)% in phosphate-buffered saline (PBS, pH 7.4) within 48 h. The MTT and wound healing assays indicated that BH/FA-CTS NPs not only inhibited the proliferation of HeLa cells in a concentration- and time-dependent manner but also induced apoptosis. The subcutaneous xenograft tumor formation rate of the human cervical cancer cell line HeLa in nude mice was 98% two weeks after inoculation. Compared with the BH group and the BH/CTS-NPs group, xenograft tumor growth in the BH/FA-CTS-NPs group was markedly slower, indicating that BH/FA-CTS-NPs could significantly inhibit the growth of HeLa xenograft tumors. BH/FA-CTS NPs with a sustained release effect were thus successfully prepared by the ionic crosslinking method. Given these properties, blocking proliferation and impairing the migration of the HeLa cell line, BH/FA-CTS NPs could be an important candidate for the treatment of cervical cancer.
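The EE and DL figures above follow the standard mass-balance definitions; a short sketch with hypothetical masses, not the measured values:

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = encapsulated drug / total drug added x 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def drug_loading(encapsulated_drug_mg, nanoparticle_mass_mg):
    """DL% = encapsulated drug / total nanoparticle mass x 100."""
    return encapsulated_drug_mg / nanoparticle_mass_mg * 100.0

# Hypothetical masses, chosen only to illustrate the two formulas
total, free = 10.0, 2.1   # mg of drug added / mg left free in the supernatant
np_mass = 110.0           # mg of recovered nanoparticles
ee = entrapment_efficiency(total, free)
dl = drug_loading(total - free, np_mass)
print(f"EE = {ee:.1f}%, DL = {dl:.2f}%")
```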

Keywords: folic-acid, chitosan, berberine hydrochloride, nanoparticles, cervical cancer

Procedia PDF Downloads 108
705 Digital Value Co-Creation: The Case of WORTHY, a Virtual Collaborative Museum across Europe

Authors: Camilla Marini, Deborah Agostino

Abstract:

Cultural institutions provide more than service-based offers; indeed, they are experience-based contexts. A cultural experience is a special event that encompasses a wide range of values which, for visitors, are primarily cultural rather than economic and financial. Cultural institutions have always been characterized by inclusivity and participatory practices, but the advent of digital technologies has sharpened their interest in collaborative practices and in the relationship with their audience. Indeed, digital technologies have deeply affected the cultural experience as it was once conceived. Museums in particular, as traditional and authoritative cultural institutions, have been strongly challenged by digital technologies: they have shifted from a collection-oriented toward a visitor-centered approach, and digital technologies have generated a highly interactive ecosystem in which visitors play an active role, shaping their own cultural experience. Most studies that investigate value co-creation in museums adopt a single perspective, either that of the museum or that of the users, while the convergence/divergence of these perspectives remains underexplored. Additionally, many contributions focus on digital value co-creation as an outcome rather than as a process. This study aims to provide a joint perspective on digital value co-creation that includes both the museum and its visitors. It also deepens understanding of the contribution of digital technologies to the value co-creation process, addressing the following research questions: (i) what are the convergence/divergence drivers of digital value co-creation, and (ii) how can digital technologies be means of value co-creation? The study adopts an action research methodology based on the case of WORTHY, an educational project that involves cultural institutions and schools all around Europe, creating a virtual collaborative museum.
It is a valuable case for the aim of the study since it has digital technologies at its core, and interaction through digital technologies is fundamental throughout the experience. Action research was identified as the most appropriate methodology because it gives researchers direct contact with the field. Data were collected through primary and secondary sources. Cultural mediators such as museums, teachers, and students’ families were interviewed, while a focus group was designed to interact with students, investigating all aspects of the cultural experience. Secondary sources encompassed project reports and website contents, in order to deepen the perspective of the cultural institutions. Preliminary findings highlight the dimensions of digital value co-creation in cultural institutions from an integrated museum-visitor perspective and the contribution of digital technologies to the value co-creation process. The study offers a twofold contribution, at both the academic and the practitioner level: it helps fill the gap in the cultural management literature on the convergence/divergence of service provider and user perspectives, and it provides cultural professionals with guidelines on how to evaluate the digital value co-creation process.

Keywords: co-creation, digital technologies, museum, value

Procedia PDF Downloads 135
704 Rumen Metabolites and Microbial Load in Fattening Yankasa Rams Fed Urea and Lime Treated Groundnut (Arachis hypogaea) Shell in a Complete Diet

Authors: Bello Muhammad Dogon Kade

Abstract:

The study was conducted to determine the effect of treated groundnut (Arachis hypogaea) shell in a complete diet on rumen metabolites and microbial load in fattening Yankasa rams. The study was carried out at the Teaching and Research Farm (Small Ruminants Unit) of the Animal Science Department, Faculty of Agriculture, Ahmadu Bello University, Zaria. Each kilogram of groundnut shell was treated with 5% urea for treatment 2 (UTGNS) and 5% lime for treatment 3 (LTGNS); for treatment 4 (ULTGNS), 1 kg of groundnut shell was treated with 2.5% urea and 2.5% lime, while the shell in treatment 1 was left untreated (UNTGNS). Sixteen Yankasa rams were randomly assigned to the four treatment diets, four animals per treatment, in a completely randomized design (CRD). The diet was formulated to have a 14% crude protein (CP) content. Rumen fluid was collected from each ram at the end of the experiment at 0 and 4 hours post-feeding; the samples were put in 30 ml bottles and acidified with 5 drops of sulphuric acid (0.1 N H₂SO₄) to trap ammonia. The results showed that the mean NH₃-N values differed significantly (P<0.05) among the treatment groups, with rams on the ULTGNS diet having the highest value (31.96 mg/L). TVFAs were significantly (P<0.05) higher in rams fed the UNTGNS diet, which was also higher in total nitrogen. The effect of sampling period revealed that NH₃-N, TVFAs, and TP were significantly (P<0.05) higher in rumen fluid collected 4 hours post-feeding across the treatment groups, whereas rumen fluid pH was significantly (P<0.05) higher at 0 hours post-feeding in all treatment diets. In the treatment-by-sampling-period interaction, animals on the ULTGNS diet had the highest mean NH₃-N values at both 0 and 4 hours post-feeding, significantly (P<0.05) higher than rams on the other treatment diets.
Rams on the UTGNS diet had the highest bacterial load of 4.96 × 10⁵/ml, which was significantly (P<0.05) higher than the microbial loads of animals fed the UNTGNS, LTGNS and ULTGNS diets. Protozoa counts were significantly (P<0.05) highest in rams fed the UTGNS diet, followed by the ULTGNS diet. The results showed no significant difference (P>0.05) in the bacterial counts of the animals at 0 and 4 hours post-feeding, but rumen fungi and protozoa loads at 0 hours were significantly (P<0.05) higher than at 4 hours post-feeding. The use of untreated ground groundnut shells in the diets of fattening Yankasa rams is therefore recommended.
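The treatment comparisons above follow a completely randomized design with four diets and four rams per diet. As a rough sketch of how such group means are tested, the following computes a one-way ANOVA F statistic by hand; the group values in the usage example are illustrative placeholders, not data from the study.

```python
# Minimal one-way ANOVA sketch for a completely randomized design (CRD).
# Illustrative only: the numbers below are NOT measurements from the study.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of groups of values."""
    k = len(groups)                      # number of treatments
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-treatment sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-treatment (error) sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical NH3-N-like values for two of four diets, two rams each:
f_stat, dfb, dfw = one_way_anova([[1.0, 2.0], [3.0, 4.0]])
```

The computed F would then be compared against the F distribution with (df_between, df_within) degrees of freedom at the 0.05 level, matching the P<0.05 criterion used throughout the abstract.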

Keywords: rumen metabolites, microbial load, volatile fatty acids, ammonia, total protein

Procedia PDF Downloads 40
703 An Acyclic Zincgermylene: Rapid H₂ Activation

Authors: Martin Juckel

Abstract:

Probably no other field of inorganic chemistry has undergone more rapid development in the past two decades than the low-oxidation-state chemistry of main group elements. This rapid development has only been possible through the development of new bulky ligands. In the case of our research group, super-bulky monodentate amido ligands and β-diketiminate ligands have been used with great success. We first synthesized the unprecedented magnesium(I) dimer [ᴹᵉˢNacnacMg]₂ (ᴹᵉˢNacnac = [(ᴹᵉˢNCMe)₂CH]⁻; Mes = mesityl), which has since been used both as a reducing agent and for the synthesis of new metal-magnesium bonds. In the case of the zinc bromide precursor [L*ZnBr] (L* = N(Ar*)(SiPri₃); Ar* = C₆H₂{C(H)Ph₂}₂Me-2,6,4), reduction with [ᴹᵉˢNacnacMg]₂ led to such a metal-magnesium bond. This [L*ZnMg(ᴹᵉˢNacnac)] compound can be seen as an ‘inorganic Grignard reagent’, which can be used to transfer the metal fragment onto other functional groups or other metal centers, just like a conventional Grignard reagent. By simple addition of (TBoN)GeCl (TBoN = N(SiMe₃){B(DipNCH)₂}) to the aforesaid compound, we were able to transfer the amido-zinc fragment to the Ge center of the germylene starting material and to synthesize the first example of a germanium(II)-zinc bond: [:Ge(TBoN)(ZnL*)]. While such reactions typically lead to complex product mixtures, [:Ge(TBoN)(ZnL*)] could be isolated as dark blue crystals in good yield. This new compound shows interesting reactivity towards small molecules, especially dihydrogen gas. This is of special interest because dihydrogen is one of the more difficult small molecules to activate, owing to its strong (BDE = 108 kcal/mol) and non-polar bond. In this context, the interaction of the H₂ σ-bond with the tetrelylene p-orbital (LUMO), with concomitant donation of the tetrelylene lone pair (HOMO) into the H₂ σ* orbital, is responsible for the activation of dihydrogen gas.
Accordingly, the narrower the HOMO-LUMO gap of a tetrelylene, the more reactive towards H₂ it typically is. A narrow HOMO-LUMO gap was achieved by transferring electropositive substituents, namely metal substituents with relatively low Pauling electronegativity (zinc: 1.65), onto the Ge center (here, the zinc-amido fragment). In view of the unprecedented reactivity of [:Ge(TBoN)(ZnL*)], a computational examination of its frontier orbital energies was undertaken. The energy separation between the HOMO, which has significant Ge lone pair character, and the LUMO, which has predominantly Ge p-orbital character, is narrow (40.8 kcal/mol; cf. ∆S-T = 24.8 kcal/mol) and comparable to the HOMO-LUMO gaps calculated for other literature-known complexes. The calculated very narrow HOMO-LUMO gap of the [:Ge(TBoN)(ZnL*)] complex is consistent with its high reactivity, and is remarkable considering that the complex incorporates a π-basic amide ligand, which is known to raise the LUMO of germylenes considerably.
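For readers more used to electron-volt units in frontier-orbital discussions, the gaps quoted above can be converted from kcal/mol with the standard factor 1 eV ≈ 23.06 kcal/mol; a minimal sketch:

```python
# Unit-conversion sketch: kcal/mol -> eV, using the standard conversion
# factor (1 eV = 23.0605 kcal/mol). Gap values are taken from the abstract.
KCAL_PER_MOL_PER_EV = 23.0605

def kcal_to_ev(e_kcal_per_mol):
    """Convert an energy in kcal/mol to electron volts."""
    return e_kcal_per_mol / KCAL_PER_MOL_PER_EV

gap_ev = kcal_to_ev(40.8)   # HOMO-LUMO gap of [:Ge(TBoN)(ZnL*)], ~1.77 eV
st_ev = kcal_to_ev(24.8)    # singlet-triplet separation, ~1.08 eV
```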

Keywords: activation of dihydrogen gas, narrow HOMO-LUMO gap, first germanium(II)-zinc bond, inorganic Grignard reagent

Procedia PDF Downloads 173
702 Bibliometric Analysis of Risk Assessment of Inland Maritime Accidents in Bangladesh

Authors: Armana Huq, Wahidur Rahman, Sanwar Kader

Abstract:

Inland waterways in Bangladesh play an important role in providing comfortable and low-cost transportation. However, maritime accidents take many lives and create unwanted hazards every year. This article presents a comprehensive review of inland waterway accidents in Bangladesh, together with a comparative study between international and local inland research on maritime accidents. Articles from the inland waterway field are analyzed in depth to give a comprehensive overview of the nature of the academic work, the accident and risk management process, and the different statistical analyses. It is found that empirical analysis based on the available statistical data dominates the research domain. For this study, major maritime accident-related works from the last four decades in Bangladesh (1981-2020) are analyzed to prepare a bibliometric analysis. A study of maritime accidents involving passenger vessels during 1995-2005 indicates that the predominant causes of accidents in the inland waterways of Bangladesh are collision and adverse weather (77%), of which collision due to human error alone accounts for 56% of all accidents. Another study reports that collision was the major cause of waterway accidents (60.3%) during 2005-2015; about 92% of these collisions involved direct contact with another vessel, while the remaining 8% involved contact with permanent obstructions on waterway routes. An overall analysis covering the last 25 years (1995-2019) shows that collision remains one of the main accident types, accounting for about 50.3% of accidents; the other accident types are cyclone or storm (17%), overloading (11.3%), physical failure (10.3%), excessive waves (5.1%), and others (6%). Very few notable works are available that test or compare methods, propose new methods for risk management, or address modeling and uncertainty treatment.
The purpose of this paper is to provide an overview of the evolution of the marine accident research domain regarding the inland waterways of Bangladesh and to introduce new ideas and methods to bridge the gap between international and national inland maritime research, which can be a catalyst for a safer and more sustainable water transportation system in Bangladesh. Another fundamental objective of this paper is to guide national maritime authorities and international organizations in implementing risk management processes for shipping accident prevention in waterway areas.

Keywords: inland waterways, safety, bibliometric analysis, risk management, accidents

Procedia PDF Downloads 173
701 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building

Authors: A. Schuchter, M. Promegger

Abstract:

The outbreak of the COVID-19 pandemic has shown us that online education is so much more than just a cool feature for teachers – it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and compute mathematical examples while the students watch the process. On the other hand, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher’s use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects that uses drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative shared 3D modeling with clara.io, where students build mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which helps them develop new skills intuitively and boosts logically structured thinking. The collaborative aspect of working in groups helps the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an “open book”) with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought.
Therefore, students learn to formulate goals, solve problems, and create a ready-to-use product with the help of “reverse engineering”, cross-referencing and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.

Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning

Procedia PDF Downloads 105
700 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multi-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared to a conventional PD dark current of 1 µA for an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. The disk geometry strongly and circularly confines light into an ultra-small volume in the form of whispering-gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared to a microring, a microdisk removes the inner boundary, enabling even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low-power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser applications or other passive optical components, the microdisk must be designed very carefully to excite the fundamental mode, since a microdisk usually supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue because the local light absorption is mode-insensitive: light power carried by all modes is expected to be converted into photocurrent.
Another benefit of using a microdisk is that the power circulating inside avoids the need to introduce a reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions for further improving the device performance are also discussed.
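The whispering-gallery picture above can be made concrete with the usual resonance condition m·λ ≈ 2πR·n_eff for azimuthal order m. The sketch below lists which orders fall in a wavelength window for a 2.0 µm-radius disk; the effective index value is an illustrative assumption, not a parameter from the paper.

```python
# Sketch of the whispering-gallery resonance condition for a microdisk:
# m * lambda_m ~= 2*pi*R*n_eff. The n_eff used below is an assumed,
# illustrative value, not one reported in the paper.
import math

def wgm_resonance_wavelengths(radius_um, n_eff, lam_min_um, lam_max_um):
    """Return (m, wavelength_um) pairs whose resonance falls in the window."""
    optical_path = 2 * math.pi * radius_um * n_eff    # round-trip optical length
    m_low = math.ceil(optical_path / lam_max_um)      # smallest azimuthal order
    m_high = math.floor(optical_path / lam_min_um)    # largest azimuthal order
    return [(m, optical_path / m) for m in range(m_low, m_high + 1)]

# 2.0 um-radius disk (as in the paper), assumed n_eff = 2.6, scanned
# over the telecom band around 1.55 um:
modes = wgm_resonance_wavelengths(2.0, 2.6, 1.50, 1.60)
```

Because the detector absorbs all circulating modes regardless of order, as the abstract notes, such mode bookkeeping matters for lasers and filters but not for the PD itself.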

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 392
699 Assessment of Community Perceptions of Mangrove Ecosystem Services and Their Link to SDGs in Vanga, Kenya

Authors: Samson Obiene, Khamati Shilabukha, Geoffrey Muga, James Kairo

Abstract:

Mangroves play a vital role in the achievement of multiple global Sustainable Development Goals (SDGs), particularly SDG 14 (life below water). Their management, however, faces several shortcomings arising from inadequate knowledge of the perceptions of their ecosystem services, hence the need to map mangrove goods and services onto SDGs while interrogating the disaggregated perceptions. This study therefore aimed at exploring the parities and disparities in attitudes and perceptions of mangrove ecosystem services among community members of Vanga and the link between ecosystem services (ESs) and specific SDG targets. The study was based in the Kenya-Tanzania transboundary area in Vanga, where a carbon-offset project on mangroves is being upscaled. Mixed methods employing surveys, focus group discussions (FGDs) and reviews of secondary data were used. A two-stage cluster sampling was used to select the study population and the sample size. FGDs were conducted with purposively selected active participants in mangrove-related activities with distinct socio-demographic characteristics. Sampled respondents comprised males and females of different occupations and age groups. A secondary data review was used to select the specific SDG targets against which mangrove ecosystem services, identified through a value chain analysis, were mapped. In Vanga, 20 ecosystem services were identified and categorized under supporting, cultural and aesthetic, provisioning and regulating services. According to the findings of this study, 63.9% (95% CI 56.6-69.3) perceived the ESs as very important for economic development, 10.3% (95% CI 0-21.3) viewed them as important for environmental and ecological development, while 25.8% (95% CI 2.2-32.8) were not sure of any role they play in development.
In the socio-economic disaggregation, ecosystem service values were found to vary with the level of interaction with the ecosystem, which depended on gender and other socio-economic classes within the study area. The youths, low-income earners, women and those with low education levels were identified as the primary beneficiaries of mangrove ecosystem services. The study also found that, of the 17 SDGs, mangroves have the potential to influence the achievement of 12, including SDGs 1, 2, 3, 4, 6, 8, 10, 12, 13, 14, 15 and 17, either directly or indirectly. Generally, therefore, the local community is aware of the critical importance of mangroves for enhanced livelihoods and ecological services, but challenges in sustainability still occur as a result of the diverse values of the services and the contradicting interests of the different actors around the ecosystem. It is therefore important to consider parities and disparities in values and perceptions to avoid a ‘tragedy of the commons’ while striving to enhance the sustainability of the mangrove ecosystem.

Keywords: sustainable development, community values, socio-demographics, Vanga, mangrove ecosystem services

Procedia PDF Downloads 133
698 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neural science must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the neuronal behaviours they exhibit, such as spiking and bursting, and these classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models based on first-order differential equations are discussed and compared: Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF) and Hindmarsh-Rose (HR). First, the physical meanings, derivations and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were examined visually in the Matlab environment, with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as Tonic Spiking, Tonic Bursting, Mixed-Mode Firing, Spike Frequency Adaptation, Resonator and Integrator. As a result, the Izhikevich model was shown to produce Regular Spiking, Chattering (continuous bursting), Intrinsically Bursting, Thalamo-Cortical, Low-Threshold Spiking and Resonator behaviours. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as Regular Firing, Adaptive Firing, Initial Bursting, Regular Bursting, Delayed Firing, Delayed Regular Bursting, Transient Firing and Irregular Firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting and chaotic firing.
From these results, the Izhikevich cell model may be preferred for its ability to reflect the true behaviour of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviours of the Hindmarsh-Rose neuron model, like those of other chaotic systems, are thought to be applicable in many scientific and engineering fields such as physics, secure communication and signal processing.
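As an illustration of how compactly such models can be simulated, the sketch below integrates the standard Izhikevich equations (v' = 0.04v² + 5v + 140 - u + I, u' = a(bv - u), with reset v ← c, u ← u + d when v ≥ 30 mV) using a forward-Euler step; the input current, duration and step size are illustrative choices, and the paper's own simulations were done in Matlab.

```python
# Minimal forward-Euler sketch of the Izhikevich spiking neuron model,
# with the standard regular-spiking parameters (a=0.02, b=0.2, c=-65, d=8).
# Input current I and step size dt are illustrative, not from the paper.

def izhikevich(a, b, c, d, I, t_ms=500.0, dt=0.5):
    """Simulate one neuron for t_ms milliseconds; return the spike count."""
    v, u = c, b * c            # membrane potential (mV) and recovery variable
    spikes = 0
    for _ in range(int(t_ms / dt)):
        if v >= 30.0:          # spike: reset v, bump the recovery variable
            v, u = c, u + d
            spikes += 1
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# Tonic spiking under a constant suprathreshold input:
n_spikes = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0)
```

Swapping the (a, b, c, d) tuple reproduces the other regimes the abstract lists (e.g. intrinsically bursting or chattering), which is exactly the parameter economy argued for above.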

Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 162
697 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, systemic analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach that organizes knowledge, creates a universal design language and controls complex systems. In fact, system analysis is structured sequentially in steps: observation of the system by various observers and in various aspects, analysis of interactions and regulatory chains, modeling that takes the evolution of the system into account, and simulation and real tests in order to obtain consensus. Thus the systems approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted here to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function ‘To analyse the UAV components’. This function is then broken into sub-functions, and the process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4).
Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certainty which model is the right one or, at least, the best. In fact, this kind of model allows users sufficient freedom in its construction, so the subjective factor introduces a supplementary dimension into its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components based on the SADT method (Structured Analysis and Design Technique). This functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
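The actigram decomposition described above can be sketched as a simple tree: a parent function broken into levelled sub-functions. The node labels below are hypothetical groupings of the listed UAV components, paraphrased for illustration rather than taken from the paper's actual actigrams.

```python
# Illustrative sketch of a SADT-style actigram hierarchy as a nested dict.
# The sub-function labels are paraphrased groupings of the UAV components
# listed in the abstract, not the paper's exact actigram titles.

sadt_model = {
    "A0": {
        "function": "To analyse the UAV components",
        "children": {
            "A1": "Analyse body, power supply and platform",
            "A2": "Analyse computing, sensors and actuators",
            "A3": "Analyse software and loop principles",
            "A4": "Analyse flight controls and communications",
        },
    }
}

def flatten_model(model):
    """Flatten the decomposition into (node, label) pairs, parent first."""
    out = []
    for node, spec in model.items():
        out.append((node, spec["function"]))
        out.extend(sorted(spec["children"].items()))  # A1..A4 in order
    return out
```

A deeper SADT model would simply nest further `children` dicts under each sub-function, one level per decomposition step.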

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 180
696 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT’s measure, often referred to as the IAT effect, indicates the strength of automatic preferences for the self relative to others, which is often considered an index of implicit self-esteem. However, according to dual-process theory, the IAT does not rely entirely on automatic processes; it is also influenced by controlled processes. The present study therefore analyzed IAT data with the Quad model, separating four processes underlying IAT performance: the likelihood that an automatic association is activated by the stimulus in a trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is considered the strength of the association activated by the stimulus, which reflects what implicit measures of social cognition aim to assess: the stronger the automatic association between the self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. The cognitive theory of depression assumes that a negative self-schema is crucial in depression; from this point of view, implicit self-esteem would be negatively associated with depression. However, the results of empirical studies are inconsistent.
The aims of the present study were to examine the psychometric properties of the AC parameter (i.e., its test-retest reliability and its correlations with explicit self-esteem and depression) and to compare them with those of the IAT effect. The present study had 105 patients with depressive disorders complete the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II and the IAT as a pretest. After at least 3 weeks, the participants completed the second IAT. The data were analyzed with the latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The results showed that the latent-trait MPT had a satisfactory model fit. The test-retest reliabilities of the AC parameter and the IAT effect were medium (r = .43, p < .0001) and small (r = .29, p < .01), respectively. Only the AC parameter showed a significant correlation with explicit self-esteem (r = .19, p < .05), and neither index was correlated with depression. Collectively, the AC parameter was a satisfactory index of implicit self-esteem compared with the IAT effect. The present study also supports previous findings that implicit self-esteem is not correlated with depression.
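The four Quad processes combine into response-probability equations along the branches of the processing tree. The sketch below follows the standard formulation of the Quad model for trials where the activated association is compatible versus incompatible with the correct response; the parameter values in the usage example are illustrative, not estimates from this study.

```python
# Hedged sketch of the Quad model's processing-tree equations: probability
# of a correct response given the four parameters described above.
# Parameter values in the example are illustrative, not fitted estimates.

def p_correct(ac, d, ob, g, compatible):
    """Quad-model probability of a correct response on one trial type."""
    if compatible:
        # Association drives the correct answer, detection succeeds,
        # or (neither) a guess happens to be right.
        return ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # Incompatible trial: the activated association must be overcome (OB)
    # when detection succeeds; otherwise only detection or guessing helps.
    return ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * g

# Illustrative parameters: moderate association, good discrimination.
p_comp = p_correct(ac=0.3, d=0.8, ob=0.5, g=0.5, compatible=True)
p_incomp = p_correct(ac=0.3, d=0.8, ob=0.5, g=0.5, compatible=False)
```

Fitting the model means finding the (AC, D, OB, G) values whose predicted correct-response probabilities best match the observed IAT error rates per trial type, which is what the latent-trait MPT estimation in TreeBUGS does hierarchically across participants.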

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 109