Search results for: Hybrid Matrix Composite
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5186

266 Customer Focus in Digital Economy: Case of Russian Companies

Authors: Maria Evnevich

Abstract:

In modern conditions, price competition is becoming less effective in most markets. On the one hand, margins in the main traditional sectors of the economy are gradually shrinking, so further price reduction becomes too ‘expensive’ for the company. On the other hand, the effect of price reduction is leveled out, probably for informational reasons: even if a company reduces prices and makes its products more accessible to the buyer, sales are unlikely to grow unless large-scale advertising and information campaigns accompany the reduction. Conversely, a large-scale information and advertising campaign has a much greater effect by itself than a price reduction. At the same time, the cost of mass communication grows every year, especially in the main information channels. The article generalizes, systematizes and develops theoretical approaches and best practices in customer-focused business management and relationship marketing in the modern digital economy. The research methodology is based on synthesis and content analysis of sociological and marketing research and on a study of the systems for handling consumer appeals and the loyalty programs of the 50 largest client-oriented companies in Russia. In addition, analysis of internal documentation on customer purchases in one of the largest retail companies in Russia made it possible to determine whether buyers prefer to make complex purchases in a single retail store with the best price image for them. The cost of attracting a new client is already high and continues to grow, so it becomes more important to retain customers and deepen their engagement through marketing tools. Modern digital technologies play a huge role here, both in advertising (e-mailing, SEO, contextual advertising, banner advertising, SMM, etc.) and in service.
To implement the client-oriented omnichannel service described above, it is necessary to identify the client and work with the personal data provided when filling in the loyalty program application form. The analysis of the loyalty programs of 50 companies identified the following types of cards: discount cards, bonus cards, mixed cards, coalition loyalty cards, bank loyalty programs, aviation loyalty programs, hybrid loyalty cards, and situational loyalty cards. Loyalty cards make it possible not only to stimulate otherwise unplanned purchases, but also to provide individualized offers and more precisely targeted information. The development of digital technologies and modern means of communication has significantly changed not only marketing and promotion, but the economic landscape as a whole. A company’s digital capabilities in customer orientation have become factors of competitiveness: personalization of service, customization of advertising offers, optimization of marketing activity and improvement of logistics.

Keywords: customer focus, digital economy, loyalty program, relationship marketing

Procedia PDF Downloads 142
265 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends

Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Co-firing coal and biomass blends is an effective method of reducing the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. Moreover, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have drawbacks compared to coal, such as low calorific value and high moisture and volatile matter contents. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment: it improves the calorific value of biomass along with serious reductions in moisture and volatile matter contents. In this context, several woody biomass materials, including Rhododendron, hybrid poplar, and ash-tree, were subjected to torrefaction in a horizontal tube furnace at 200°C under nitrogen flow. The solid residue of torrefaction, also called 'biochar', was collected and analyzed to monitor the variations taking place in biomass properties. On the other hand, Turkish lignites from the Elbistan, Adıyaman-Gölbaşı and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance to lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars, with the biochar blending ratio kept at 10 wt% so that the lignites were the dominant constituents of the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C at a heating rate of 40°C/min under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, were compared to assess the effects of torrefaction and blending.
In addition, characterization techniques including X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy and Scanning Electron Microscopy (SEM) were applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites and their blends to examine the co-combustion characteristics in detail. The results revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the ingredient fuels. Burnout and ignition performances of each blend were compared, taking into account the lignite and biomass structures and characteristics, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased with the addition of biomass, co-combustion is a reasonable and sustainable solution thanks to environmental benefits such as reductions in net carbon dioxide (CO2), SOx and hazardous organic chemicals derived from volatiles.

Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment

Procedia PDF Downloads 317
264 Bioflavonoids Derived from Mandarin Processing Wastes: Functional Hydrogels as a Sustainable Food System

Authors: Niharika Kaushal, Minni Singh

Abstract:

Fruit crops are widely cultivated throughout the world, with citrus among the most common; mandarins, oranges, grapefruits, lemons, and limes are the most frequently grown varieties. Citrus cultivars are industrially processed into juice, leaving approx. 25-40% by wt. of the biomass as peels and seeds, generally considered waste. Consequently, a significant amount of this nutraceutical-enriched biomass goes to waste which, if utilized wisely, could revolutionize the functional food industry: it contains a wide range of bioactive compounds, mainly polyphenols and terpenoids, making it an abundant source of functional bioactives. Mandarin is a potential source of bioflavonoids with putative antioxidative properties, and its potential for developing value-added products is obvious. In this study, ‘kinnow’ mandarin (Citrus nobilis X Citrus deliciosa) biomass was studied for its flavonoid profile. Dried and pulverized peels were subjected to a green and sustainable extraction technique, namely supercritical fluid extraction, carried out at a pressure of 330 bar and a temperature of 40 °C with 10% ethanol as co-solvent. The obtained extract contained 47.3±1.06 mg/ml rutin equivalents as total flavonoids. Mass spectral analysis revealed the prevalence of polymethoxyflavones (PMFs), chiefly tangeretin and nobiletin. Furthermore, the antioxidant potential was analyzed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method and estimated at an IC₅₀ of 0.55 μg/ml. Pre-systemic metabolism limits the functionality of flavonoids, as observed in this study through in vitro gastrointestinal experiments in which nearly 50.0% of the flavonoids degraded within 2 hours of gastric exposure.
We proposed nanoencapsulation as a means of overcoming this problem. Flavonoid-laden poly(lactic-co-glycolic acid) (PLGA) nanoencapsulates were bioengineered using the solvent evaporation method and furnished to a particle size between 200-250 nm; these protected the flavonoids in the gastric environment, allowing only 20% to be released in 2 h. In a further step, the nanoencapsulates were impregnated within alginate hydrogels, fabricated by ionic cross-linking, to act as delivery vehicles within the gastrointestinal (GI) tract. As a result, 100% protection from pre-systemic release of the bioflavonoids was achieved. These alginate hydrogels had key features, i.e., a low porosity of nearly 20.0%, and Cryo-SEM (cryo-scanning electron microscopy) images of the composite corroborate the packing ability of the alginate hydrogel. This work concludes that such waste can be used to develop functional biomaterials while retaining the functionality of the bioactive itself.

Keywords: bioflavonoids, gastrointestinal, hydrogels, mandarins

Procedia PDF Downloads 56
263 Design of Nano-Reinforced Carbon Fiber Reinforced Plastic Wheel for Lightweight Vehicles with Integrated Electrical Hub Motor

Authors: Davide Cocchi, Andrea Zucchelli, Luca Raimondi, Maria Brugo Tommaso

Abstract:

The increasing attention given to environmental pollution and climate change is strongly stimulating the development of electrically propelled vehicles powered by renewable energy, in particular solar energy. Given the small amount of solar energy that can be stored and subsequently transformed into propulsive energy, it is necessary to develop vehicles with high mechanical, electrical and aerodynamic efficiencies along with reduced masses. Mass reduction is especially relevant for the unsprung masses, that is, the assembly of those elements whose distance from the ground does not vary (wheel, suspension system, hub, upright, braking system). Reducing the unsprung masses decreases the rolling inertia and improves the drivability, comfort, and performance of the vehicle. This principle applies even more to solar-propelled vehicles equipped with an electric motor connected directly to the wheel hub; in this solution, the electric motor is integrated inside the wheel. Since the electric motor is part of the unsprung masses, compact and lightweight solutions are of fundamental importance. The purpose of this research is the design, development and optimization of a CFRP 16 wheel hub motor for solar propulsion vehicles that can carry up to four people. Besides maximizing aspects of primary importance such as mass, strength, and stiffness, other innovative constructive aspects were explored. One of the main objectives was to achieve high geometric packing in order to ensure a reduced lateral dimension without reducing the power exerted by the electric motor. In the final solution, the wheel hub motor assembly is entirely contained within the rim width, for a total lateral dimension of less than 100 mm.
This result was achieved by developing an innovative connection system between the wheel and the rotor with a double purpose: centering and transmission of the driving torque. This solution, with appropriate interlocking noses, allows the transfer of high torques and at the same time guarantees both the centering and the necessary stiffness of the transmission system. Moreover, to avoid delamination in critical areas, evaluated by means of FEM analysis using 3D Hashin damage criteria, electrospun nanofibrous mats were interleaved between critical CFRP layers. In order to reduce rolling resistance, the rim was designed to withstand high inflation pressure. Laboratory tests were performed on the rim using the Digital Image Correlation (DIC) technique, and the wheel was tested for fatigue bending according to E/ECE/324 R124e.

Keywords: composite laminate, delamination, DIC, lightweight vehicle, motor hub wheel, nanofiber

Procedia PDF Downloads 188
262 Effective Service Provision and Multi-Agency Working in Service Providers for Children and Young People with Special Educational Needs and Disabilities: A Mixed Methods Systematic Review

Authors: Natalie Tyldesley-Marshall, Janette Parr, Anna Brown, Yen-Fu Chen, Amy Grove

Abstract:

It is widely recognised in policy and research that the provision of services for children and young people (CYP) with Special Educational Needs and Disabilities (SEND) is enhanced when health and social care, and education services collaborate and interact effectively. In the UK, there have been significant changes to policy and provisions which support and improve collaboration. However, professionals responsible for implementing these changes face multiple challenges, including a lack of specific implementation guidance or framework to illustrate how effective multi-agency working could or should work. This systematic review will identify the key components of effective multi-agency working in services for CYP with SEND; and the most effective forms of partnership working in this setting. The review highlights interventions that lead to service improvements; and the conditions in the local area that support and encourage success. A protocol was written and registered with PROSPERO registration: CRD42022352194. Searches were conducted on several health, care, education, and applied social science databases from the year 2012 onwards. Citation chaining has been undertaken, as well as broader grey literature searching to enrich the findings. Qualitative, quantitative, mixed methods studies and systematic reviews were included, assessed independently, and critically appraised or assessed for risk of bias using appropriate tools based on study design. Data were extracted in NVivo software and checked by a more experienced researcher. A convergent segregated approach to synthesis and integration was used in which the quantitative and qualitative data were synthesised independently and then integrated using a joint display integration matrix. Findings demonstrate the key ingredients for effective partnership working for services delivering SEND. Interventions deemed effective are described, and lessons learned across interventions are summarised. 
The results will be of interest to educators and to health and social care professionals who provide services to those with SEND. They will also be used to develop policy recommendations for how UK healthcare, social care, and education services for CYP with SEND aged 0-25 can most effectively collaborate and achieve service improvement. The review will also identify gaps in the literature and recommend areas for future research. Funding for this review was provided by the Department for Education.

Keywords: collaboration, joint commissioning, service delivery, service improvement

Procedia PDF Downloads 78
261 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables, describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed through statistical analysis of the recorded ground motion data. In regions lacking sufficient recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of how the database was developed, ground motion relations are obtained by regression analysis. Developing a ground motion relation is a challenging process that inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to include in the model. Because these decisions are critical to the validity and applicability of the model, there is continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) for selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO is a penalized regression technique with a built-in capability for variable selection. Like ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection.
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables by their relative importance, thereby facilitating the selection of the set of variables to include in the model. Because the risk of overfitting grows as the ratio of the number of predictors to the number of recordings increases, selecting a compact set of variables is important when only a small number of recordings are available. In addition, identifying a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, in which the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using the LASSO, the relative importance of the candidate predictors was ranked. Regression models of increasing complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable was compared. The bias-variance trade-off in the context of model selection is discussed.
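The ranking idea described above can be sketched with the LASSO regularization path: a predictor is more important the earlier (at the stronger penalty) its coefficient becomes non-zero. The sketch below uses entirely synthetic data and hypothetical coefficient values; it is not the paper's model, only an illustration of the technique.

```python
import numpy as np
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a ground-motion dataset: ~600 recordings, three
# candidate predictors as named in the abstract (all values hypothetical).
rng = np.random.default_rng(0)
n = 600
M = rng.uniform(4.0, 8.0, n)          # moment magnitude
Rrup = rng.uniform(1.0, 200.0, n)     # rupture distance, km
Vs30 = rng.uniform(150.0, 1500.0, n)  # shear-wave velocity, m/s
X = np.column_stack([M, np.log(Rrup), np.log(Vs30)])
# Target loosely mimicking a log spectral acceleration (made-up coefficients).
y = 1.2 * M - 0.9 * np.log(Rrup) - 0.4 * np.log(Vs30) + rng.normal(0.0, 0.3, n)

# Standardize predictors so the L1 penalty treats them comparably.
Xs = StandardScaler().fit_transform(X)
alphas, coefs, _ = lasso_path(Xs, y - y.mean())  # coefs: (n_features, n_alphas)

# Rank predictors by the penalty strength at which each coefficient first
# becomes non-zero along the path: earlier entry = more important.
entry = [alphas[np.argmax(np.abs(coefs[j]) > 0)] for j in range(coefs.shape[0])]
names = ["magnitude", "ln(Rrup)", "ln(Vs30)"]
ranking = [name for _, name in sorted(zip(entry, names), reverse=True)]
print(ranking)
```

With the ranking in hand, nested models using the top one, two, or three predictors can be fitted and compared, mirroring the bias-variance comparison the abstract describes.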

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 305
260 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequality cannot reflect internal changes within a country in terms of movement between development groups, and standard inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the transition dynamics of urban inequality in Iran over the period 2006-2016 using the CIRD multidimensional index and the stochastic kernel density method. The study first selects 25 indicators in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital and public facilities, and applies a two-stage Principal Component Analysis methodology to create a composite inequality index. In the second stage, using a nonparametric approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis of the CIRD index of the Iranian province centers is tested, and the long-run equilibrium is shown on the basis of the ergodic density. At this stage, to support accurate regional policy, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are also examined for each of the five dimensions. According to the results of the first stage, in both 2006 and 2016 the highest level of development belongs to Tehran, while Zahedan is at the lowest level. The results show that the central cities of the country are at the highest level of development, owing to the effects of Tehran's knowledge spillover, while the country's peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces.
According to the results of the second stage, which examines the transition dynamics of regional inequality over 2006-2016, the distribution in the first year (2006) is not multimodal: the kernel density graph shows that the CIRD index of about 70% of the cities lies between -1.1 and -0.1, with the rest of the distribution lying above -0.1. The kernel distribution indicates a convergence process, with a single main peak at about -0.6 and a small secondary peak near 3. In the final year (2016), there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces lies around -0.4. This year clearly exhibits a twin-peak density pattern, indicating that the cities tend to cluster into groups with similar levels of development. Overall, according to the distribution dynamics results, the provinces of Iran follow a single-peak density pattern in 2006 and a twin-peak density pattern in 2016 at low and moderate levels of the inequality index, and the development index of the country diverges over the years 2006 to 2016.
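The single-peak versus twin-peak comparison above rests on kernel density estimation of the cross-sectional index distribution in each year. A minimal sketch, using synthetic values rather than the actual CIRD data, and with all means, spreads and counts hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-ins for a development index across ~31 province centers:
# one unimodal year and one bimodal (twin-peak) year.
rng = np.random.default_rng(2)
index_2006 = rng.normal(-0.6, 0.3, 31)                    # single-peaked year
index_2016 = np.concatenate([rng.normal(-0.9, 0.15, 16),  # twin-peaked year
                             rng.normal(-0.3, 0.15, 15)])

# Estimate each year's density on a common grid and compare the shapes.
grid = np.linspace(-1.5, 0.5, 200)
density_2006 = gaussian_kde(index_2006)(grid)
density_2016 = gaussian_kde(index_2016)(grid)
```

Plotting the two density curves (or counting their local maxima) makes the shift from a unimodal to a bimodal distribution directly visible, which is the diagnostic of divergence used in the abstract.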

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density

Procedia PDF Downloads 100
259 Construal Level Perceptions of Environmental vs. Social Sustainability in Online Fashion Shopping Environments

Authors: Barbara Behre, Verolien Cauberghe, Dieneke Van de Sompel

Abstract:

Sustainable consumption is on the rise, yet it has still not entered the mainstream in several industries, such as fashion. In online fashion contexts, sustainability cues have been used to signal the sustainable benefits of certain garments and promote sustainable consumption. These cues may focus on the ecological or the social dimension of sustainability. Since sustainability in general relates to distant, abstract benefits, the current study examines if and how psychological distance mediates the effects of exposure to different sustainability cues on consumption outcomes. Following the framework of Construal Level Theory of Psychological Distance, reduced psychological distance renders the construal level more concrete, which may influence attitudes and subsequent behavior in situations like fashion shopping. Most studies have investigated sustainability as a composite, failing to differentiate between its ecological and societal aspects. The few studies examining sustainability in more detail found that environmental sustainability tends to be perceived at an abstract construal level, whereas social sustainability is linked to concrete construal. However, the construal level associated with each sustainability dimension is likely not universally applicable across consumption domains and stages, which further suggests a need to clarify the relationships between the environmental and social sustainability dimensions and the construal level of psychological distance within fashion brand consumption. While psychological distance and construal level have been examined in the context of sustainability, these studies have yielded mixed results, possibly because psychological distance is context-dependent and induces construal differently in different situations.
Especially in a hedonic consumption context like online fashion shopping, the visual processing of information could determine behavioural outcomes linked to situational construal. Given the influence of the mode of processing on psychological distance and construal level, the current study also examines the moderating role of verbal versus non-verbal presentation of the sustainability cues. In a 3 (environmental sustainability vs. social sustainability vs. control) x 2 (non-verbal message vs. verbal message) between-subjects experiment, the present study thus examines how consumers evaluate sustainable brands in online shopping contexts in terms of psychological distance and construal level, as well as the impact on brand attitudes and buying intentions. The results, from 246 participants, verify the differential impact of the sustainability dimensions on fashion brand purchase intent, mediated by construal level and perceived psychological distance. The ecological sustainability cue is perceived as more concrete, which might be explained by consumer bias induced by the predominance of pro-environmental sustainability messages. The verbal versus non-verbal presentation of the sustainability cue had no significant influence on distance perceptions, construal level, or buying intentions. This study offers valuable contributions to the sustainable consumption literature, as well as a theoretical basis for construal-level framing in sustainable fashion branding.

Keywords: construal level theory, environmental vs social sustainability, online fashion shopping, sustainable fashion

Procedia PDF Downloads 81
258 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just as reading, writing and arithmetic are today. In this paper, doctoral research in progress is presented, investigating the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which gives children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object-orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time gaining a first view of object-oriented programming syntax. The most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that familiarizes them with the basic principles of object-oriented programming and computational thinking, even though no explicit reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes.
The interventions were designed around the thematic units of the physical science curriculum, as part of the learning activities of the class, and PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable character, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
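The object-oriented concepts the abstract names (classes, objects, attributes) can be illustrated with a minimal, generic sketch. This is not PhysGramming code, and the class and attribute names are invented for illustration only:

```python
# A class bundles attributes (data) with behavior; each object (instance)
# carries its own attribute values -- the concepts children meet implicitly
# while building game elements.
class Ball:
    """A game element a child might define in a simple digital game."""

    def __init__(self, color, radius):
        self.color = color    # attribute: the ball's color
        self.radius = radius  # attribute: the ball's size

    def area(self):
        # Behavior derived from an attribute.
        return 3.14159 * self.radius ** 2

red_ball = Ball("red", 5)    # one object of the class Ball
blue_ball = Ball("blue", 3)  # another object with its own attribute values
print(red_ball.color, blue_ball.radius)
```

In a visual/text hybrid environment, the same structure would typically be assembled from blocks, with the generated text revealing the underlying syntax.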

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 100
257 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the Hydrogen Economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. This in particular means the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as commercial hydrogen production technology, new analytic approaches have to be found. Acoustic emission (AE) offers the possibility to analyse the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data coming from a PEM water electrolyser, for, to the best of our knowledge, the first time in literature. Different experiments are carried out. Each experiment is designed so that only specific physical processes occur and AE solely related to one process can be measured. Therefore, a range of experimental conditions is used to induce different flow regimes within flow channels and GDL. The resulting AE data is first separated into different events, which are defined by exceeding the noise threshold. Each acoustic event consists of a number of consequent peaks and ends when the wave diminishes under the noise threshold. For all these acoustic events the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average intensity of a peak and time till the maximum is reached. Each event is then expressed as a vector containing the normalized values for all criteria. Principal Component Analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix. This can be used as an easy way of determining which criteria convey the most information on the acoustic data. 
In the following, the data is ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. The AE data produced in this way can be used to train a self-learning algorithm and to develop an analytical tool to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
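The event-vector ranking described above can be sketched directly: a PCA built from the eigendecomposition of the covariance matrix of normalized event features. The feature values below are synthetic stand-ins for the six extracted attributes, not measured AE data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acoustic-event feature matrix: one row per event, one column per
# criterion (max peak amplitude, duration, number of peaks, peaks before the
# maximum, average peak intensity, time to maximum). Values are synthetic.
events = rng.normal(size=(200, 6)) * np.array([5.0, 2.0, 1.0, 0.5, 3.0, 0.2])

# Normalize each criterion, as in the event-vector construction described above.
X = (events - events.mean(axis=0)) / events.std(axis=0)

# PCA: eigendecomposition of the covariance matrix; sorting the eigenvalues in
# descending order ranks the principal axes by the variance they explain.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project events onto the two most informative axes for 2-D inspection,
# mirroring the two-dimensional criteria space used to separate flow regimes.
scores = X @ eigvecs[:, :2]
explained = eigvals / eigvals.sum()
print(scores.shape)
```

Events from different experiments would then be plotted in this reduced space to look for regions occupied by only one physical process.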

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 131
256 Technology and the Need for Integration in Public Education

Authors: Eric Morettin

Abstract:

Cybersecurity and digital literacy are pressing issues among Canadian citizens, yet formal education does not provide today’s students with the knowledge and skills needed to adapt to these challenging issues within the physical and digital labor market. Canada’s current education systems do not highlight the importance of these respective fields, aside from using technology for learning management systems and alternative methods of assignment completion. Educators are not properly trained to integrate technology into the compulsory courses within public education to better prepare their learners in these topics and for Canada’s digital economy. ICTC addresses these gaps in education and training through cross-Canadian educational programming in digital literacy and competency, cybersecurity, and coding, bridged with Canada’s provincially regulated K-12 curriculum guidelines. After analyzing Canada’s provincial education systems, it is apparent that there are gaps in learning related to technology, as well as inconsistent educational outcomes that do not adequately reflect the current Canadian and global economies. Presently, only New Brunswick, Nova Scotia, Ontario, and British Columbia offer curriculum guidelines for cybersecurity, computer programming, and digital literacy. The remaining provinces do not address these skills in their curriculum guidelines. Moreover, certain courses in some provinces have not been updated since the 1990s. The three territories take curriculum strands from other provinces and use them as their foundation in education: Yukon uses the British Columbia curriculum in its entirety, while the Northwest Territories and Nunavut each use a hybrid of the Alberta and Saskatchewan curricula as their foundation of learning.
Education that is provincially regulated does not allow for consistency across the country’s educational outcomes and what Canada’s students will achieve, especially when curriculum outcomes have not been updated to reflect present-day society. Through this work, ICTC has aligned Canada’s provincially regulated curricula and created opportunities for focused education in the realm of technology to better serve Canada’s present learners and teachers, while addressing inequalities and applicability within curriculum strands and outcomes across the country. As a result, lessons, units, and formal assessment strategies have been created to benefit students and teachers in this interdisciplinary, cross-curricular practice, as well as to meet their compulsory education requirements and develop skills and literacy in cyber education. Teachers can access these lessons and units through ICTC’s website, as well as receive professional development regarding the assessment and implementation of these offerings from ICTC’s education coordinators, whose combined experience exceeds 50 years of teaching in public, private, international, and Indigenous schools. We encourage educators to take advantage of this opportunity, which will benefit students and teachers alike and will bridge the learning and curriculum gaps in Canadian education to better reflect the ever-changing public, social, and career landscape that all citizens are a part of. Students are the future, and we at ICTC strive to ensure their futures are bright and prosperous.

Keywords: cybersecurity, education, curriculum, teachers

Procedia PDF Downloads 55
255 Mapping the Urban Catalytic Trajectory for 'Convention and Exhibition' Projects: A Case of India International Convention and Expo Centre, New Delhi

Authors: Bhavana Gulaty, Arshia Chaudhri

Abstract:

Great civic projects contribute integrally to a city, and every city undergoes a recurring cycle of urban transformations and regeneration through their insertion. The M.I.C.E. (Meetings, Incentives, Conventions and Exhibitions) industry gives rise to one category of such catalytic civic projects. Through a specific focus on M.I.C.E. destinations, this paper illustrates the multifarious dimensions along which urban catalysts impact the city. S.P.U.R. (Seed. Profile. Urbane. Reflections), the theoretical framework of this paper, aims to unearth these dimensions in the realm of the COEX (Convention & Exhibition) biosphere. The ‘COEX Biosphere’ is the filter of such catalysts, these being ecosystems unto themselves. Like a ripple in water, the impact of these strategic interventions focusing on art, culture, trade, and promotion expands right from the trigger and the immediate context to the region, and subsequently impacts the global scale. These ripples are known to bring about significant economic, social, political, and network changes. The COEX inventory in the Asian context has one such prominent addition: the proposed India International Convention and Exhibition Centre (IICC) at New Delhi. It is currently envisioned to be the largest such facility in Asia and would position India on the global M.I.C.E. map. With the first phase of the project scheduled to open for use at the end of 2019, this flagship project of the Government of India is projected to cater to a peak daily footfall of 320,000 visitors and estimated to generate 500,000 jobs. While the economic benefits are yet to manifest in real time and ‘Good design is good business’ holds true, for the urban transformation to be meaningful, the benefits have to go beyond just a balance sheet for the city’s exchequer. This aspect has remained relatively unexplored in research on these developments. The methodology for investigation comprises two steps.
The first is establishing an inventory of the global success stories and associated benefits of COEX projects over the past decade. The rationale for capping the timeframe is the significant paradigm shift observed in their recent conceptualization; for instance, the ‘Innovation Districts’ conceptualised in the city of Albuquerque, which converge with the global economy. The second step entails a comparative benchmarking of the transformations projected for IICC through a toolkit of parameters. This is posited to yield a matrix that can form the test bed for mapping the catalytic trajectory of projects in the pipeline globally. As a ready reckoner, it purports to be a catalyst to substantiate decision making at the planning stage itself for future projects in similar contexts.

Keywords: catalysts, COEX, M.I.C.E., urban transformations

Procedia PDF Downloads 132
254 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach

Authors: Niyongabo Elyse

Abstract:

Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented, expressing processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes.
The study also introduces a multistage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective, multi-mode, resource-constrained construction project scheduling model with fuzzy activity durations, a multistage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
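As a minimal illustration of the interval-valued processing times used in the fuzzy scheduling model above, the makespan of a serial production sequence can itself be computed as an interval. All task names and durations here are hypothetical, not data from the study.

```python
# Sketch: interval-valued processing times for prefabricated-component tasks.
# Each duration is an (earliest, latest) interval in hours; all values are
# illustrative assumptions, not figures from the case studies.
tasks = {
    "mold setup":       (2.0, 3.0),
    "concrete casting": (4.0, 5.5),
    "curing":           (10.0, 14.0),
    "demolding":        (1.0, 1.5),
}

def serial_makespan(durations):
    """Makespan of a serial task sequence under interval arithmetic: lower
    bounds sum to the optimistic makespan, upper bounds to the pessimistic
    one, giving the range a scheduler must plan for."""
    lo = sum(d[0] for d in durations.values())
    hi = sum(d[1] for d in durations.values())
    return lo, hi

lo, hi = serial_makespan(tasks)
print(f"makespan interval: [{lo}, {hi}] hours")  # → [17.0, 24.0] hours
```

A metaheuristic such as the CCEA would then search over task orderings and resource assignments to shrink this interval's upper bound.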

Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling

Procedia PDF Downloads 30
253 Leadership Education for Law Enforcement Mid-Level Managers: The Mediating Role of Effectiveness of Training on Transformational and Authentic Leadership Traits

Authors: Kevin Baxter, Ron Grove, James Pitney, John Harrison, Ozlem Gumus

Abstract:

The purpose of this research is to determine the mediating effect of the effectiveness of the training provided by Northwestern University’s School of Police Staff and Command (SPSC) on the ability of law enforcement mid-level managers to learn transformational and authentic leadership traits. This study will also evaluate the leadership styles of course graduates compared to non-attendees, using a static group comparison design. The Louisiana State Police pay approximately $40,000 in salary, tuition, housing, and meals for each state police lieutenant attending the 10-week program of the SPSC. This school lists the development of transformational leaders as an area of increasing emphasis. Additionally, the SPSC curriculum addresses all four components of authentic leadership: self-awareness, transparency, ethical/moral perspective, and balanced processing. Upon return to law enforcement in roles of mid-level management, there are questions as to whether students revert to an “autocratic” leadership style. Insufficient evidence exists to support claims for the effectiveness of management training or leadership development. Though it is widely recognized that transformational styles are beneficial to law enforcement, there is little evidence that police leadership styles are changing. Police organizations continue to hold to a more transactional style (i.e., most senior police leaders remain autocrats). Additionally, research on the application of transformational, transactional, and laissez-faire leadership in police organizations is minimal. The population of the study is law enforcement mid-level managers from various states within the United States who completed leadership training presented by the SPSC. The sample will be composed of 66 active law enforcement mid-level managers (lieutenants and captains) who have graduated from SPSC and 65 active law enforcement mid-level managers (lieutenants and captains) who have not attended SPSC.
Participants will answer demographic questions, the Multifactor Leadership Questionnaire, the Authentic Leadership Questionnaire, and the Kirkpatrick Hybrid Evaluation Survey. Descriptive statistics, group comparison, a one-way MANCOVA, and the Kirkpatrick Evaluation Model survey will be used to determine training effectiveness at the four levels of reaction, learning, behavior, and results. Independent variables are SPSC graduates (two groups: upper and lower) and non-SPSC attendees; dependent variables are transformational and authentic leadership scores. SPSC graduates are expected to have higher MLQ scores for transformational leadership traits and higher ALQ scores for authentic leadership traits than SPSC non-attendees. We also expect the graduates to rate the efficacy of SPSC leadership training as high. This study will validate (or invalidate) the benefits, costs, and resources required for leadership development from a nationally recognized police leadership program, and it will also help fill the gap in the literature between law enforcement professional development and transformational and authentic leadership styles.
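The static group comparison at the heart of this design reduces to comparing score distributions between the two groups; a minimal sketch with synthetic scores (the group sizes match the abstract, but all score values are illustrative, not study data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MLQ transformational-leadership scores (0-4 scale) for the two
# static comparison groups; these values are synthetic, not study results.
graduates = rng.normal(loc=3.1, scale=0.4, size=66).clip(0, 4)
non_attendees = rng.normal(loc=2.7, scale=0.4, size=65).clip(0, 4)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation,
    a common effect-size companion to group-comparison tests."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d = cohens_d(graduates, non_attendees)
print(f"mean difference: {graduates.mean() - non_attendees.mean():.2f}, Cohen's d: {d:.2f}")
```

In the actual study this descriptive comparison would be supplemented by the one-way MANCOVA across the MLQ and ALQ score vectors.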

Keywords: training effectiveness, transformational leadership, authentic leadership, law enforcement mid-level manager

Procedia PDF Downloads 84
252 (Re)Processing of ND-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches

Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm

Abstract:

Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were able first to electrochemically leach the end-of-life NdFeB magnet and then to electrodeposit Nd–Fe using a 1-ethyl-3-methyl imidazolium dicyanamide ([EMIM][DCA]) ionic liquid-based electrolyte. We observed that Nd(III) could not be reduced independently. However, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations. This is because the binding energies of metallic Nd (Nd(0)) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step closer to efficient recycling of rare earths in metallic form at mild temperatures, thus providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept of recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new types of magnet production.
In the frame of physical reprocessing, we have successfully synthesized new magnets out of hydrogen-recycled (HDDR) stock with the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity of Hc = 1060 kA/m, which was boosted to 1160 kA/m after the post-PECS thermal treatment. The Br and Hc were optimized further; increasing the applied pressure to 100–150 MPa resulted in Br = 1.01 T. We showed that with fine-tuning of the PECS and post-annealing it is possible to revitalize Nd-Fe-B end-of-life magnets. By applying advanced TEM, i.e., atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.

Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing

Procedia PDF Downloads 135
251 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
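The accuracy figures reported above (mean error and the share of errors below a threshold) reduce to simple distance statistics over a test set; a minimal sketch with synthetic positions (all coordinates and the error scale are hypothetical, not outputs of the proposed system):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic ground-truth positions (meters) and hypothetical model predictions;
# neither comes from the fingerprinting system described in the abstract.
true_xy = rng.uniform(0, 50, size=(500, 2))
pred_xy = true_xy + rng.normal(scale=0.3, size=(500, 2))

# Euclidean positioning error for each test point.
errors = np.linalg.norm(pred_xy - true_xy, axis=1)

# The two summary statistics typically quoted for localization systems:
# the mean error and a high percentile of the error distribution.
mean_err = errors.mean()
p90_err = np.percentile(errors, 90)
print(f"mean error: {mean_err:.2f} m, 90th percentile: {p90_err:.2f} m")
```

Comparing these two numbers across radio-map construction methods is how claims like "average error below 0.39 m" are evaluated.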

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 13
250 Ensemble of Misplacement, Juxtaposing Feminine Identity in Time and Space: An Analysis of Works of Modern Iranian Female Photographers

Authors: Delaram Hosseinioun

Abstract:

In their collections, Shirin Neshat, Mitra Tabrizian, Gohar Dashti and Newsha Tavakolian adopt a hybrid form of narrative to confront the restrictions imposed on women in hegemonic public and private spaces. Focusing on motifs such as social marginalisation, the crisis of belonging, and the lack of agency for women, the artists depict the regression of women’s rights in their respective generations. Based on the ideas of Mikhail Bakhtin, namely his concept of polyphony or the plurality of contradictory voices, the views of Judith Butler on giving an account of oneself, and Henri Lefebvre’s theories on social space, this study illustrates the artists’ concept of identity in crisis through time and space. The research explores how the artists took their art as a novel dimension to depict and confront the hardships imposed on Iranian women. Henri Lefebvre makes a distinction between the complex social structures through which individuals situate, perceive and represent themselves. By adding Bakhtin’s polyphonic view to Lefebvre’s concepts of perceived and lived spaces, the study explores the sense of social fragmentation in the works of Dashti and Tavakolian. One argument is that, as representatives of the contemporary generation of female artists who have spent their lives in Iran and faced a higher degree of restrictions, their hyperbolic and theatrical styles stand as a symbolic act of confrontation against the restrictive socio-cultural norms imposed on women. Further, the research explores the possibility of reclaiming one’s voice and sense of agency through art, corresponding with the Bakhtinian sense of polyphony and Butler’s concept of giving an account of oneself. The works of Neshat and Tabrizian, as representatives of the previous generation who faced exile and diaspora, encompass a higher degree of misplacement, violence and decay of women’s presence. In their works, the women’s body encompasses Lefebvre’s dismantled temporal and spatial setting.
Notably, the ongoing social constraints and gender-based dogma imposed on women frame some of the concurrent motifs among the selected collections of the four artists. By applying an interdisciplinary lens and integrating interviews conducted with the artists, the study illustrates how the artists seek a transcultural account for themselves and for the women of their generations. Further, the selected collections manifest the urgency of an authentic and liberated voice and setting for women, resonating with the concurrent Women, Life, Freedom movement in Iran.

Keywords: Persian modern female photographers, transcultural studies, Shirin Neshat, Mitra Tabrizian, Gohar Dashti, Newsha Tavakolian, Butler, Bakhtin, Lefebvre

Procedia PDF Downloads 50
249 Organic Matter Distribution in Bazhenov Source Rock: Insights from Sequential Extraction and Molecular Geochemistry

Authors: Margarita S. Tikhonova, Alireza Baniasad, Anton G. Kalmykov, Georgy A. Kalmykov, Ralf Littke

Abstract:

There is a high complexity in the pore structure of organic-rich rocks, caused by the combination of inter-particle porosity from inorganic mineral matter and ultrafine intra-particle porosity from both organic matter and clay minerals. Fluids are retained in that pore space, but there are major uncertainties in how and where the fluids are stored and to what extent they are accessible or trapped in 'closed' pores. A large degree of tortuosity may lead to fractionation of organic matter, so that lighter and more flexible compounds diffuse toward the reservoir whereas more complex compounds may be locked in place. Additionally, part of the hydrocarbons could be bound to solid organic matter (kerogen) and the mineral matrix during expulsion and migration. Larger compounds can occupy thin channels, so that clogging or oil and gas entrapment will occur. Sequential extraction applying different solvents is a powerful tool to provide more information about the distribution of trapped organic matter. The Upper Jurassic to Lower Cretaceous Bazhenov shale is one of the most petroliferous source rocks, extending across West Siberia, Russia. Given its variable mineral composition, pore space distribution and thermal maturation, there are high uncertainties in the distribution and composition of organic matter in this formation. In order to address this issue, the geological and geochemical properties of 30 samples, including mineral composition (XRD and XRF), structure and texture (thin-section microscopy), organic matter content, type and thermal maturity (Rock-Eval), as well as molecular composition (GC-FID and GC-MS) of the different materials extracted during sequential extraction, were considered. Sequential extraction was performed with a Soxhlet apparatus using different solvents, i.e., n-hexane, chloroform and ethanol-benzene (1:1 v:v), first on core plugs and later on pulverized materials.
The results indicate that the studied samples are mainly composed of type II kerogen, with TOC contents varying from 5 to 25%. The thermal maturity ranges from immature to the late oil window. Whereas clay content decreases with increasing maturity, the amount of silica increases in the studied samples. According to the molecular geochemistry, hydrocarbons stored in open and closed pore space reveal different geochemical fingerprints. The results improve our understanding of hydrocarbon expulsion and migration in the organic-rich Bazhenov shale and therefore allow a better estimation of the hydrocarbon potential of this formation.

Keywords: Bazhenov formation, bitumen, molecular geochemistry, sequential extraction

Procedia PDF Downloads 145
248 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach

Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere

Abstract:

Considering environmental issues, industrial interest in applying sustainable materials is increasing. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced Polylactic Acid (PLA) is gaining popularity. However, these thermoplastic-based composites show issues in adhesive bonding. This research focuses on analyzing the effects of the fibers near the bonding interphase. The research uses injection-molded plate structures. A first important parameter concerns the fiber volume fraction, which directly affects the adhesion characteristics of the surface. This parameter is varied between 0 (pure PLA) and 30%. Next to fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection-molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability, and thus adhesion. Therefore, this research work considers three different roughness conditions; the different mechanical treatments yield roughness values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy cured at 23 °C for 48 hours. In order to assure a dedicated parametric study, simple and reproducible adhesive bonds are manufactured. Both single lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double lap tests are considered, since these are well documented and quite straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog-bone tensile testing is applied to retrieve the stiffness and strength characteristics of the substrates (with different fiber volume fractions).
Numerical modelling (non-linear FEA) relates the considered parameters to the stiffness and strength of the different joints obtained through the abovementioned tests. Ongoing work deals with developing dedicated numerical models incorporating the different considered adhesion parameters. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already prominent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed. Given the adhesive type (and viscosity), it is noticed that an increase in surface energy is proportional to the surface roughness, to some extent. This becomes more pronounced as fiber volume fraction increases. Secondly, ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line.
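For reference, the ultimate bond strength of a single-lap coupon is commonly reported as the failure load divided by the bonded overlap area. Using the coupon geometry stated above (25 mm width, 10 mm overlap) and a purely hypothetical failure load:

```python
# Apparent lap-shear strength of a single-lap joint: failure load over the
# bonded overlap area. The geometry matches the coupons described above;
# the failure load is a hypothetical illustration, not a measured value.
width_mm = 25.0          # substrate width
overlap_mm = 10.0        # overlap length
failure_load_n = 1500.0  # hypothetical ultimate load in newtons

area_mm2 = width_mm * overlap_mm
strength_mpa = failure_load_n / area_mm2  # N/mm^2 is numerically equal to MPa

print(f"apparent shear strength: {strength_mpa:.1f} MPa")  # → 6.0 MPa
```

This apparent (average) strength is what single-lap tests compare across fiber volume fractions; the FEA models resolve the non-uniform stress distribution that the average conceals.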

Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint

Procedia PDF Downloads 104
247 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices like Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis.
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within Volumetric Soil Moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results. While the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
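The vegetation indices named above are simple band combinations; a minimal sketch computing NDVI and EVI from MODIS surface-reflectance bands using their standard definitions (the sample reflectance values below are hypothetical, not data from the study area):

```python
import numpy as np

# Hypothetical MODIS surface reflectances (dimensionless, 0-1) for three
# pixels; these are illustrative values, not observations from Quebec parcels.
red  = np.array([0.08, 0.12, 0.30])
nir  = np.array([0.45, 0.40, 0.35])
blue = np.array([0.04, 0.06, 0.15])

# NDVI: normalized difference of near-infrared and red reflectance.
ndvi = (nir - red) / (nir + red)

# EVI with the standard MODIS coefficients (G=2.5, C1=6, C2=7.5, L=1),
# which reduce atmospheric and soil-background effects relative to NDVI.
evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

print(np.round(ndvi, 3), np.round(evi, 3))
```

These per-pixel index series, stacked over time, form the input features that the LSTM consumes alongside the modeled soil moisture.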

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 14
246 Recent Advances in the Valorization of Goat Milk: Nutritional Properties and Production Sustainability

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Goat dairy products are gaining popularity worldwide. In developing countries, but also in many marginal regions of the Mediterranean area, goats represent a great part of the economy and ensure food security. In fact, these small ruminants are able to efficiently convert poor weedy plants and small trees into traditional products of high nutritional quality, showing great resilience to different climatic and environmental conditions. In developed countries, goat milk is appreciated for the presence of health-promoting bioactive compounds such as conjugated linoleic acids, oligosaccharides, sphingolipids and polyamines. This paper focuses on recent advances in the literature on the nutritional properties of goat milk and on innovative techniques to improve its quality so that it may become a promising functional food. The environmental sustainability of different methodologies of production has also been examined. Goat milk is valued today as a food of high nutritional value and functional properties, as well as a small environmental footprint. It is widely consumed in many countries due to its high nutritional value, lower allergenic potential, and better digestibility compared to bovine milk, which makes this product suitable for infants, the elderly or sensitive patients. The main differences in chemical composition between cow and goat milk lie in the fat globules, which in goat milk are smaller, and in the fatty acids, which have a shorter chain length, while protein, fat, and lactose concentrations are comparable. Milk nutritional properties have been demonstrated to be strongly influenced by animal diet, genotype, and welfare, but also by season and production systems. Furthermore, there is a growing interest in the dairy industry in goat milk for its relatively high concentration of prebiotics and good amount of probiotics, which have recently gained importance for their therapeutic potential.
Therefore, goat milk is studied as a promising matrix for developing innovative functional foods. In addition to its economic and nutritional value, goat milk is considered a sustainable product for its small environmental footprint: goats require relatively little water and land, and fewer medical treatments, compared to cows, characteristics that make goat milk production naturally suited to organic farming. Organic goat milk production has become more and more interesting both for farmers and consumers, as it can answer several concerns such as environmental protection, animal welfare and the economic sustainability of rural populations living in marginal lands. This evidence makes goat milk an ancient food with novel properties and advantages to be valorized and exploited.

Keywords: goat milk, nutritional quality, bioactive compounds, sustainable production, animal welfare

Procedia PDF Downloads 123
245 Case Study of Migrants, Cultures and Environmental Crisis

Authors: Christina Y. P. Ting

Abstract:

Migration is a global phenomenon, with movements of migrants from developed and developing countries to host societies. Migrants have changed the host countries' demography: their population structure and also their ethnic cultural diversity. Acculturation of migrants, in terms of their adoption of the host culture, is seen as important to ensure that they ‘fit into’ their adopted country so as to participate in everyday public life. However, this research found that the increase in China-born migrants' post-migration consumption, and its impact on Australia's environment, reflected not only their adoption of elements of the host culture but also their retention of aspects of Chinese culture, indicating that bi-culturalism was in operation. This research, based on face-to-face interviews with 61 China-born migrants in the suburb of Box Hill, Melbourne, investigated the pattern of change in the migrants' consumption upon their settlement in Australia. Using an ecological footprint calculator, their post-migration footprints were found to be larger than their pre-migration footprints. The uniquely derived CALD (Culturally and Linguistically Diverse) Index was used to measure individuals' strength of connectedness to ethnic culture. Multivariate analysis was carried out to understand which independent factors influencing consumption best explain the change in footprint (the difference between pre- and post-migration footprints, as the dependent factor). These independent factors ranged from socio-economic and demographic characteristics to the cultural context, that is, the CALD Index and indicators of acculturation.
The major findings from the analysis were: Chinese culture (as measured by the CALD Index) and indicators of acculturation such as length of residency and using English in communications, besides traditional factors such as age, income and education level, made significant contributions to the large increase in the China-born group's post-migration consumption level. This paper, as part of a larger study, found that younger migrants' large change in footprint was related to high income and low level of education. This group of migrants also practiced bi-cultural consumption, retaining the ethnic culture while adopting the host culture. These findings importantly highlight that, for a host society to tackle environmental crisis, governments need not only to understand the relationship between age and consumption behaviour, but also to understand and embrace the migrants' ethnic cultures, which may act as bridges and/or fences in relationships. In conclusion, for governments to deal with national issues such as environmental crisis within a culturally diverse population, an understanding of age and of the aspects of ethnic culture that may act as bridges and fences is necessary. This understanding can aid in putting in place policies that enable the co-existence of a hybrid of the ethnic and host cultures in order to create and maintain a harmonious and secure living environment for population groups.

Keywords: bicultural consumer, CALD index, consumption, ethnic culture, migrants

Procedia PDF Downloads 217
244 An Unusual Case of Wrist Pain: Idiopathic Avascular Necrosis of the Scaphoid, Preiser’s Disease

Authors: Adae Amoako, Daniel Montero, Peter Murray, George Pujalte

Abstract:

We present a case of a 42-year-old, right-handed Caucasian male who presented to a medical orthopedics clinic with left wrist pain. The patient indicated that the pain started two months prior to the visit. He could only remember helping a friend move furniture prior to the onset of pain. Examination of the left wrist showed limited extension compared to the right. There was clicking with flexion and extension of the wrist on the dorsal aspect. Mild tenderness was noted over the distal radioulnar joint. There was pain on provoked ulnar and radial deviation. Initial 4-view x-rays of the left wrist showed mild radiocarpal and scapho-trapezium-trapezoid (ST-T) osteoarthritis, with subchondral cysts seen in the lunate and scaphoid, with no obvious fractures. The patient was initially put in a wrist brace and diclofenac topical gel was prescribed for pain control, as the patient could not take non-steroidal anti-inflammatory drugs (NSAIDs) due to gastritis. Despite diclofenac topical gel use and bracing, symptoms remained, and a steroid injection with 1 mL of lidocaine with 10 mg of triamcinolone acetonide was performed under fluoroscopy. He obtained some relief but after 3 months, the injection had to be repeated. On 2-month follow-up after the initial evaluation, symptoms persisted. Magnetic resonance imaging (MRI) was obtained, which showed an abnormal T1 hypodense signal involving the proximal pole of the scaphoid and articular collapse of the proximal scaphoid, with marked irregularity of the overlying cartilage, suggesting a remote injury, findings consistent with avascular necrosis of the proximal pole of the scaphoid. A month after that, the patient had the proximal pole of the left scaphoid debrided, and an intercompartmental supraretinacular artery vascularized pedicle bone graft reconstruction of the proximal pole of the left scaphoid was performed. A non-vascularized autograft from the left radius was also applied.
He was put in a thumb spica cast with the interphalangeal joint free for 6 weeks. On 6-week follow-up after surgery, the patient was healing well and could make a composite fist with his left hand. The diagnosis of Preiser's disease is primarily based on radiological findings. Because necrosis occurs over a period of time, most cases of avascular necrosis (AVN) are diagnosed at late stages of the disease. There appear to be no specific guidelines on the management of AVN of the scaphoid. In the past, immobilization and arthroscopic debridement have been used. Radial osteotomy has also been tried. Vascularized bone grafts have also been used to treat Preiser's disease. In our patient, we used three of these treatment modalities, starting with conservative management with topical NSAIDs and immobilization, then debridement with vascularized bone grafts.

Keywords: wrist pain, avascular necrosis of the scaphoid, Preiser’s disease, vascularized bone grafts

Procedia PDF Downloads 274
243 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations at airports facing constantly increasing air traffic has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice-versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in such a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements, in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections.
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
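To illustrate the kind of routing and conflict-detection logic such an optimization tool would contain, the following is a minimal sketch: Dijkstra shortest-path routing over a toy taxiway graph, plus a naive node-occupancy conflict check. The taxiway layout, taxi times, and function names are invented for illustration and are not the system described above.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a taxiway graph {node: [(neighbour, taxi_time), ...]}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return float("inf"), []

def first_conflict(route_a, route_b):
    """Crude conflict check: first step at which two tows occupy one node."""
    for step, (a, b) in enumerate(zip(route_a, route_b)):
        if a == b:
            return step, a
    return None

# Hypothetical layout: two gates feeding one taxiway chain to the runway.
taxiways = {
    "gate1": [("T1", 2)], "gate2": [("T1", 3)],
    "T1": [("T2", 4)], "T2": [("runway", 1)],
}
cost1, route1 = shortest_path(taxiways, "gate1", "runway")
cost2, route2 = shortest_path(taxiways, "gate2", "runway")
conflict = first_conflict(route1, route2)
print(cost1, route1, conflict)
```

In a centralized strategy, one solver would run such computations for every movement at once; in the multi-agent variant, each tow truck would compute its own route and negotiate conflicts like the one detected here.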

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 105
242 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger's classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented with the critical speculations of the post-war era, took shape. The new era we are now living in, referred to as hypermodernity by researchers in fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity's supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts.
These conflicts between tech companies, stakeholders, and users, with implications in politics, work, education, and production, as apparent in the cases of the Amazon workers' strikes, Donald Trump's 2016 campaign, and the Facebook and Microsoft data scandals, are often non-transparent to the wider public's eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 129
241 Diagenesis of the Permian Ecca Sandstones and Mudstones, in the Eastern Cape Province, South Africa: Implications for the Shale Gas Potential of the Karoo Basin

Authors: Temitope L. Baiyegunhi, Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava

Abstract:

Diagenesis is the most important factor affecting reservoir properties. Although published data provide a vast amount of information on the geology, sedimentology and lithostratigraphy of the Ecca Group in the Karoo Basin of South Africa, little is known of the diagenesis of the potentially feasible shales and sandstones of the Ecca Group. The study aims to provide a general account of the diagenesis of the sandstones and mudstones of the Ecca Group. Twenty-five diagenetic textures and structures are identified and grouped into three regimes or stages: eogenesis, mesogenesis and telogenesis. Clay minerals are the most common cementing materials in the Ecca sandstones and mudstones. Smectite, kaolinite and illite are the major clay minerals that act as pore-lining rims and pore-filling cement. Most of the clay minerals and detrital grains were seriously attacked and replaced by calcite. Calcite precipitated locally in pore spaces and partly or completely replaced feldspar and quartz grains, mostly at their margins. Precipitation of cements and formation of pyrite and authigenic minerals, as well as slight lithification, occurred during eogenesis. This regime was followed by mesogenesis, which brought about an increase in the tightness of grain packing, loss of pore spaces and thinning of beds due to the weight of overlying sediments and selective dissolution of framework grains. Compaction, mineral overgrowths, mineral replacement, clay-mineral authigenesis, deformation and pressure solution structures occurred during mesogenesis. During telogenesis, the rocks were uplifted, weathered and unroofed by erosion, which resulted in additional grain fracturing, decementation and oxidation of iron-rich volcanic fragments and ferromagnesian minerals. The rocks of the Ecca Group were subjected to moderate to intense mechanical and chemical compaction during their progressive burial.
The observed pore types are intergranular pores, matrix micropores, secondary intragranular pores, dissolution pores and fracture pores. The presence of fracture and dissolution pores tends to enhance reservoir quality. However, the isolated nature of the pores makes them unfavourable producers of hydrocarbons, which at best would require stimulation. Understanding the distribution of diagenetic processes in these rocks in space and time will allow the development of predictive models of their quality, which may contribute to the reduction of risks involved in their exploration.

Keywords: diagenesis, reservoir quality, Ecca Group, Karoo Supergroup

Procedia PDF Downloads 123
240 Toward Understanding the Glucocorticoid Receptor Network in Cancer

Authors: Swati Srivastava, Mattia Lauriola, Yuval Gilad, Adi Kimchi, Yosef Yarden

Abstract:

The glucocorticoid receptor (GR) has been proposed to play important but incompletely understood roles in cancer. Glucocorticoids (GCs) are widely used as co-medication in various carcinomas, due to their ability to reduce the toxicity of chemotherapy. Furthermore, GR antagonism has proven to be a strategy to treat triple-negative breast cancer and castration-resistant prostate cancer. These observations suggest differential GR involvement in cancer subtypes. The goal of our study has been to elaborate the current understanding of GR signaling in tumor progression and metastasis. Our study involves two cellular models, non-tumorigenic breast epithelial cells (MCF10A) and Ewing sarcoma cells (CHLA9). In our breast cell model, the results indicated that the GR agonist dexamethasone inhibits EGF-induced mammary cell migration, and this effect was blocked when cells were treated with a GR antagonist, namely RU486. Microarray analysis of gene expression revealed that the mechanism underlying this inhibition involves dexamethasone-mediated repression of well-known activators of EGFR signaling, along with enhancement of several of EGFR's negative feedback loops. Because GR acts primarily through glucocorticoid response elements (GREs), including composite elements, or via a tethering mechanism, our next aim has been to find the transcription factors (TFs) that can interact with GR in MCF10A cells. The TF-binding motifs overrepresented at the promoters of dexamethasone-regulated genes were predicted using bioinformatics. To validate the predictions, we performed high-throughput Protein Complementation Assays (PCA). For this, we utilized the Gaussia Luciferase PCA strategy, which enabled analysis of protein-protein interactions between GR and the predicted TFs of mammary cells.
A library comprising both nuclear receptors (estrogen receptor, mineralocorticoid receptor, GR) and TFs was fused to fragments of GLuc, namely GLuc(1)-X, X-GLuc(1), and X-GLuc(2), where GLuc(1) and GLuc(2) correspond to the N-terminal and C-terminal fragments of the luciferase gene. The resulting library was screened, in human embryonic kidney 293T (HEK293T) cells, for all possible interactions between nuclear receptors and TFs. By screening all of the combinations between TFs and nuclear receptors, we identified several positive interactions, which were strengthened in response to dexamethasone and abolished in response to RU486. Furthermore, the interactions between GR and the candidate TFs were validated by co-immunoprecipitation in MCF10A and in CHLA9 cells. Currently, the roles played by the uncovered interactions are being evaluated in various cellular processes, such as cellular proliferation, migration, and invasion. In conclusion, our assay provides an unbiased network analysis between nuclear receptors and other TFs, which can lead to important insights into transcriptional regulation by nuclear receptors in various diseases, in this case cancer.

Keywords: epidermal growth factor, glucocorticoid receptor, protein complementation assay, transcription factor

Procedia PDF Downloads 204
239 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
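For context, the classical fingerprinting baseline that such GAN-based schemes improve upon can be sketched as a k-nearest-neighbour match of an observed received-signal-strength (RSSI) vector against a site-survey database. The survey grid, signal values, and function name below are invented for illustration and are not the GAILoc system itself.

```python
import math

def knn_locate(fingerprint_db, observed, k=2):
    # Classical fingerprinting: rank reference points by Euclidean
    # distance in RSSI space and average the k closest positions.
    # fingerprint_db maps (x, y) -> [rssi_ap1, rssi_ap2, ...].
    ranked = sorted(
        fingerprint_db.items(),
        key=lambda item: math.dist(item[1], observed),
    )[:k]
    x = sum(pos[0] for pos, _ in ranked) / k
    y = sum(pos[1] for pos, _ in ranked) / k
    return (x, y)

# Hypothetical 2 m x 2 m survey grid with three access points (dBm).
db = {
    (0.0, 0.0): [-40, -70, -80],
    (0.0, 2.0): [-70, -40, -80],
    (2.0, 0.0): [-70, -80, -40],
    (2.0, 2.0): [-80, -70, -45],
}
estimate = knn_locate(db, [-45, -72, -75], k=2)
print(estimate)
```

Building `db` is exactly the site-survey workload the S-DCGAN radio map construction is reported to cut by up to 78.5%, since the generative model synthesizes fingerprints for unsurveyed points.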

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 18
238 Towards Consensus: Mapping Humanitarian-Development Integration Concepts and Their Interrelationship over Time

Authors: Matthew J. B. Wilson

Abstract:

Disaster Risk Reduction relies heavily on the effective cooperation of both humanitarian and development actors, particularly in the wake of a disaster, in implementing lasting recovery measures that better protect communities from disasters to come. This fits within a broader discussion around integrating humanitarian and development work stretching back to the 1980s. Over time, a number of key concepts have been put forward, including Linking Relief, Rehabilitation, and Development (LRRD), Early Recovery (ER), ‘Build Back Better’ (BBB), and the most recent ‘Humanitarian-Development-Peace Nexus’ or ‘Triple Nexus’ (HDPN), to define these goals and this relationship. While this discussion has evolved greatly over time, from a continuum to a more integrative, synergistic relationship, there remains a lack of consensus around how to describe it, and as such, the reality of effectively closing this gap has yet to be seen. The objective of this research was twofold. First, to map the four identified concepts (LRRD, ER, BBB and HDPN) as used in the literature since 1995, to understand the overall trends in how this relationship is discussed. Second, to map articles referencing a combination of these concepts, to understand their interrelationship. A scoping review was conducted for each concept identified. Results were gathered from Google Scholar, first by inputting a specific Boolean search phrase for each concept, as related specifically to disasters, for each year since 1995, to identify the total number of articles discussing each concept over time. A second search was then done by pairing concepts together within a Boolean search phrase and inputting the results into a matrix to understand how many articles contained references to more than one of the concepts. This latter search was limited to articles published after 2017 to account for the more recent emergence of HDPN. It was found that ER and particularly BBB are referred to much more widely than LRRD and HDPN.
ER increased particularly in the mid-2000s, coinciding with the formation of the ER cluster, while BBB, having emerged gradually in the mid-2000s through its usage in the wake of the Boxing Day tsunami, increased significantly from about 2015 after its prominent inclusion in the Sendai Framework. HDPN has only started to increase in the last 4-5 years. Regarding the relationship between concepts, it was found that the vast majority of articles referred to each concept in isolation from the others. The strongest relationship was between LRRD and HDPN (8% of articles referring to both), while ER-BBB and ER-HDPN were both about 3%, LRRD-ER 2%, and BBB-HDPN and BBB-LRRD 1% each. This research identified a fundamental issue: the lack of consensus on, and even awareness of, the different approaches referred to within the academic literature on integrating humanitarian and development work. More research into synthesizing and learning from a range of approaches could work towards better closing this gap.

Keywords: build back better, disaster risk reduction, early recovery, linking relief rehabilitation and development, humanitarian development integration, humanitarian-development (peace) nexus, recovery, triple nexus

Procedia PDF Downloads 54
237 Impact of Chess Intervention on Cognitive Functioning of Children

Authors: Ebenezer Joseph

Abstract:

Chess is a useful tool to enhance general and specific cognitive functioning in children. The present study aims to assess the impact of chess on cognitive functioning in children and to measure the differential impact of socio-demographic factors, such as the age and gender of the child, on the effectiveness of the chess intervention. This research study used an experimental design to study the impact of training in chess on the intelligence of children. The Pre-test Post-test Control Group Design was utilized. The research design involved two groups of children: an experimental group and a control group. The experimental group consisted of children who participated in the one-year chess training intervention, while the control group participated in extra-curricular activities in school. The main independent variable was training in chess. Other independent variables were the gender and age of the child. The dependent variable was the cognitive functioning of the child (as measured by IQ, working memory index, processing speed index, perceptual reasoning index, verbal comprehension index, numerical reasoning, verbal reasoning, non-verbal reasoning, social intelligence, language, conceptual thinking, memory, visual-motor ability and creativity). The sample consisted of 200 children studying in Government and Private schools. Random sampling was utilized. The sample included both boys and girls in the age range of 6 to 16 years. The experimental group consisted of 100 children (50 from Government schools and 50 from Private schools) with an equal representation of boys and girls. The control group similarly consisted of 100 children. The dependent variables were assessed using the Binet-Kamat Test of Intelligence, the Wechsler Intelligence Scale for Children - IV (India) and the Wallach Kogan Creativity Test.
The training methodology comprised the Winning Moves Chess Learning Program - Episodes 1–22, lectures with the demonstration board, on-the-board playing and training, chess exercises through workbooks (Chess School 1A, Chess School 2, and tactics) and working with chess software. Further, students' games were mapped using chess software and the playing patterns of each child were studied. They were taught the ideas behind chess openings, and exposure to classical games was also given. The children participated in mock as well as regular tournaments. Preliminary analysis carried out using independent t-tests with 50 children indicates that chess training has led to significant increases in the intelligence quotient. Children in the experimental group have shown significant increases in composite scores such as working memory and perceptual reasoning. Chess training has significantly enhanced total creativity scores, line drawing and pattern meaning subscale scores. Systematically learning chess as part of school activities appears to have a broad spectrum of positive outcomes.
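The independent t-tests mentioned can be illustrated with a minimal Welch's t-statistic computed in pure Python; the IQ scores below are invented for demonstration and are not the study's data.

```python
import math
from statistics import mean, variance

def welch_t(group_a, group_b):
    # Welch's independent-samples t statistic: difference in means
    # scaled by the standard error, allowing unequal variances.
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    return (mean(group_a) - mean(group_b)) / math.sqrt(va / na + vb / nb)

# Hypothetical post-test IQ scores: chess-trained vs. control children.
experimental = [102, 110, 108, 115, 105]
control = [100, 101, 99, 104, 98]
t_stat = welch_t(experimental, control)
print(round(t_stat, 3))
```

In the actual analysis, the statistic would be compared against the t distribution (with Welch-adjusted degrees of freedom) to decide whether the group difference is significant.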

Keywords: chess, intelligence, creativity, children

Procedia PDF Downloads 232