Search results for: essential point
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8877

897 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud

Authors: Mokopane Charles Marakalala

Abstract:

Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions drove a sharp increase in cybercrime from the beginning of the lockdown, resulting in huge losses to the South African banking industry. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting the mobile fraud reported to SABRIC. SABRIC faced the challenge of successful mobile fraud: cybercriminals could hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly scour the depths of cyberspace in search of victims to attack, and SABRIC has regularly highlighted incidents of mobile fraud, corruption, and maladministration. Millions of people worldwide use online banking to carry out their regular bank-related transactions quickly and conveniently, and those who fail to secure their banking online are vulnerable to falling prey to scams such as mobile fraud. Criminals have made use of digital platforms ever since the development of this technology. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector gross losses of more than R250,000,000, with the parties involved left to point fingers at one another while the fraudster makes off with the money. A non-probability (purposive) sampling method was used to select participants, who were interviewed by telephone and in virtual interviews.
The results indicate a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification. This paper highlights the significance of developing prevention mechanisms, capacity, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends raising awareness among bank staff through the provision of adequate, requisite training.

Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting

Procedia PDF Downloads 95
896 Superoleophobic Nanocellulose Aerogel Membrane as Bioinspired Cargo Carrier on Oil by Sol-Gel Method

Authors: Zulkifli, I. W. Eltara, Anawati

Abstract:

Understanding the complementary roles of surface energy and roughness on natural nonwetting surfaces has led to the development of a number of biomimetic superhydrophobic surfaces, which exhibit apparent contact angles with water greater than 150 degrees and low contact angle hysteresis. However, superoleophobic surfaces (those that display contact angles greater than 150 degrees with organic liquids whose surface tensions are appreciably lower than that of water) are extremely rare. In addition to chemical composition and roughened texture, a third parameter is essential to achieve superoleophobicity: re-entrant surface curvature in the form of overhang structures, which can be realized as fibers. Superoleophobic surfaces are appealing for applications such as antifouling, since purely superhydrophobic surfaces are easily contaminated by oily substances in practical use, which in turn impairs their liquid repellency. Other studies have demonstrated that such aqueous nanofibrillar gels are unexpectedly robust, allowing the formation of highly porous aerogels by direct water removal via freeze-drying; they are flexible, unlike most aerogels, which suffer from brittleness; and they provide flexible, hierarchically porous templates for functionalities such as electrical conductivity. No crosslinking, solvent exchange, or supercritical drying is required to suppress collapse during preparation, unlike in typical aerogel syntheses. The aerogel used in the current work is an ultralightweight solid material composed of native cellulose nanofibers, which are cleaved from the self-assembled hierarchy of macroscopic cellulose fibers. They have become highly topical, as they are proposed to show extraordinary mechanical properties due to their parallel and extensively hydrogen-bonded polysaccharide chains.
We demonstrate superoleophobic nanocellulose aerogel coatings prepared by the sol-gel method; the aerogel is capable of supporting a weight nearly three orders of magnitude larger than its own. The load support is achieved by surface tension acting at different length scales: at the macroscopic scale, along the perimeter of the carrier, and at the microscopic scale, along the cellulose nanofibers, which prevents soaking of the aerogel and thus ensures buoyancy. Superoleophobic nanocellulose aerogels have recently been achieved using both unmodified cellulose nanofibers and carboxymethylated, negatively charged cellulose nanofibers as starting materials. In this work, the aerogels made from unmodified cellulose nanofibers were subsequently treated with fluorosilanes. To complement previous work on superoleophobic aerogels, we demonstrate their application as cargo carriers on oil, gas permeability, plastrons, and drag reduction, and we show that fluorinated nanocellulose aerogels are high-adhesion superoleophobic surfaces. We foresee applications including buoyant, gas-permeable, dirt-repellent coatings for miniature sensors and other devices floating on generic liquid surfaces.
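The macroscopic part of the load support described above follows from a simple force balance along the carrier perimeter; the sketch below illustrates it with an assumed oil surface tension (the paper does not report specific values, and the function name is ours):

```python
def max_surface_tension_load(perimeter_m, gamma_n_per_m=0.032):
    """Upper bound on the load (in newtons) that surface tension acting
    along the carrier's perimeter can support: F = gamma * L.
    gamma_n_per_m = 0.032 N/m is an illustrative value for a light oil,
    not a number taken from the study."""
    return gamma_n_per_m * perimeter_m

# A carrier with a 10 cm perimeter on such an oil could support
# at most about 3.2 mN at the macroscopic scale.
print(max_surface_tension_load(0.10))
```

The microscopic contribution along individual nanofibers adds to this bound, which is how a carrier can hold cargo far heavier than the aerogel itself.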

Keywords: superoleophobic, nanocellulose, aerogel, sol-gel

Procedia PDF Downloads 347
895 The Usage of Negative Emotive Words in Twitter

Authors: Martina Katalin Szabó, István Üveges

Abstract:

In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data is analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to words that, on their own and without context, have semantic content associated with negative emotion, but that in particular cases may function as intensifiers (e.g., rohadt jó ’damn good’) or as sentiment expressions with positive polarity despite their negative prior polarity (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67,783 tweets were collected: 37,818 tweets (19,580 written by females and 18,238 by males) from 2016 and 48,344 (18,379 written by females and 29,965 by males) from 2021. The goal of the research was to compile two datasets comparable from the viewpoint of semantic change as well as gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers (214 words) was also compiled. After basic preprocessing, tweets were processed with ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. The frequency and collocation features of all these words in our corpus were then automatically analyzed (via the parts of speech and sentiment values of co-occurring words). Finally, the results of all four subcorpora were compared. Some of the main outcomes of our analyses are as follows: there are almost four times fewer cases in the male corpus than in the female corpus in which a negative emotive intensifier modifies a negative polarity word in a tweet (e.g., damn bad).
At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). The results also show that, in contrast to female authors, male authors much more frequently used these words as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency distribution of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the examined words were identified: some words collocate with more positive words in the second subcorpus than in the first, which points to semantic change of these words over time.
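The collocation counting behind these comparisons can be sketched as follows. The mini-lexicons and the next-token heuristic are illustrative assumptions only; the study used a 214-word intensifier lexicon and the full morphosyntactic and sentiment output of ‘magyarlanc’:

```python
from collections import Counter

# Hypothetical mini-lexicons for illustration (not the study's resources).
INTENSIFIERS = {"rohadt", "brutális", "durva"}
POLARITY = {"jó": +1, "rossz": -1, "nagy": 0}  # +1 positive, -1 negative, 0 neutral

def collocation_profile(tweets):
    """Count, for each intensifier, the polarity class of the word it
    modifies, approximated here as the immediately following token."""
    profile = Counter()
    for tokens in tweets:
        for i, tok in enumerate(tokens[:-1]):
            if tok in INTENSIFIERS:
                pol = POLARITY.get(tokens[i + 1])
                if pol is not None:
                    profile[(tok, pol)] += 1
    return profile

tweets = [["rohadt", "jó", "film"],      # intensifier + positive word
          ["brutális", "nagy", "ház"],   # intensifier + neutral word
          ["rohadt", "rossz", "nap"]]    # intensifier + negative word
print(collocation_profile(tweets))
```

Comparing such profiles across the four subcorpora (female/male, 2016/2021) gives the frequency contrasts reported above.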

Keywords: gender differences, negative emotive words, semantic changes over time, twitter

Procedia PDF Downloads 202
894 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems

Authors: Alejandro Adorjan

Abstract:

Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate engineering programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on competencies such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE). Every semester we reflect on our practice and try to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, noncompulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to merely transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found.
The course demanded an excessive number of weekly hours of mathematics from students and, as it was non-compulsory, attendance decreased over time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate these preliminary results.

Keywords: calculus, engineering education, precalculus, summer program

Procedia PDF Downloads 287
893 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as model inputs. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, achieved the best overall performance, while configuration 10, with only 4 input variables, was the most effective relative to its small number of inputs. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions.
The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and the Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 and 0.03 mm/day. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates how well the predictions match the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results further suggest that the developed technique can be applied to other locations by using site-specific data to improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and only two optimizers, validation with data from a single producer, and a possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate the models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
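The evaluation metrics reported above follow their standard definitions; a minimal pure-Python sketch (a reimplementation of the textbook formulas, not the authors' code):

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, and R-squared as used to evaluate the ET0 models."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot  # 1 means a perfect fit
    return mae, rmse, r2

# Example with hypothetical ET0 values in mm/day
print(regression_metrics([3.2, 4.1, 5.0], [3.1, 4.2, 5.0]))
```

An MAE of 0.01-0.03 mm/day combined with R² near 1, as reported, corresponds to predictions almost indistinguishable from the observations at daily resolution.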

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 40
892 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity: the sensor only captures pixels that undergo an intensity change, so there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data is removed. On the other hand, the data is difficult to handle because its format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by these format differences, most prior art constructs frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first construct frame data divided into fixed time periods and then assign each pixel an intensity value according to the timestamps in the frame; for example, a high value is given to a recent signal. We expected this data representation to capture features, especially of moving objects, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in static scenes. For comparison, we reproduced a state-of-the-art method as a benchmark, which constructs frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
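The timestamp-based frame construction can be sketched as below. Linearly normalizing timestamps to [0, 1] within each window is one plausible reading of "a high value is given to a recent signal", not necessarily the authors' exact mapping:

```python
def timestamp_frame(events, t_start, t_end, height, width):
    """Accumulate DVS events (x, y, polarity, t) falling in
    [t_start, t_end) into one frame whose pixel value is the most
    recent event's normalized timestamp. Pixels with no events stay 0,
    so recent motion appears brighter than older motion."""
    frame = [[0.0] * width for _ in range(height)]
    span = float(t_end - t_start)
    for x, y, pol, t in events:
        if t_start <= t < t_end:
            # keep the latest (largest normalized) timestamp per pixel
            frame[y][x] = max(frame[y][x], (t - t_start) / span)
    return frame

# Two events in a 1-second window on a 2x2 sensor
events = [(0, 0, +1, 0.2), (1, 1, -1, 0.5)]
print(timestamp_frame(events, 0.0, 1.0, 2, 2))
```

A moving edge then leaves a gradient of pixel values along its direction of travel, which is the motion cue a polarity-only frame cannot provide.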

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 91
891 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
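As an illustration only, the alternation between map learning and imputation can be caricatured in a 1-D, zero-neighborhood toy version; the real missSOM optimizes a joint criterion with a full Kohonen neighborhood, and none of the names below come from the package:

```python
import random

def toy_miss_som(data, n_units=3, n_iter=50, lr=0.5, seed=0):
    """Toy sketch of the missSOM alternation: (a) find each row's
    best-matching unit (BMU) using observed coordinates only and pull
    that prototype toward the row; (b) impute missing entries (None)
    from the BMU's prototype. Zero-neighborhood simplification."""
    rng = random.Random(seed)
    dim = len(data[0])
    protos = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    mask = [[v is not None for v in row] for row in data]
    filled = [[v if v is not None else 0.5 for v in row] for row in data]
    for it in range(n_iter):
        a = lr * (1 - it / n_iter)  # decaying learning rate
        for r, row in enumerate(filled):
            def dist(p):  # squared distance on observed coordinates only
                return sum((p[j] - row[j]) ** 2
                           for j in range(dim) if mask[r][j])
            bmu = min(range(n_units), key=lambda k: dist(protos[k]))
            for j in range(dim):
                if mask[r][j]:
                    protos[bmu][j] += a * (row[j] - protos[bmu][j])
                else:
                    filled[r][j] = protos[bmu][j]  # impute from BMU
    return protos, filled

data = [[0.0, 0.1], [0.1, None], [1.0, 0.9], [0.9, 1.0]]
protos, filled = toy_miss_som(data)
print(filled[1])  # the None entry is now an imputed value
```

The point of the sketch is the coupling: the map is learned from observed entries only, and the imputations it produces feed back into the next round of map updates.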

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 148
890 A Study of a Diachronic Relationship between Two Weak Inflection Classes in Norwegian, with Emphasis on Unexpected Productivity

Authors: Emilija Tribocka

Abstract:

This contribution presents parts of an ongoing study of the diachronic relationship between two weak verb classes in Norwegian, the a-class (cf. the paradigm of ‘throw’: kasta – kastar – kasta – kasta) and the e-class (cf. the paradigm of ‘buy’: kjøpa – kjøper – kjøpte – kjøpt). The study investigates inflection class shifts between the two classes, with Old Norse, the ancestor of Modern Norwegian, as its starting point. Examination of the inflection of 38 verbs in four chosen dialect areas (106 places of attestation) demonstrates that shifts from the a-class to the e-class are widespread to varying degrees in three of the four investigated areas and are more common than shifts in the opposite direction. The diachronic productivity of the e-class is unexpected for several reasons. There is general agreement that type frequency is an important factor influencing productivity, and the a-class (53% of all weak verbs) was more type frequent in Old Norse than the e-class (42% of all weak verbs). Thus, given type frequency, the expansion of the e-class is unexpected. Furthermore, in the ‘core’ areas of expanded e-class inflection, the shifts disregard phonological principles, creating forms with awkward consonant clusters, e.g., fiskte instead of fiska, the preterit of fiska ‘fish’. Later on, these forms may be contracted, i.e., fiskte > fiste. In this contribution, two factors influencing the shifts are presented: phonological form and token frequency. Verbs with stems ending in a consonant cluster, particularly one ending in -t, hardly ever shift to the e-class. As a matter of fact, verbs with this structure that belonged to the e-class in Old Norse shift to the a-class in Modern Norwegian; e.g., the ON e-class verb skipta ‘change’ shifts to the a-class. This shift occurs as a result of the lack of morpho-phonological transparency between the stem and the preterit suffix of the e-class, -te.
As there is a phonological fusion between the stem ending in -t and the suffix beginning in -t, the transparent a-class inflection is chosen. Token frequency plays an important role in the shifts, too, in some dialects. In one of the investigated areas, the most token frequent verbs of the ON e-class remain in the e-class (e.g., høyra ‘hear’, leva ‘live’, kjøpa ‘buy’), while less frequent verbs may shift to the a-class. Furthermore, the results indicate that the shift from the a-class to the e-class occurs in some of the most token frequent verbs of the ON a-class in this area, e.g., lika ‘like’, lova ‘promise’, svara ‘answer’. The latter is unexpected as frequent items tend to remain stable. This study presents a case of unexpected productivity, demonstrating that minor patterns can grow and outdo major patterns. Thus, type frequency is not the only factor that determines productivity. The study addresses the role of phonological form and token frequency in the spread of inflection patterns.

Keywords: inflection class, productivity, token frequency, phonological form

Procedia PDF Downloads 59
889 Customized Temperature Sensors for Sustainable Home Appliances

Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy

Abstract:

Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data, such as the frequency of use of the machine and user preferences, and compile data critical to diagnostic processes for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications. The operating temperature range of these sensors is between -70°C and 850°C, while the temperature range required in home appliance applications is between 23°C and 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of a 10 nm titanium layer and a 280 nm platinum layer. Standard four-point probe sheet resistance measurements were taken at room temperature. Annealing at 600°C was then performed in a rapid thermal processing machine, and resistivity was measured with a probe station before and after annealing. Temperature dependence between 25°C and 300°C was also tested. As a result of this study, a temperature sensor was developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simple but optimized production method was also developed to produce this sensor.
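For context, thin-film platinum sensors like these exploit a resistance-temperature relation that is, to first order, linear. A minimal sketch using standard Pt100 values as an illustration (the customized sensor in this study has its own R0 and temperature coefficient):

```python
def rtd_resistance(temp_c, r0=100.0, alpha=3.85e-3):
    """First-order RTD model R(T) = R0 * (1 + alpha * T), with T in
    degrees Celsius referenced to 0 degC. r0 = 100 ohm and
    alpha = 3.85e-3 /degC are the standard IEC 60751 Pt100 values,
    used here purely for illustration."""
    return r0 * (1 + alpha * temp_c)

# At 0 degC a standard Pt100 reads 100 ohm; at 100 degC, 138.5 ohm.
print(rtd_resistance(0.0), rtd_resistance(100.0))
```

Calibrating r0 and alpha against the probe-station measurements before and after annealing is what turns the measured resistivity into a usable temperature reading.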

Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency

Procedia PDF Downloads 68
888 Hydrodynamic Analysis of Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters

Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut

Abstract:

In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays, which can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission, and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs, in contrast to static docks affixed to the seafloor, because they actively alter the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the mothership. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence, and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV, however, are significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are exerted below the UUV, minimising their impact. It was also seen that the OUUV experiences higher drag forces than the UUUV, which corresponds to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is infeasible. The turbulence, drag, and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
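The drag comparison above rests on the standard drag equation; a minimal sketch, with all parameter values being illustrative assumptions rather than numbers from the simulations:

```python
def drag_force(rho, v, c_d, area):
    """Standard drag equation F_D = 0.5 * rho * v**2 * C_d * A.
    rho: fluid density (kg/m^3), v: speed (m/s), c_d: drag
    coefficient, area: reference (frontal) area (m^2)."""
    return 0.5 * rho * v ** 2 * c_d * area

# e.g. a small UUV with 0.05 m^2 frontal area at 1.5 m/s in seawater
# (rho ~ 1025 kg/m^3) with an assumed C_d of 0.8
print(drag_force(1025.0, 1.5, 0.8, 0.05))
```

A higher effective C_d for the over-actuated vehicle, driven by its thruster wakes, is what translates directly into the increased energy expenditure noted above.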

Keywords: underwater vehicles, submarine, autonomous underwater vehicles, auv, computational fluid dynamics, flow fields, pressure, turbulence, drag

Procedia PDF Downloads 74
887 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?

Authors: H. M. Ross-McAlpine

Abstract:

Much of what is written on diplomacy, globalization, and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by major powers. Due to advances in technology, the shifting balance of power, and a changing geopolitical order, small states are able to exercise greater influence than ever before. Increasingly interdependent and ever more complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand, and the research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific, geographically isolated from its nearest neighbors; how does this affect its security and trade priorities? The findings note a significant shift since the 1970s in New Zealand's diplomatic relations. This shift is arguably a direct result of globalization, regionalism, and a growing independence from traditional bilateral relationships. The need to source predictable trade, investment, and technology is an essential driving force for New Zealand's diplomatic relations.
A lack of hard power aligns New Zealand’s prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand’s diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand’s diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that facilitates the influence that New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, subsequently requiring smaller states to use other means for securing their interests. There are clear strategies evident in New Zealand’s diplomatic practice that draw attention to how other smaller states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying the findings to other small states struggling to secure their interests in the wake of rapid globalization.

Keywords: diplomacy, foreign policy, globalisation, small state

Procedia PDF Downloads 389
886 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

The recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity of contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, as well as IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. Taken together, from the point of view of application, these technologies describe a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, and the focus then turns to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could engulf many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. 
The trajectory that is described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result that is envisaged indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 113
885 Effect of Supplementation with Fresh Citrus Pulp on Growth Performance, Slaughter Traits and Mortality in Guinea Pigs

Authors: Carlos Minguez, Christian F. Sagbay, Erika E. Ordoñez

Abstract:

Guinea pigs (Cavia porcellus) play prominent roles as experimental models for medical research and as pets. However, in developing regions such as South America, the Philippines, and sub-Saharan Africa, the meat of guinea pigs is an economical source of animal protein for poor and malnourished people, because guinea pigs are mainly fed forage and do not compete directly with human beings for food resources such as corn or wheat. To achieve efficient production of guinea pigs, it is essential to guard against vitamin C deficiency. The objective of this research was to investigate the effect of the partial replacement of alfalfa with fresh citrus pulp (Citrus sinensis) in the diet of guinea pigs on growth performance, slaughter traits and mortality during the fattening period (between 20 and 74 days of age). A total of 300 guinea pigs were housed in collective cages of about ten animals (2 x 1 x 0.4 m) and were distributed into two completely randomized groups. Guinea pigs in both groups were fed ad libitum with a standard commercial pellet diet (10 MJ of digestible energy/kg, 17% crude protein, 11% crude fiber, and 4.5% crude fat). The control group was supplied with fresh alfalfa as forage. In the treatment group, 30% of the alfalfa was replaced by fresh citrus pulp. Growth traits, including body weight (BW), average daily gain (ADG), feed intake (FI), and feed conversion ratio (FCR), were measured weekly. On day 74, the animals were slaughtered, and slaughter traits, including live weight at slaughter (LWS), full gastrointestinal tract weight (FGTW), hot carcass weight (with head; HCW), cold carcass weight (with head; CCW), drip loss percentage (DLP) and dressing-out carcass yield percentage (DCY), were evaluated. Contrasts between groups were estimated by generalized least squares. Mortality was evaluated by Fisher's exact test due to low numbers in some cells. 
In the first week, there were significant differences in the growth traits BW, ADG, FI, and FCR, which were superior in the control group. These differences may have been due to the origin of the young guinea pigs, which, before weaning, were all raised without fresh citrus pulp and were not familiarized with the new supplement. In the second week, the treatment group had significantly increased ADG compared with the control group, which may have been the result of a process of compensatory growth. During subsequent weeks, no significant differences were observed between animals raised in the two groups, nor were any significant differences observed across the total fattening period. No significant differences in slaughter traits or mortality rate were observed between animals from the two groups. In conclusion, although there were no significant differences in growth performance, slaughter traits, or mortality, the use of fresh citrus pulp is recommended. Fresh citrus pulp is a by-product of the orange juice industry and is cheap or free. Replacing forage with fresh citrus pulp could reduce the quantity of alfalfa needed for meat guinea pigs by about 30% and, as a consequence, reduce production costs.
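The mortality comparison above relies on Fisher's exact test. As a minimal, self-contained sketch of that calculation (the 2×2 counts in the example are hypothetical, not the study's data), a two-sided p-value for a 2×2 table can be computed directly from hypergeometric probabilities:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Enumerates every table with the same margins and sums the
    hypergeometric probabilities no larger than that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p(x):
        # hypergeometric probability of x counts in the (row 1, col 1) cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p(a)
    lo = max(0, col1 - row2)   # smallest feasible cell count
    hi = min(col1, row1)       # largest feasible cell count
    # sum probabilities of tables as extreme or more extreme than observed
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# Hypothetical example: 3 deaths of 150 in one group vs 5 of 150 in the other
pval = fisher_exact_two_sided(3, 147, 5, 145)
```

For larger samples or routine use, a vetted statistics library would be the practical choice; the point here is only to make the test's mechanics concrete.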

Keywords: fresh citrus, growth, Guinea pig, mortality

Procedia PDF Downloads 188
884 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems

Authors: Ibram Khalafalla Roshdy Shokry

Abstract:

This paper presents a carrier sense multiple access (CSMA) communication model based on the SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems; hence, the use of such a communication model is an essential method in the construction of high-performance communication. SystemC has been selected as it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created via the modelling of the CSMA protocol, which can be used to achieve communication among all of the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution towards next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the real performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range on V2X communication. 
The evaluation of V2I and V2V communication takes into consideration the real effects of low and high mobility on transmission. Multi-agent systems have received considerable attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of diverse protocols utilized in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols.
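To make the CSMA idea concrete, the toy below is a slotted simulation of agents contending for a shared channel. It is an illustrative sketch, not the authors' SystemC/SoC model; the node count, transmit probability, and slot count are made-up parameters.

```python
import random

def simulate_csma(n_nodes=5, slots=10000, p_tx=0.1, seed=42):
    """Minimal slotted-CSMA toy: in each slot, every node that senses the
    channel idle transmits with probability p_tx. Exactly one transmitter
    means a successful slot; two or more collide. Returns the counts of
    successful and collided slots over the run."""
    rng = random.Random(seed)
    success = collisions = 0
    for _ in range(slots):
        # number of nodes that decide to transmit in this slot
        transmitters = sum(1 for _ in range(n_nodes) if rng.random() < p_tx)
        if transmitters == 1:
            success += 1
        elif transmitters > 1:
            collisions += 1
    return success, collisions

ok, bad = simulate_csma()
```

Sweeping `p_tx` in such a toy shows the classic CSMA trade-off the paper's model captures at a much higher fidelity: too timid wastes the channel, too aggressive collides.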

Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 16
883 Exploring the Impact of Mobility-Related Treatments (Drug and Non-Pharmacological) on Independence and Wellbeing in Parkinson’s Disease - A Qualitative Synthesis

Authors: Cameron Wilson, Megan Hanrahan, Katie Brittain, Riona McArdle, Alison Keogh, Lynn Rochester

Abstract:

Background: The loss of mobility and functional independence is a significant marker in the progression of neurodegenerative diseases such as Parkinson’s Disease (PD). Pharmacological, surgical, and therapeutic treatments are available that can help in the management and amelioration of PD symptoms; however, these only delay more severe symptoms. Accordingly, ensuring people with PD can maintain independence and a healthy wellbeing is essential in establishing an effective treatment option for those afflicted. Existing literature reviews have examined experiences of engaging with PD treatment options and the impact of PD on independence and wellbeing. However, the literature fails to explore the influence of treatment options on independence and wellbeing and therefore misses what people value in their treatment. This review is the first to synthesise the impact of mobility-related treatments on independence and wellbeing in people with PD and their carers, offering recommendations for clinical practice and providing a conceptual framework (in development) for future research and practice. Objectives: To explore the impact of mobility-related treatment (both pharmacological and non-pharmacological) on the independence and wellbeing of people with PD and their carers. To propose a conceptual framework for patients, carers and clinicians which captures the qualities people with PD value as part of their treatment. Methods: We performed a critical interpretive synthesis of qualitative evidence, searching six databases for reports that explored the impact of mobility-related treatments (both drug and non-pharmacological) on independence and wellbeing in Parkinson’s Disease. The types of treatments included medication (Levodopa and Amantadine), dance classes, deep-brain stimulation, aquatic therapies, physical rehabilitation, balance training and foetal transplantation. 
Data were extracted, and quality was assessed using an adapted version of the NICE Quality Appraisal Tool (Appendix H) before being synthesised according to the critical interpretive synthesis framework and meta-ethnography process. Results: From 2301 records, 28 were eligible. Experiences and the impact of the treatment pathway on independence and wellbeing were similar across all types of treatments and are described by five inter-related themes: (i) desire to maintain independence, (ii) treatment as a social experience during and after, (iii) medication to strengthen emotional health, (iv) recognising physical capacity and (v) emphasising the personal journey of Parkinson’s treatments. Conclusion: There is a complex and inter-related experience and effect of PD treatments common across all types of treatment. The proposed conceptual framework (in development) provides patients, carers, and clinicians with recommendations to personalise the delivery of PD treatment, thereby potentially improving adherence and effectiveness. This work is vital to disseminate as PD treatment transitions from subjective and clinically captured assessments to a more personalised process supplemented by wearable technology.

Keywords: parkinson's disease, medication, treatment, dance, review, healthcare, delivery, levodopa, social, emotional, psychological, personalised healthcare

Procedia PDF Downloads 83
882 Promoting Libraries' Services and Events by Librarians Led Instagram Account: A Case Study on Qatar National Library's Research and Learning Instagram Account

Authors: Maryam Alkhalosi, Ahmad Naddaf, Rana Alani

Abstract:

Qatar National Library has its main accounts on social media, which present the general image of the library and its daily news. A paper will be presented based on a case study researching the outcome of having a separate Instagram account led by librarians rather than by the library's Communication Department. The main purpose of the librarian-led account is to promote librarians’ services and events, such as research consultation, reference questions, community engagement programs, collection marketing, etc., all in a way that librarians think reflects their role in the community. Librarians faced several obstacles in helping users understand librarians' roles. Since Instagram is the most popular social media platform in Qatar, it was selected to promote how librarians can help users through a focused account, creating a direct channel between librarians and users, which helps librarians understand users’ needs and interests. This research uses a quantitative approach based on the case study; librarians used their case in the Research and Learning Department to find out which best practices might help in promoting the librarians' services and reaching out to a bigger number of users. Through the descriptive method, this research describes the changes observed in the numbers of community users who interact with the Instagram account and engage in librarians’ events. Statistics of this study are based on three main sources: 1. The internal monthly statistics sheet of events and programs held by the Research and Learning Department. 2. The weekly tracking of the Instagram account statistics. 3. Instagram’s tools such as polls, quizzes, questions, etc. This study will show the direct effect of a librarian-led Instagram account on the number of community members who participate and engage in librarian-led programs and services, in addition to highlighting the librarians' role directly with the community members. 
The study will also show the best practices on Instagram, which help in reaching a wider community of users. This study is important because, in the region, there is a lack of studies focusing on librarianship, especially on contemporary problems and their solutions. Besides, there is a lack of understanding of the role of a librarian in the Arab region. The research will also highlight how librarians can help the public and researchers as well. All of these benefits can come through one popular, easy channel in social media. From another side, this paper is a chance to share the details of this experience from scratch, including the phase of setting the policy and guidelines for managing the social media account, up to the point where the benefits of this experience became visible in practice. This experience has even added many skills to the librarians.

Keywords: librarian’s role, social media, instagram and libraries, promoting libraries’ services

Procedia PDF Downloads 94
881 No-Par Shares Working in European LLCs

Authors: Agnieszka P. Regiec

Abstract:

Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For a few years, the limited liability company (LLC) regulations of European countries have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy regarding intra-corporate governance. Reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established. Thus, the American legal system is chosen as a point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of science not only because the capital structure of the corporation has a significant impact on shareholders’ rights, but also because it reflects the relationships between the creditors of the company and the company itself. A multi-level comparative approach to the problem allows the presentation of a wide range of possible outcomes stemming from the reforms. The dogmatic method was applied; the analysis was based on statutes, secondary sources and judicial decisions. Both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped. A new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced. The minimum share capital of the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée – the “simple” version of the S.A.R.L. 
was also changed – there is no minimum share capital required by statute. The company has, however, to indicate a share capital, without the legislator imposing a minimum value for said capital. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as the answer to the need for a higher degree of autonomy for shareholders; it, however, preserved shares with nominal value. In Finland, the amendment of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced. Despite the fact that the statute allows shares without face value, it still requires a minimum share capital in the amount of 2,500 euro. In Poland, a proposal for the restructuring of the capital structure of the LLC has been introduced. The proposal provides, among others, for a reduction of the minimum capital to 1 PLN or the complete liquidation of the minimum share capital, allowing no-par shares to be issued. In conclusion: the American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as the American ones; and the existence of share capital in Poland is crucial.

Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test

Procedia PDF Downloads 181
880 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers

Authors: Eoin Kirwan, Christopher Nulty, Declan Browne

Abstract:

The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in an average season. These periods of congestion provide limited time for recovery, exposing athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge of monitoring tools that might effectively detect a fatigue response and of the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability and readiness. Some commonly used methods to assess the fatigue and recovery status of athletes include perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion and examined the relationship between internal game load and perceived wellbeing with adductor muscle strength. Methods: Eight professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0˚ hip flexion, measured by the lead investigator using hand-held dynamometry. 
Rating of perceived exertion was recorded for each game, and individual session RPE was calculated from total ice time. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
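The internal game load described above combines each player's RPE with his individual ice time. A minimal sketch of that session-RPE (sRPE) calculation follows; the CR-10 scale and the example values are assumptions for illustration, not the study's data.

```python
def session_rpe_load(rpe, ice_time_min):
    """Internal load as session RPE (sRPE): a CR-10 exertion rating
    multiplied by duration in minutes, expressed in arbitrary units (AU)."""
    if not 0 <= rpe <= 10:
        raise ValueError("CR-10 RPE must lie between 0 and 10")
    return rpe * ice_time_min

# Hypothetical example: a game rated 7 on CR-10 with 18 minutes of ice time
load = session_rpe_load(7, 18)  # 126 AU
```

Using each player's own ice time, rather than total game duration, individualises the load estimate for a rotation-heavy sport like ice hockey.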

Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness

Procedia PDF Downloads 138
879 Development of Alternative Fuels Technologies for Transportation

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

Currently, hydrocarbon-based fuels are used almost exclusively to power vehicles in automotive transport. Due to the increase in hydrocarbon fuel consumption, quality parameters are being tightened to protect the environment. At the same time, efforts are being undertaken to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are: increased vehicle efficiency, reduced environmental impact, reduction of greenhouse gas emissions and savings in the consumption of limited oil resources. Significant progress has been made on the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME) and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the produced exhaust gases. Ethanol is produced by the distillation of plant products, whose diversion from food use can be irrational. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through an intermediate stage is still inevitable. The most popular route is conversion to methanol, which can be processed further into dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced production processes. 
In relation to pollution emissions, the optimal vehicle fuel is LPG, which is used as an engine fuel in many countries. Production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage. Its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas may be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since the same production processes and raw materials are used. The most important fuel in the campaign to protect the environment against pollution is natural gas. Natural gas as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production via steam reforming. Hydrogen can serve as a basic starting material for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. Natural gas used as CNG represents an excellent compromise: the technology is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative, environmentally harmless sources of fuel-derived energy. For these reasons, CNG as a fuel attracts considerable interest worldwide.

Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)

Procedia PDF Downloads 177
878 Mechanical Response Investigation of Wafer Probing Test with Vertical Cobra Probe via the Experiment and Transient Dynamic Simulation

Authors: De-Shin Liu, Po-Chun Wen, Zhen-Wei Zhuang, Hsueh-Chih Liu, Pei-Chen Huang

Abstract:

Wafer probing tests play an important role in semiconductor manufacturing procedures, in accordance with the yield and reliability requirements of the wafer after the back-end-of-line process. Accordingly, stable physical and electrical contact between the probe and the tested wafer during wafer probing is regarded as an essential issue in identifying the known good die. The probe card can be integrated with multiple probe needles, which are classified as vertical, cantilever and micro-electro-mechanical-systems (MEMS) probe types. Among all potential probe types, the vertical probe has several advantages compared with the others, including maintainability, high probe density and feasibility for high-speed wafer testing. In the present study, the mechanical response of the wafer probing test with a vertical cobra probe on a 720 μm thick silicon (Si) substrate with a 1.4 μm thick aluminum (Al) pad is investigated by experiment and a transient dynamic simulation approach. Because the deformation mechanism of the vertical cobra probe is determined by both bending and buckling mechanisms, the stable correlation between contact forces and overdrive (OD) length must be carefully verified. Moreover, an adequate OD length with a corresponding contact force contributes to piercing the native oxide layer of the Al pad while preventing probing-induced damage to the interconnect system. Accordingly, the scratch depth of the Al pad under various OD lengths is estimated by atomic force microscopy (AFM) and simulation. In the wafer probing test configuration, the contact between the probe needle and the tested object introduces large deformation and twisting of the mesh grid, causing numerical divergence. For this reason, the arbitrary Lagrangian-Eulerian method is utilized in the present simulation work to overcome the aforementioned issue. 
The analytic results revealed a slight difference when the OD is 40 μm, while the simulated scratch depths are almost identical to the measured scratch depths of the Al pad under higher OD lengths up to 70 μm. This phenomenon can be attributed to the unstable contact of the probe at low OD lengths, where the scratch depth is below 30% of the Al pad thickness; the contact becomes stable when the scratch depth exceeds 30% of the pad thickness. The splash of the Al pad is observed by the AFM, and the splashed Al debris accumulates on a specific side; this phenomenon is successfully reproduced in the transient dynamic simulation. Thus, the preferred testing OD lengths are found to be 45 μm to 70 μm, and the corresponding scratch depths on the Al pad are 31.4% and 47.1% of the Al pad thickness, respectively. The investigation approach demonstrated in this study contributes to analyzing the mechanical response of the wafer probing test configuration under large strain conditions and to assessing the geometric designs and material selections of probe needles to meet the requirements of high-resolution, high-speed wafer-level probing tests for thinned wafer applications.
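The stability criterion described above (scratch depth of at least roughly 30% of the 1.4 μm Al pad) can be captured as a simple check. Only the pad thickness, the 30% threshold, and the 31.4%/47.1% ratios come from the text; the helper names and example depths are our own.

```python
PAD_THICKNESS_UM = 1.4  # Al pad thickness reported in the abstract

def scratch_ratio(depth_um, pad_um=PAD_THICKNESS_UM):
    """Scratch depth expressed as a fraction of the Al pad thickness."""
    return depth_um / pad_um

def contact_is_stable(depth_um, pad_um=PAD_THICKNESS_UM, threshold=0.30):
    """Per the abstract, probe contact becomes stable once the scratch
    depth exceeds roughly 30% of the pad thickness."""
    return scratch_ratio(depth_um, pad_um) >= threshold

# 31.4% and 47.1% of a 1.4 um pad correspond to scratches of roughly
# 0.44 um and 0.66 um, both above the 30% stability threshold.
```

This kind of threshold check is the sort of pass/fail rule a probing recipe might apply to simulated or AFM-measured depths when screening OD settings.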

Keywords: wafer probing test, vertical probe, probe mark, mechanical response, FEA simulation

Procedia PDF Downloads 51
877 Struggles of Non-Binary People in an Organizational Setting in Iceland

Authors: Kevin Henry

Abstract:

Introduction: This research identifies the main struggles of non-binary people in an organizational setting using the ZMET method of in-depth interviews. The research was done in Iceland, a country that is repeatedly listed among the top countries for gender equality, and found three main categories of non-binary struggles in organizations. These categories can be used to improve organizational non-binary inclusion. Aim: The main questions this paper will answer are: Which unique obstacles do non-binary people face in their daily organizational life? Which organizational and individual measures help with greater inclusion of non-binary people? How can organizational gender equality measures be made more inclusive of non-binary issues? Background: Even though gender equality is a much-researched topic, the struggles of non-binary people are often overlooked in gender equality research. Additionally, non-binary and transgender people are frequently researched together, even though their struggles can be very different. Research focused on non-binary people is, in many cases, done at a more structural or organizational level with quantitative data such as salary or position within an organization. This research focuses on the individual and their struggles, using qualitative data to derive measures for non-binary inclusion and equality. Method: An adapted approach of the ZMET (Zaltman Metaphor Elicitation Technique) was used, during which in-depth interviews were held with individuals, utilizing pictures as a metaphorical starting point to discuss their main thoughts and feelings on being non-binary in an organizational setting. Interviewees prepared five pictures, each representing one key thought or feeling about their organizational life. The interviewer then let the interviewee describe each picture and asked probing questions to get a deeper understanding of each individual topic. 
This method supports a largely unbiased data collection process, as the interviewer only asks probing questions and does not lead the interviewee in any particular direction. Results: This research has identified three main categories of struggles non-binary people face in an organizational setting: internal (personal) struggles, external struggles, and structural struggles. Internal struggles originate from the person themselves (e.g., struggles with their own identity). External struggles come from the outside (e.g., harassment from coworkers, exclusion). Structural struggles are built into organizational policy or facilities (e.g., restrooms, gendered language). Conclusion: This study shows that there are many struggles for non-binary people in organizations and that even in countries that pride themselves on being progressive and having a high level of gender equality, there is still much to be done for non-binary inclusion. Implications for Organizations: Organizations that strive to improve the inclusion of all genders should pay attention to how their structures are built, how their training is conducted, and how their policies affect people of various genders. Simple changes like making restrooms gender-neutral and using neutral language in company communications are good examples of small structural steps toward more inclusion.

Keywords: gender equality, non-binary, organizations, ZMET

Procedia PDF Downloads 39
876 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from information captured by tactile sensor arrays of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a roughly 180-fold reduction in estimation time compared to software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions, and the triaxial reconstruction of forces, adequately reconstructs the tactile properties of the touched object, similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be further reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
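The core reconstruction step, estimating a three-component force vector per taxel from normal-stress readings, can be sketched as a linear least-squares problem. The linear model, array size, and matrix below are illustrative assumptions for a software prototype, not the authors' FPGA design:

```python
import numpy as np

# Hypothetical linear model: measured normal-stress map s = A @ f, where f
# stacks the (fx, fy, fz) components of every taxel. A is assumed known
# (e.g., precomputed from an elastic contact model); all names are illustrative.
rng = np.random.default_rng(0)
n_taxels = 9                                  # a tiny 3x3 array for illustration
A = rng.normal(size=(n_taxels, 3 * n_taxels))
f_true = rng.normal(size=3 * n_taxels)
s = A @ f_true                                # synthetic stress readings

# Least-squares reconstruction; with fewer readings than unknowns this is
# underdetermined, so lstsq returns the minimum-norm force solution
f_est, *_ = np.linalg.lstsq(A, s, rcond=None)
forces = f_est.reshape(n_taxels, 3)           # one (fx, fy, fz) vector per taxel
```

On the FPGA, the matrix products inside this solve are the operations the paper parallelizes per taxel.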

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 191
875 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are spent molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. Spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two counter-current stages at A/O = 1:1 with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesised MoO3. EDX analysis of MoO3 shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesised MoO3 can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.
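The two-stage counter-current figures can be cross-checked with a Kremser-type stage calculation. The distribution ratio below is back-calculated from the reported ~85% single-stage extraction at A/O = 1:1 and is an illustrative assumption, not a measured value from the paper:

```python
# Kremser-type estimate of counter-current extraction efficiency.
def countercurrent_extraction(D, n_stages, phase_ratio_OA=1.0):
    """Fraction of metal extracted after n counter-current stages,
    given distribution ratio D and organic/aqueous phase ratio."""
    eps = D * phase_ratio_OA  # extraction factor
    if eps == 1.0:
        return n_stages / (n_stages + 1)
    return (eps ** (n_stages + 1) - eps) / (eps ** (n_stages + 1) - 1)

# D inferred from ~85% single-stage extraction of Mo at A/O = 1:1
D_mo = 0.85 / 0.15
two_stage = countercurrent_extraction(D_mo, 2)
print(two_stage)  # ~0.97, broadly consistent with the reported 95.4% in two stages
```

The same function with n_stages = 1 reproduces the 85% single-stage figure, which is a useful sanity check on the inferred D.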

Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery

Procedia PDF Downloads 265
874 Inertial Particle Focusing Dynamics in Trapezoid Straight Microchannels: Application to Continuous Particle Filtration

Authors: Reza Moloudi, Steve Oh, Charles Chun Yang, Majid Ebrahimi Warkiani, May Win Naing

Abstract:

Inertial microfluidics has emerged recently as a promising tool for high-throughput manipulation of particles and cells for a wide range of flow cytometric tasks, including cell separation/filtration, cell counting, and mechanical phenotyping. Inertial focusing is profoundly reliant on the cross-sectional shape of the channel, which impacts not only the shear field but also the wall-effect lift force near the wall region. Despite comprehensive experiments and numerical analyses of the lift forces in rectangular and non-rectangular microchannels (half-circular and triangular cross-sections), all of which possess planes of symmetry, less effort has been devoted to the flow field structure of trapezoidal straight microchannels and its effects on inertial focusing. A rectilinear channel with a trapezoidal cross-section breaks all planes of symmetry. In this study, particle focusing dynamics inside trapezoid straight microchannels was first studied systematically for a broad range of channel Reynolds numbers (20 < Re < 800). The altered axial velocity profile, and consequently the new shear force arrangement, led to a cross-lateral shift of the equilibrium positions toward the longer side wall when the rectangular straight channel was changed to a trapezoid; however, as the channel Reynolds number increased further (Re > 50), the main lateral focusing position moved back toward the middle and the shorter side wall, depending on the particle clogging ratio (K = a/Hmin, where a is the particle size), the channel aspect ratio (AR = W/Hmin, where W is the channel width and Hmin is the smaller channel height), and the slope of the slanted wall. Increasing the channel aspect ratio (AR) from 2 to 4 and the slope of the slanted wall up to Tan(α) ≈ 0.4 (Tan(α) = (Hlonger-sidewall − Hshorter-sidewall)/W) shifted the off-center lateral focusing position away from the middle of the channel cross-section by up to ~20 percent of the channel width.
It was found that the focusing point was spoiled near the slanted wall due to the asymmetry; particles mainly focused near the bottom wall or fluctuated between the channel center and the bottom wall, depending on the slanted wall and Re (Re < 100, channel aspect ratio 4:1). Finally, as a proof of principle, a trapezoidal straight microchannel with a bifurcation was designed and utilized for continuous filtration of a broader range of particle clogging ratios (0.3 < K < 1), exiting through the longer-wall outlet with ~99% efficiency (Re < 100), in comparison to rectangular straight microchannels (W > H, 0.3 ≤ K < 0.5).
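The dimensionless groups that govern the focusing behaviour can be computed directly from the channel geometry. The channel and particle dimensions below are illustrative assumptions, chosen only to fall inside the ranges explored in the paper (0.3 < K < 1, AR of 2 to 4, Tan(α) up to 0.4):

```python
# Geometric parameters used in the study, for a trapezoidal straight channel.
def trapezoid_params(a, W, H_short, H_long):
    """a: particle size; W: channel width; H_short/H_long: wall heights (m)."""
    K = a / H_short                 # particle clogging ratio, K = a / Hmin
    AR = W / H_short                # aspect ratio, AR = W / Hmin
    slant = (H_long - H_short) / W  # slope of the slanted wall, Tan(alpha)
    return K, AR, slant

# Illustrative dimensions: 30 um particle, 400 um wide channel,
# 100 um short wall, 180 um long wall
K, AR, slant = trapezoid_params(a=30e-6, W=400e-6, H_short=100e-6, H_long=180e-6)
print(K, AR, slant)  # approx. 0.3, 4.0, 0.2
```

Sweeping these three parameters is essentially what the study does experimentally to map where the focusing position sits across Re.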

Keywords: cell/particle sorting, filtration, inertial microfluidics, straight microchannel, trapezoid

Procedia PDF Downloads 218
873 Working From Home: On the Relationship Between Place Attachment to Work Place, Extraversion and Segmentation Preference to Burnout

Authors: Diamant Irene, Shklarnik Batya

Abstract:

In addition to its widespread effects on health and economic issues, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the complete or partial work-from-home trend, adopted as part of social distancing. In fact, these changes accelerated an existing tendency toward work flexibility already underway before the pandemic. Technology and advanced means of communication led to a re-assessment of the "place of work" as a physical space in which work takes place. Today workers can remotely carry out meetings, manage projects, and work in groups, and several studies indicate that this type of work has no adverse effect on productivity. However, from the worker's perspective, despite numerous advantages associated with working from home, such as convenience, flexibility, and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment, and the erosion of home-work boundaries, all of which are risk factors for quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll's Conservation of Resources theory, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, and we examined psychological characteristics and their connection to the formation of burnout.
Based on the conceptualization of Place Attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by Place Attachment to the workplace would suffer more from burnout when working from home. We also assumed that extroverted individuals, characterized by the need for social interaction at the workplace, and individuals with a segmentation preference – a need for separation between different life domains – would suffer more from burnout, especially fully remote workers relative to partially remote workers. 194 workers aged 19-53, from different sectors, of whom 111 worked fully from home and 83 worked partially from home, were tested using an online questionnaire distributed through social media. The results of the study supported our assumptions. The repercussions of these findings for future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.

Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19

Procedia PDF Downloads 188
872 The Correspondence between Self-regulated Learning, Learning Efficiency and Frequency of ICT Use

Authors: Maria David, Tunde A. Tasko, Katalin Hejja-Nagy, Laszlo Dorner

Abstract:

The authors have been engaged in research on learning since 1998. Recently, the focus of our interest is how the prevalent use of information and communication technology (ICT) influences students' learning abilities, skills of self-regulated learning, and learning efficiency. Nowadays, there are three dominant theories about the psychological effects of ICT use. According to social optimists, modern ICT devices have a positive effect on thinking. According to social pessimists, this effect is rather negative. And, in the view of biological optimists, the change is obvious, but these changes can fit into mankind's evolved neurological system, as writing did long ago. The mentality of "digital natives" differs from that of older people. They process information coming from the outside world in another way, and different experiences result in different cerebral conformation. In this regard, researchers report both positive and negative effects of ICT use. According to several studies, it has a positive effect on cognitive skills, intelligence, school efficiency, development of self-regulated learning, and self-esteem regarding learning. It has also been shown that computers improve skills of visual intelligence such as spatial orientation, iconic skills, and visual attention. Among the negative effects of frequent ICT use, researchers mention the decrease of critical thinking, as a permanent flow of information does not give scope for deeper cognitive processing. The aims of our present study were to uncover developmental characteristics of self-regulated learning in different age groups and to study correlations between learning efficiency, the level of self-regulated learning, and frequency of computer use. Our subjects (N=1600) were primary and secondary school students and university students. We studied four age groups (ages 10, 14, 18, 22), with 400 subjects in each.
We used the following methods: the research team developed a questionnaire for measuring the level of self-regulated learning and a questionnaire for measuring ICT use, and we used documentary analysis to gain information about grade point average (GPA) and results of competence measures. Finally, we used computer tasks to measure cognitive abilities. The data is currently under analysis, but according to our preliminary results, frequent use of computers results in shorter response times in every age group. Our results show that a moderate extent of ICT use tends to increase reading competence and has a positive effect on students' abilities, though it showed no relationship with school marks (GPA). As time passes, GPA worsens as the learning material becomes more and more difficult. This phenomenon draws attention to the fact that students are unable to switch from guided to independent learning, so it is important to consciously develop skills of self-regulated learning.

Keywords: digital natives, ICT, learning efficiency, reading competence, self-regulated learning

Procedia PDF Downloads 356
871 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches

Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang

Abstract:

Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding, whereas bilateral laminotomies are associated with the benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the two decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following conditions: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data was captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained, and a 10 Nm pure moment was applied on the top surface of the L1 vertebral body to drive the lumbar spine in flexion and extension. The experimental results showed that in flexion, the ROMs (± standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomies, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively.
No statistical significance was observed among these three groups (P > 0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. In L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings: no significant differences were found at L4–L5 in either flexion or extension for any group, and only 0.02 and 0.04 degrees of variation were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit stability similar to that of unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrated this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.
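As a rough illustration of why extension, but not flexion, reached significance at L3–L4, a two-sample t statistic can be computed from the reported means and standard deviations (n = 3 specimens). This is a sketch from the summary statistics, not the authors' statistical procedure:

```python
import math

def t_from_summary(m1, s1, m2, s2, n=3):
    """Two-sample t statistic from group means and SDs (equal n per group)."""
    se = math.sqrt(s1 ** 2 / n + s2 ** 2 / n)
    return (m1 - m2) / se

# L3-L4, intact vs bilateral laminotomies, values from the reported ROMs
t_flex = t_from_summary(1.35, 0.23, 1.66, 0.07)  # flexion: modest |t|
t_ext = t_from_summary(0.89, 0.16, 1.73, 0.13)   # extension: much larger |t|
print(t_flex, t_ext)
```

With only three specimens the degrees of freedom are very small, so the modest flexion statistic is consistent with the reported P > 0.05 while the much larger extension statistic is consistent with significance.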

Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis

Procedia PDF Downloads 396
870 The Audiovisual Media as a Metacritical Ludicity Gesture in the Musical-Performatic and Scenic Works of Caetano Veloso and David Bowie

Authors: Paulo Da Silva Quadros

Abstract:

This work aims to establish comparative parameters between the artistic production of two exponents of the contemporary popular culture scene: Caetano Veloso (Brazil) and David Bowie (England). Both Caetano Veloso and David Bowie were pioneers in establishing an aesthetic game between various artistic expressions at the service of the musical-visual scene, that is, conceptual interconnections between several forms of aesthetic process, such as fine arts, theatre, cinema, poetry, and literature. There are also correlations in their expressive attitudes toward art, especially regarding the dialogue between the fields of art and politics (concern for human rights, human dignity, racial issues, tolerance, gender issues, and sexuality, among others); the constant tension and cunning game between the market, free expression, and critical sense; and the sophisticated, playful mechanisms of metalanguage and aesthetic metacritique. In fact, the two nearly collaborated in the 1970s, when Caetano was in exile in England and both had the same music producer, who tried to bring them together after noticing similar aesthetic qualities in their work, an affinity later glimpsed by some music critics. Among the most influential issues in Caetano's and Bowie's game of artistic-aesthetic expression are, for example, the ideas advocated by the sensation of strangeness (Albert Camus), art as transcendence (Friedrich Nietzsche), and the deconstruction, reconstruction, and auratic reconfiguration of artistic signs (Walter Benjamin and Andy Warhol). To deepen the theoretical discussion, the following authors will be used as supporting interpretative references: Hans-Georg Gadamer, Immanuel Kant, Friedrich Schiller, and Johan Huizinga.
In addition to the aesthetic meanings of the Ars Ludens characteristics of the two artists, the following supporting references will also be added: the question of technique (Martin Heidegger), the logic of sense (Gilles Deleuze), art as an event and the sense of the gesture of art (Maria Teresa Cruz), the society of the spectacle (Guy Debord), Verarbeitung and Durcharbeitung (Sigmund Freud), and the poetics of interpretation and the sign of relation (Cremilda Medina). The purpose of these interpretative references is to understand, from a cultural reading perspective (cultural semiology), some significant elements in the dynamics of the aesthetic and media interconnections of both artists, which made them some of the most influential interlocutors in contemporary musical-aesthetic thought, understood as a playful, vivid experience of life and art.

Keywords: Caetano Veloso, David Bowie, music aesthetics, symbolic playfulness, cultural reading

Procedia PDF Downloads 163
869 Keeping Education Non-Confessional While Teaching Children about Religion

Authors: Tünde Puskás, Anita Andersson

Abstract:

This study is part of a research project on whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how a Swedish preschool with a religious profile balances between keeping the education non-confessional and at the same time teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise in the intersections of the formalized ideology of non-confessionalism, prescribed in policy documents, and common-sense understandings of what is included in Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper is drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church.
The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. We can also conclude from the analysis that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters of the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them to indirectly teach about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.

Keywords: non-confessional education, preschool, religion, tradition

Procedia PDF Downloads 158
868 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model – which is better than that of latent Dirichlet allocation – is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. Hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of product categories.
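A minimal version of the first modeling stage, a restricted Boltzmann machine fit to binary basket data and scored on a holdout split, can be sketched with scikit-learn's BernoulliRBM. The data here is synthetic, and score_samples returns a pseudo-likelihood, a tractable proxy rather than the exact holdout log likelihood used in the paper:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Synthetic sparse 0/1 purchase matrix; the paper used 9,835 real baskets
# over 169 product categories, split in half for estimation and holdout
rng = np.random.default_rng(0)
baskets = (rng.random((500, 169)) < 0.05).astype(float)
train, holdout = baskets[:250], baskets[250:]

# One visible layer (purchases) and one hidden layer, no intra-layer links
rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(train)

# Per-basket pseudo-likelihood on the holdout half, summed for comparison
holdout_score = rbm.score_samples(holdout).sum()

# The hidden activations would feed the second-layer RBM of a deep belief net
hidden = rbm.transform(holdout)
```

Stacking a second BernoulliRBM on `hidden` would reproduce the greedy layer-wise construction of the three-layer deep belief net described above.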

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 194